About Me
I'm an assistant professor at
the Department of Computer Science, ETH Zurich,
which I joined in the fall of 2019.
My research is focused on fast algorithms and their applications to machine learning.
I'm not actively looking for PhD students, but I do consider applicants to our Direct Doctorate Program (joint Master's and PhD).
I spent 2018 and the spring of 2019 as a postdoc in the Theory of
Computation Group at
Harvard working with Jelani Nelson.
Before that, I spent the fall semester 2017 as a
research fellow at the
Simons Institute at UC Berkeley.
I finished my PhD at the Department
of Computer Science at Yale
in the summer of 2017, where I was advised by Daniel A.
Spielman.
Before attending Yale, I received a B.A. in Computer Science from the University of Cambridge in 2011.
My Curriculum
Vitae (March 2019).
Papers

Four Deviations Suffice for Rank 1 Matrices

with Kyle Luh and Zhao Song.

In Advances in Mathematics, Volume 375, 2 December 2020.

We prove a matrix discrepancy bound that strengthens the famous
Kadison-Singer result of Marcus, Spielman, and Srivastava. Consider any
independent scalar random variables \(ξ_1, \ldots, ξ_n\) with finite support,
e.g. \(\{ \pm 1 \}\)- or \(\{ 0,1 \}\)-valued random variables, or some combination
thereof. Let \(u_1, \dots, u_n \in \mathbb{C}^m\) and \( σ^2 = \left\|
\sum_{i=1}^n \text{Var}[ ξ_i ] (u_i u_i^{*})^2 \right\|.\) Then there exists
a choice of outcomes \(\varepsilon_1,\ldots,\varepsilon_n\) in the support of
\(ξ_1, \ldots, ξ_n\) such that \( \left\| \sum_{i=1}^n \mathbb{E} [ ξ_i] u_i
u_i^* - \sum_{i=1}^n \varepsilon_i u_i u_i^* \right\| \leq 4 σ.\) A
simple consequence of our result is an improvement of a Lyapunov-type theorem
of Akemann and Weaver.
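The bound above is concrete enough to check numerically. The sketch below (my own illustration, not code from the paper) takes uniform \(\pm 1\) variables \(ξ_i\), so \(\mathbb{E}[ξ_i] = 0\) and \(\text{Var}[ξ_i] = 1\), and brute-forces all sign outcomes to confirm that some outcome lands within \(4σ\):

```python
import itertools
import math
import random

def outer(u):
    # u u^T for a real 2-vector
    return [[u[0]*u[0], u[0]*u[1]], [u[1]*u[0], u[1]*u[1]]]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mat_mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def op_norm(A):
    # spectral norm of a symmetric 2x2 matrix via its two eigenvalues
    a, b, d = A[0][0], A[0][1], A[1][1]
    m, r = (a + d) / 2, math.hypot((a - d) / 2, b)
    return max(abs(m + r), abs(m - r))

random.seed(0)
n = 6
us = [[random.gauss(0, 1) for _ in range(2)] for _ in range(n)]

# xi_i uniform on {+1, -1}: E[xi_i] = 0 and Var[xi_i] = 1, so
# sigma^2 = || sum_i (u_i u_i^T)^2 ||
S = [[0.0, 0.0], [0.0, 0.0]]
for u in us:
    P = outer(u)
    S = mat_add(S, mat_mul(P, P))
sigma = math.sqrt(op_norm(S))

# since E[xi_i] = 0, the theorem promises an outcome eps in {+1,-1}^n
# with || sum_i eps_i u_i u_i^T || <= 4 sigma; brute-force all outcomes
best = min(
    op_norm([[sum(e * outer(u)[i][j] for e, u in zip(eps, us))
              for j in range(2)] for i in range(2)])
    for eps in itertools.product([-1, 1], repeat=n)
)
print(best <= 4 * sigma)  # the theorem guarantees some outcome within 4*sigma
```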

Packing LPs are Hard to Solve Accurately, Assuming Linear Equations are Hard

with Di Wang and Peng Zhang.

In SODA 2020.

We study the complexity of approximately solving packing linear programs. In the Real
RAM model, it is known how to solve packing LPs with N nonzeros in time
Õ(N/ϵ). We investigate whether the ϵ dependence in the running time
can be improved.
Our first main result relates the difficulty of this problem to
hardness assumptions for solving dense linear equations. We show that, in the Real RAM model,
unless linear equations in n × n matrices with condition number O(n^{10})
can be solved to ϵ accuracy faster than Õ(n^{2.01} log(1/ϵ)), no algorithm (1−ϵ)-approximately
solves O(n)×O(n) packing LPs (where N = O(n^{2}))
in time Õ(n^{2}ϵ^{−0.0003}). It would be surprising to solve linear equations
in the Real RAM model this fast, as we currently cannot solve them faster than Õ(n^{ω}), where
ω denotes the exponent in the running time for matrix multiplication in the Real RAM model (and equivalently matrix inversion).
The current best bound on this exponent is roughly ω ≤ 2.372. Note, however, that a fast solver for linear equations does not
directly imply faster matrix multiplication. But, our reduction shows that if fast and accurate packing LP solvers exist, then either
linear equations can be solved much faster than matrix multiplication or the matrix multiplication constant is very close to 2.
Instantiating the same reduction with different parameters, we show that unless linear equations in matrices with condition number
O(n^{1.5}) can be solved to ϵ accuracy faster than Õ(n^{2.372} log(1/ϵ)), no
algorithm (1 − ϵ)-approximately solves packing LPs in time Õ(n^{2}ϵ^{−0.067}).
Thus even smaller improvements in the exponent for ϵ in the running time of packing LP solvers also imply improvements in the current
state-of-the-art for solving linear equations.
Our second main result relates the difficulty of approximately solving packing
linear programs to hardness assumptions for solving sparse linear equations: In the Real RAM model, unless well-conditioned sparse systems
of linear equations can be solved faster than Õ((number of nonzeros of the matrix) · √(condition number)),
no algorithm (1 − ϵ)-approximately solves packing LPs with N nonzeros in time Õ(Nϵ^{−0.165}). This
running time of Õ((number of nonzeros of the matrix) · √(condition number))
is obtained by the classical Conjugate Gradient algorithm under a standard analysis. Our reduction implies that if sufficiently good packing LP solvers exist, then this long-standing best-known bound on the running time for solving well-conditioned systems of linear equations is suboptimal. While we prove results in the Real RAM model, our condition number assumptions ensure that our results can be translated to fixed-point arithmetic with (log n)^{O(1)} bits per number.
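As a reference point for the Conjugate Gradient bound mentioned above, here is a minimal textbook CG implementation (my own illustration, not code from the paper). Under the standard analysis it reaches ϵ accuracy in O(√κ · log(1/ϵ)) iterations; on this tiny diagonal system it terminates after at most n steps:

```python
def cg(A_mul, b, iters):
    # plain conjugate gradient for A x = b (A symmetric positive definite),
    # starting from x = 0
    n = len(b)
    x = [0.0] * n
    r = b[:]          # residual b - A x
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = A_mul(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < 1e-30:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# a diagonal SPD system with condition number kappa = 100
diag = [1.0, 4.0, 25.0, 100.0]
A_mul = lambda v: [d * vi for d, vi in zip(diag, v)]
b = [1.0, 1.0, 1.0, 1.0]
x = cg(A_mul, b, iters=40)
err = max(abs(xi - 1.0 / d) for xi, d in zip(x, diag))
print(err < 1e-8)  # exact solution is x_i = 1/diag_i
```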

Flows in Almost Linear Time via Adaptive Preconditioning

with Richard Peng, Sushant
Sachdeva, and Di Wang.

In STOC 2019.

We present algorithms for solving a large class of flow and regression
problems on unit-weighted graphs to \((1 + 1 / poly(n))\) accuracy in
almost-linear time. These problems include \(\ell_p\)-norm minimizing flow for
large \(p\) (\(p \in [ω(1), o(\log^{2/3} n) ]\)), and their duals,
\(\ell_p\)-norm semi-supervised learning for \(p\) close to \(1\).
As \(p\) tends to infinity, \(\ell_p\)-norm flow and its dual tend to max-flow
and min-cut respectively. Using this connection and our algorithms, we give an
alternate approach for approximating undirected max-flow, and the first
almost-linear time approximations of discretizations of total variation
minimization objectives.
This algorithm demonstrates that many tools previously viewed as limited to
linear systems are in fact applicable to a much wider range of convex
objectives. It is based on the routing-based solver for Laplacian linear
systems by Spielman and Teng (STOC '04, SIMAX '14), but requires several new
tools: adaptive nonlinear preconditioning, tree-routing-based
ultra-sparsification for mixed \(\ell_2\) and \(\ell_p\) norm objectives, and
decomposing graphs into uniform expanders.

Iterative Refinement for \(\ell_p\)-norm Regression

with Deeksha Adil, Richard Peng, and Sushant Sachdeva.

In SODA 2019.

We give improved algorithms for the \(\ell_{p}\)-regression problem, \(\min_{x}
\|x\|_{p}\) such that \(A x=b,\) for all \(p \in (1,2) \cup (2,\infty).\) Our
algorithms obtain a high-accuracy solution in \(\tilde{O}_{p}(m^{\frac{|p-2|}{2p
+ |p-2|}}) \le \tilde{O}_{p}(m^{\frac{1}{3}})\) iterations, where each iteration
requires solving an \(m \times m\) linear system, \(m\) being the dimension of the
ambient space.
By maintaining an approximate inverse of the linear systems that we solve in
each iteration, we give algorithms for solving \(\ell_{p}\)-regression to \(1 /
\text{poly}(n)\) accuracy that run in time \(\tilde{O}_p(m^{\max\{ω,
7/3\}}),\) where \(ω\) is the matrix multiplication constant. For the current
best value of \(ω > 2.37\), we can thus solve \(\ell_{p}\)-regression as fast
as \(\ell_{2}\)-regression, for all constant \(p\) bounded away from \(1.\)
Our algorithms can be combined with fast graph Laplacian linear equation
solvers to give minimum \(\ell_{p}\)-norm flow / voltage solutions to \(1 /
\text{poly}(n)\) accuracy on an undirected graph with \(m\) edges in
\( \tilde{O}_{p}(m^{1 + \frac{|p-2|}{2p + |p-2|}}) \le
\tilde{O}_{p}(m^{\frac{4}{3}}) \) time.
For sparse graphs and for matrices with similar dimensions, our iteration
counts and running times improve on the \(p\)-norm regression algorithm of
[Bubeck-Cohen-Lee-Li STOC '18] and on general-purpose convex optimization
algorithms. At the core of our algorithms is an iterative refinement scheme for
\(\ell_{p}\)-norms, using the smoothed \(\ell_{p}\)-norms introduced in the work of
Bubeck et al. Given an initial solution, we construct a problem that seeks to
minimize a quadratically-smoothed \(\ell_{p}\) norm over a subspace, such that a
crude solution to this problem allows us to improve the initial solution by a
constant factor, leading to algorithms with fast convergence.
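The "crude solution improves the answer by a constant factor" mechanism is the same one behind classical iterative refinement for linear systems, sketched below in its simplest \(\ell_2\) form (a toy illustration with my own choice of matrix, not the paper's \(\ell_p\) scheme): a rough approximate inverse applied to the residual contracts the error geometrically, so \(O(\log(1/ε))\) rounds reach accuracy \(ε\):

```python
def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[4.0, 1.0], [1.0, 3.0]]
M = [[0.25, 0.0], [0.0, 1.0 / 3.0]]  # crude approximate inverse: 1/diagonal
b = [1.0, 2.0]

# iterative refinement: each round applies the crude solver to the residual,
# shrinking the error by a constant factor (spectral radius of I - M A < 1)
x = [0.0, 0.0]
for _ in range(60):
    r = [bi - yi for bi, yi in zip(b, mat_vec(A, x))]  # residual b - A x
    dx = mat_vec(M, r)                                  # crude correction
    x = [xi + di for xi, di in zip(x, dx)]

residual = max(abs(bi - yi) for bi, yi in zip(b, mat_vec(A, x)))
print(residual < 1e-10)
```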

A Matrix Chernoff Bound for Strongly Rayleigh Distributions and Spectral Sparsifiers

with Zhao Song.

In FOCS 2018.

Strongly Rayleigh distributions are a class of negatively dependent
distributions of binary-valued random variables [Borcea, Branden, Liggett JAMS
09]. Recently, these distributions have played a crucial role in the analysis
of algorithms for fundamental graph problems, e.g. the Traveling Salesman Problem
[Gharan, Saberi, Singh FOCS 11]. We prove a new matrix Chernoff bound for
Strongly Rayleigh distributions.
As an immediate application, we show that adding together the Laplacians of
\(ε^{-2} \log^2 n\) random spanning trees gives a \((1\pm ε)\)
spectral sparsifier of the graph Laplacian with high probability. Thus, we
positively answer an open question posed in [Batson, Spielman, Srivastava, Teng
JACM 13]. Our number of spanning trees for a spectral sparsifier matches the
number of spanning trees required to obtain a cut sparsifier in [Fung,
Hariharan, Harvey, Panigrahi STOC 11]. The previous best result was obtained by naively
applying a classical matrix Chernoff bound, which requires \(ε^{-2} n \log
n\) spanning trees. For the tree averaging procedure to agree with the original
graph Laplacian in expectation, each edge of the tree should be reweighted by
the inverse of the edge's leverage score in the original graph. We also show that
when using this reweighting of the edges, the Laplacian of a single random tree
is bounded above in the PSD order by the original graph Laplacian times a
factor \(\log n\) with high probability, i.e. \(L_T \preceq O(\log n) L_G\).
We show a lower bound that almost matches our last result: in
some graphs, with high probability, the random spanning tree is \(\it{not}\)
bounded above in the spectral order by \(\frac{\log n}{\log\log n}\) times the
original graph Laplacian. We also show a lower bound that \(ε^{-2}
\log n\) spanning trees are necessary to get a \((1\pm ε)\) spectral
sparsifier.
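The leverage-score reweighting that makes the averaged tree unbiased can be verified exactly on the smallest example. In the triangle \(K_3\) every edge has leverage score \(2/3\), and averaging the reweighted Laplacians of its three spanning trees recovers the graph Laplacian exactly (a hand-rolled sketch; by symmetry the uniform and weighted spanning-tree distributions coincide here):

```python
from fractions import Fraction as F
from itertools import combinations

edges = [(0, 1), (1, 2), (0, 2)]   # the unweighted triangle K3

def laplacian(wedges, n=3):
    # Laplacian of a weighted edge list [((u, v), w), ...]
    L = [[F(0)] * n for _ in range(n)]
    for (u, v), w in wedges:
        L[u][u] += w; L[v][v] += w
        L[u][v] -= w; L[v][u] -= w
    return L

# in K3 every edge has leverage score 2/3, so tree edges are reweighted by 3/2
lev = F(2, 3)
trees = list(combinations(edges, 2))  # the 3 spanning trees, each equally likely

# average the reweighted tree Laplacians in exact arithmetic
avg = [[F(0)] * 3 for _ in range(3)]
for tree in trees:
    LT = laplacian([(e, 1 / lev) for e in tree])
    for i in range(3):
        for j in range(3):
            avg[i][j] += LT[i][j] / len(trees)

print(avg == laplacian([(e, F(1)) for e in edges]))  # True: E[L_T] = L_G
```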

Solving Directed Laplacians in Nearly Linear Time
through Sparse LU Factorizations

with Michael B. Cohen, Jonathan Kelner, John Peebles,
Richard Peng, Anup B. Rao, Aaron Sidford.

In FOCS 2018.

We show how to solve directed Laplacian systems in nearly-linear time. Given
a linear system in an \(n \times n\) Eulerian directed Laplacian with \(m\) nonzero
entries, we show how to compute an \(ε\)-approximate solution in time \(O(m
\log^{O(1)} (n) \log (1/ε))\). Through reductions from [Cohen et al.
FOCS '16], this gives the first nearly-linear time algorithms for computing
\(ε\)-approximate solutions to row or column diagonally dominant linear
systems (including arbitrary directed Laplacians) and computing
\(ε\)-approximations to various properties of random walks on directed
graphs, including stationary distributions, personalized PageRank vectors,
hitting times, and escape probabilities. These bounds improve upon the recent
almost-linear algorithms of [Cohen et al. STOC '17], which gave an algorithm to
solve Eulerian Laplacian systems in time \(O((m+n2^{O(\sqrt{\log n \log \log
n})})\log^{O(1)}(n ε^{-1}))\).
To achieve our results, we provide a structural result that we believe is of
independent interest. We show that Laplacians of all strongly connected
directed graphs have sparse approximate LU-factorizations. That is, for every
such directed Laplacian \({\mathbf{L}}\), there is a lower triangular matrix
\(\boldsymbol{\mathit{\mathfrak{L}}}\) and an upper triangular matrix
\(\boldsymbol{\mathit{\mathfrak{U}}}\), each with at most \(\tilde{O}(n)\)
nonzero entries, such that their product \(\boldsymbol{\mathit{\mathfrak{L}}}
\boldsymbol{\mathit{\mathfrak{U}}}\) spectrally approximates \({\mathbf{L}}\)
in an appropriate norm. This claim can be viewed as an analogue of recent work
on sparse Cholesky factorizations of Laplacians of undirected graphs. We show
how to construct such factorizations in nearly-linear time and prove that, once
constructed, they yield nearly-linear time algorithms for solving directed
Laplacian systems.

Incomplete Nested Dissection

with Richard Peng, Robert
Schwieterman, and Peng Zhang.

In STOC 2018.

We present an asymptotically faster algorithm for solving linear systems in
well-structured 3-dimensional truss stiffness matrices. These linear systems
arise from linear elasticity problems, and can be viewed as extensions of graph
Laplacians into higher dimensions. Faster solvers for the 2D variants of such
systems have been studied using generalizations of tools for solving graph
Laplacians [Daitch-Spielman CSC '07, Shklarski-Toledo SIMAX '08].
Given a 3-dimensional truss over \(n\) vertices which is formed from a union of
\(k\) convex structures (tetrahedral meshes) with bounded aspect ratios, whose
individual tetrahedrons are also in some sense well-conditioned, our algorithm
solves a linear system in the associated stiffness matrix up to accuracy
\(ε\) in time \(O(k^{1/3} n^{5/3} \log (1 / ε))\). This
asymptotically improves on the \(O(n^2)\) running time of Nested Dissection for all
\(k \ll n\).
We also give a result that improves on Nested Dissection even when we allow
any aspect ratio for each of the \(k\) convex structures (but we still require
well-conditioned individual tetrahedrons). In this regime, we improve on Nested
Dissection for \(k \ll n^{1/44}\).
The key idea of our algorithm is to combine nested dissection and support
theory. Both of these techniques for solving linear systems are well studied,
but usually separately. Our algorithm decomposes a 3-dimensional truss into
separate and balanced regions with small boundaries. We then bound the spectrum
of each such region separately, and utilize such bounds to obtain improved
algorithms by preconditioning with partial states of separator-based Gaussian
elimination.

Approximate Gaussian Elimination

Rasmus Kyng.

My PhD dissertation, 2017.


Hardness Results for Structured Linear Systems

with Peng Zhang.

In FOCS 2017. Won the Machtey Award for Best Student
Paper.

We show that if the nearly-linear time solvers for Laplacian matrices and
their generalizations can be extended to solve just slightly larger families of
linear systems, then they can be used to quickly solve all systems of linear
equations over the reals. This result can be viewed either positively or
negatively: either we will develop nearly-linear time algorithms for solving
all systems of linear equations over the reals, or progress on the families we
can solve in nearly-linear time will soon halt.

Sampling Random Spanning Trees Faster than Matrix Multiplication

with David Durfee, John Peebles, Anup B. Rao, and Sushant Sachdeva.

In STOC 2017.

We present an algorithm that, with high probability, generates a random
spanning tree from an edgeweighted undirected graph in
\(\tilde{O}(n^{4/3}m^{1/2}+n^{2})\) time (The \(\tilde{O}(\cdot)\) notation hides
\(\operatorname{polylog}(n)\) factors). The tree is sampled from a distribution
where the probability of each tree is proportional to the product of its edge
weights. This improves upon the previous best algorithm due to Colbourn et al.
that runs in matrix multiplication time, \(O(n^ω)\). For the special case of
unweighted graphs, this improves upon the best previously known running time of
\(\tilde{O}(\min\{n^ω,m\sqrt{n},m^{4/3}\})\) for \(m \gg n^{5/3}\) (Colbourn
et al. '96, KelnerMadry '09, Madry et al. '15).
The effective resistance metric is essential to our algorithm, as in the work
of Madry et al., but we eschew determinant-based and random-walk-based
techniques used by previous algorithms. Instead, our algorithm is based on
Gaussian elimination, and the fact that effective resistance is preserved in
the graph resulting from eliminating a subset of vertices (called a Schur
complement). As part of our algorithm, we show how to compute
\(ε\)-approximate effective resistances for a set \(S\) of vertex pairs via
approximate Schur complements in \(\tilde{O}(m+(n + |S|)ε^{-2})\) time,
without using the Johnson-Lindenstrauss lemma, which requires \(\tilde{O}(
\min\{(m + |S|)ε^{-2}, m+nε^{-4} +|S|ε^{-2}\})\) time. We
combine this approximation procedure with an error correction procedure for
handling edges where our estimate isn't sufficiently accurate.
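The fact that Schur complements preserve effective resistance, which the algorithm relies on, reduces on a 3-vertex path to the series-resistance formula. The sketch below (my own illustration, not code from the paper) eliminates the middle vertex of a weighted path and checks the resulting edge exactly:

```python
from fractions import Fraction as F

# path a - b - c with edge conductances w1 (a-b) and w2 (b-c);
# Laplacian rows/columns indexed (a, b, c)
w1, w2 = F(2), F(3)
L = [[w1,  -w1,     F(0)],
     [-w1, w1 + w2, -w2],
     [F(0), -w2,    w2]]

# eliminate the middle vertex b (index 1): Schur complement onto {a, c}
keep = [0, 2]
piv = L[1][1]
S = [[L[i][j] - L[i][1] * L[1][j] / piv for j in keep] for i in keep]

# the result is the Laplacian of a single edge a - c whose conductance is
# the series formula w1*w2/(w1+w2), i.e. R_eff(a, c) = 1/w1 + 1/w2 is preserved
w_ac = w1 * w2 / (w1 + w2)
print(S == [[w_ac, -w_ac], [-w_ac, w_ac]])  # True
```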

A Framework for Analyzing Resparsification Algorithms

with Jakub Pachocki, Richard Peng, and Sushant Sachdeva.

In SODA 2017.

A spectral sparsifier of a graph \(G\) is a sparser graph \(H\) that
approximately preserves the quadratic form of \(G\), i.e. for all vectors \(x\),
\(x^T L_G x \approx x^T L_H x\), where \(L_G\) and \(L_H\) denote the respective
graph Laplacians. Spectral sparsifiers generalize cut sparsifiers, and have
found many applications in designing graph algorithms. In recent years, there
has been interest in computing spectral sparsifiers in semi-streaming and
dynamic settings. Natural algorithms in these settings often involve repeated
sparsification of a graph, and accumulation of errors across these steps. We
present a framework for analyzing algorithms that perform repeated
sparsifications that only incur error corresponding to a single sparsification
step, leading to better results for many resparsification-based algorithms. As
an application, we show how to maintain a spectral sparsifier in the
semi-streaming setting: We present a simple algorithm that, for a graph \(G\) on
\(n\) vertices and \(m\) edges, computes a spectral sparsifier of \(G\) with \(O(n
\log n)\) edges in a single pass over \(G\), using only \(O(n \log n)\) space, and
\(O(m \log^2 n)\) total time. This improves on the previous best semi-streaming
algorithms for both spectral and cut sparsifiers by a factor of \(\log{n}\) in
both space and runtime. The algorithm extends to semi-streaming row sampling
for general PSD matrices. We also use our framework to combine a spectral
sparsification algorithm by Koutis with improved spanner constructions to give
a parallel algorithm for constructing \(O(n\log^2{n}\log\log{n})\) sized spectral
sparsifiers in \(O(m\log^2{n}\log\log{n})\) time. This is the best known
combinatorial graph sparsification algorithm. The size of the sparsifiers is
only a factor \(\log{n}\log\log{n}\) larger than those produced by numerical
routines.

Approximate Gaussian Elimination for Laplacians: Fast, Sparse, and Simple

with
Sushant Sachdeva.

In FOCS 2016.

We show how to perform sparse approximate Gaussian elimination for Laplacian
matrices. We present a simple, nearly linear time algorithm that approximates a
Laplacian by a matrix with a sparse Cholesky factorization, the version of
Gaussian elimination for symmetric matrices. This is the first nearly linear
time solver for Laplacian systems that is based purely on random sampling, and
does not use any graph theoretic constructions such as low-stretch trees,
sparsifiers, or expanders. The crux of our analysis is a novel concentration
bound for matrix martingales where the differences are sums of conditionally
independent variables.
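For intuition on why the elimination must be sparsified: eliminating a vertex replaces it with a clique on its neighbours, so exact Cholesky densifies the matrix, and the algorithm instead keeps a sparse random sample of that clique. A small exact check of the clique fill-in (my own illustration) on a unit-weight star:

```python
from fractions import Fraction as F

# star graph: center 0 joined to leaves 1, 2, 3 by unit-weight edges
n = 4
L = [[F(0)] * n for _ in range(n)]
for v in (1, 2, 3):
    L[0][0] += 1; L[v][v] += 1
    L[0][v] -= 1; L[v][0] -= 1

# one step of Gaussian elimination on vertex 0: Schur complement onto the leaves
piv = L[0][0]
S = [[L[i][j] - L[i][0] * L[0][j] / piv for j in (1, 2, 3)] for i in (1, 2, 3)]

# the fill-in is a complete graph (a triangle) on the neighbours, weight 1/3
# per edge: exact elimination densifies, which sampling-based elimination avoids
t = F(1, 3)
expected = [[2 * t, -t, -t],
            [-t, 2 * t, -t],
            [-t, -t, 2 * t]]
print(S == expected)  # True
```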

Sparsified Cholesky and Multigrid Solvers for Connection Laplacians

with
Yin Tat Lee,
Richard Peng,
Sushant Sachdeva, and
Daniel A. Spielman.

In STOC 2016.

We introduce the sparsified Cholesky and sparsified multigrid algorithms for
solving systems of linear equations. These algorithms accelerate Gaussian
elimination by sparsifying the nonzero matrix entries created by the
elimination process. We use these new algorithms to derive the first nearly
linear time algorithms for solving systems of equations in connection
Laplacians, a generalization of Laplacian matrices that arise in many problems
in image and signal processing. We also prove that every connection Laplacian
has a linear sized approximate inverse. This is an LU factorization with a
linear number of nonzero entries that is a strong approximation of the original
matrix. Using such a factorization one can solve systems of equations in a
connection Laplacian in linear time. Such a factorization was unknown even for
ordinary graph Laplacians.

Fast, Provable Algorithms for Isotonic Regression in all \(\ell_p\)-norms

with
Anup B. Rao and
Sushant Sachdeva.

In NIPS 2015.

Given a directed acyclic graph \(G,\) and a set of values \(y\) on the vertices,
the Isotonic Regression of \(y\) is a vector \(x\) that respects the partial order
described by \(G,\) and minimizes \(\|x-y\|,\) for a specified norm. This paper
gives improved algorithms for computing the Isotonic Regression for all
weighted \(\ell_{p}\)-norms with rigorous performance guarantees. Our algorithms
are quite practical, and their variants can be implemented to run fast in
practice.
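For the special case of \(\ell_2\) isotonic regression on a path graph, the classical pool-adjacent-violators algorithm (a standard textbook method, not the paper's more general DAG algorithm) computes the exact fit in linear time:

```python
def isotonic_l2(y):
    # pool-adjacent-violators: least-squares isotonic fit on a path graph.
    # maintain blocks as [sum, count]; merge while block means decrease.
    blocks = []
    for v in y:
        blocks.append([v, 1])
        # previous mean > current mean  <=>  s_prev * c_cur > s_cur * c_prev
        while len(blocks) > 1 and \
                blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    # expand each block back to its mean value
    x = []
    for s, c in blocks:
        x.extend([s / c] * c)
    return x

print(isotonic_l2([1.0, 3.0, 2.0, 4.0]))  # [1.0, 2.5, 2.5, 4.0]
```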

Our implementation of the
least-squares Isotonic regression algorithm is available on
the Isotonic Github repository.

Algorithms for Lipschitz Learning on Graphs

with
Anup B. Rao,
Daniel A. Spielman, and
Sushant Sachdeva.

In COLT 2015.

We develop fast algorithms for solving regression problems on graphs where
one is given the value of a function at some vertices, and must find its
smoothest possible extension to all vertices. The extension we compute is the
absolutely minimal Lipschitz extension, and is the limit for large \(p\) of
\(p\)-Laplacian regularization. We present an algorithm that computes a minimal
Lipschitz extension in expected linear time, and an algorithm that computes an
absolutely minimal Lipschitz extension in expected time \(\widetilde{O} (m n)\).
The latter algorithm has variants that seem to run much faster in practice.
These extensions are particularly amenable to regularization: we can perform
\(l_{0}\)-regularization on the given values in polynomial time and
\(l_{1}\)-regularization on the initial function values and on graph edge weights
in time \(\widetilde{O} (m^{3/2})\).
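Absolutely minimal Lipschitz extensions are characterized as infinity-harmonic functions: each free vertex takes the average of its smallest and largest neighbour values. A naive fixed-point iteration (my own illustrative sketch, not the paper's fast algorithm) recovers linear interpolation on a path with fixed endpoints:

```python
def amle_path(values, fixed, iters=2000):
    # infinity-harmonic iteration on a path graph: every free vertex is
    # repeatedly set to the midpoint of its min and max neighbour values
    v = values[:]
    for _ in range(iters):
        for i in range(len(v)):
            if i not in fixed:
                nbrs = [v[j] for j in (i - 1, i + 1) if 0 <= j < len(v)]
                v[i] = (min(nbrs) + max(nbrs)) / 2
    return v

# endpoints pinned to 0 and 1; the extension converges to linear interpolation
v = amle_path([0.0, 0.0, 0.0, 0.0, 1.0], fixed={0, 4})
print(all(abs(vi - i / 4) < 1e-6 for i, vi in enumerate(v)))  # True
```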

Our implementations of the lex-minimization algorithms are available on
the YINSlex Github repository.

Solving SDD Linear Systems in Nearly \( m\log^{1/2}n \) Time

with
Michael B. Cohen,
Gary L. Miller,
Jakub W. Pachocki,
Richard Peng,
Anup B. Rao
and Shen Chen Xu.

In STOC 2014.

We show an algorithm for solving symmetric diagonally dominant (SDD) linear systems with m nonzero entries to a relative error of ε in O(m log^{1/2}n (log log n)^{c} log(1/ε)) time, for some constant c. Our approach follows the recursive preconditioning framework, which aims to reduce graphs to trees using iterative methods. We improve two key components of this framework: random sampling and tree embeddings. Both of these components are used in a variety of other algorithms, and our approach also extends to the dual problem of computing electrical flows.
We show that preconditioners constructed by random sampling can perform well without meeting the standard requirements of iterative methods. In the graph setting, this leads to ultra-sparsifiers that have optimal behavior in expectation.
The improved running time makes previous low-stretch embedding algorithms the running time bottleneck in this framework. In our analysis, we relax the requirements on these embeddings to embeddings into snowflake spaces. We then obtain a two-pass algorithm for constructing optimal embeddings into snowflake spaces that runs in O(m log log n) time. This algorithm is also readily parallelizable.
The paper Solving SDD Linear Systems in Nearly \( m\log^{1/2}n \) Time was a merged submission of the following two papers:

Preconditioning in Expectation

with
Michael B. Cohen,
Jakub W. Pachocki,
Richard Peng,
and Anup B. Rao.

We show that preconditioners constructed by random sampling can perform well
without meeting the standard requirements of iterative methods. When applied to
graph Laplacians, this leads to ultra-sparsifiers that in expectation behave as
the nearly-optimal ones given by [Kolla-Makarychev-Saberi-Teng STOC '10].
Combining this with the recursive preconditioning framework of [Spielman-Teng
STOC '04] and improved embedding algorithms leads to algorithms that solve
symmetric diagonally dominant linear systems and electrical flow problems in
expected time close to \(m\log^{1/2}n\).

Stretching Stretch

Michael B. Cohen,
Gary L. Miller,
Jakub W. Pachocki,
Richard Peng,
and Shen Chen Xu.

We give a generalized definition of stretch that simplifies the efficient
construction of low-stretch embeddings suitable for graph algorithms. The
generalization, based on discounting highly stretched edges by taking their
\(p\)th power for some \(0 < p < 1\), is directly related to the performance of
existing algorithms. This discounting of high-stretch edges allows us to treat
many classes of edges with coarser granularity. It leads to a two-pass approach
that combines bottom-up clustering and top-down decompositions to construct
these embeddings in \(\mathcal{O}(m\log\log{n})\) time. Our algorithm
parallelizes readily and can also produce generalizations of low-stretch
subgraphs.
Teaching
Recent and Upcoming Talks

EPFL Theory Seminar;
Lausanne,
March
2020.

A Numerical Analysis Approach to Convex Optimization
(Slides).


ICCOPT;
Berlin,
August
2019.

Optimization on Graphs
(Slides).


UT Austin Theory Seminar;
May
2019.

A Numerical Analysis Approach to Convex Optimization.


Harvard Theory of Computation Seminar;
February
2019.

A Numerical Analysis Approach to Convex Optimization.


Beyond Randomized Rounding and the Probabilistic
Method Workshop;
Simons Institute, Berkeley,
February
2019.

A Matrix Chernoff Bound for Strongly Rayleigh
Distributions and Spectral Sparsifiers from a few Random Spanning Trees
(Video),
(Slides).


SODA 2019;
San Diego,
January
2019.

Iterative Refinement for \(\ell_p\)-norm Regression.


Bridging Continuous and Discrete Optimization Reunion Workshop;
Simons Institute, Berkeley,
December
2018.

Iterative Refinement for \(\ell_p\)-norm Regression.


Caltech Theory Seminar;
November
2018.

Approximate Gaussian Elimination
(Slides).


Northwestern Quarterly Theory Workshop;
Northwestern,
November
2018.

Analysis Using Matrix Martingales
(Slides).


FOCS;
Paris,
October
2018.

A Matrix Chernoff Bound for Strongly Rayleigh
Distributions and Spectral Sparsifiers from a few Random Spanning Trees.


FOCS;
Paris,
October
2018.

Solving Directed Laplacians in Nearly Linear Time through Sparse LU Factorizations.


Laplacian 2.0 Workshop;
FOCS, Paris,
October
2018.

Analysis Using Matrix Martingales
(Slides).


RNLA Workshop;
Simons Institute, Berkeley,
September
2018.

Matrix Martingales in Randomized Numerical
Linear Algebra
(Video).


HighPerformance Graph Algorithms Seminar;
Dagstuhl,
June 2018.

Optimization on Graphs.


Discrepancy & IP Workshop;
CWI Amsterdam, June 2018.

Matrix
Approximation by Row Sampling
(Notes).


GraphXD Workshop;
UC Berkeley,
March 2018.

Optimization on Graphs
(Video),
(Slides).


Michael Cohen Memorial Symposium;
Simons Institute, Berkeley,
November 2017.

Michael Cohen and Directed Laplacians
(Video).

More Talks

Stanford Theory Seminar; October 2017.
Approximate Gaussian Elimination.

FOCS, Berkeley; October 2017.
Hardness Results for Structured Linear Systems.

UC Berkeley; September 2017.
Hardness Results for Structured Linear Systems.

Google Research Mountain View; August 2017.
Hardness Results for Structured Linear Systems.

YPNG Seminar, Yale Dept. of Statistics and Data
Science; July 2017.
Approximate Gaussian Elimination.

MSR Redmond; January 2017.
Regression, Elimination, and Sampling on Graphs.

University of Copenhagen; January 2017.
Approximate Gaussian Elimination.

CMU; December 2016.
Approximate Gaussian Elimination.

Georgia Tech; November 2016.
Approximate Gaussian Elimination.

Princeton; October 2016.
Approximate Gaussian Elimination.

UC Berkeley; October 2016.
Approximate Gaussian Elimination.

Google Research NYC; October 2016.
Approximate Gaussian Elimination.

FOCS, New Brunswick; October 2016.
Approximate Gaussian Elimination.

MIT A&C Seminar; September 2016.
Approximate Gaussian Elimination.

Aarhus University; September 2016.
Lipschitz Learning on Graphs.

China Theory Week, Hong Kong; August 2016.
Approximate Gaussian Elimination.

SIAM Annual Meeting, Boston; July 2016.
Approximate Cholesky Factorization.

STOC, Boston; June 2016.
Sparsified Cholesky and Multigrid Solvers for Connection Laplacians.

IT University of Copenhagen; December 2015.
Lipschitz Learning and Isotonic Regression on Graphs.
Code

Laplacians.jl

Laplacians.jl is a Julia package containing graph
algorithms, especially ones related to
spectral and algebraic graph theory. It was started by
Dan Spielman, and has contributions from Dan, Serban
Stan, myself and several others.

YINSlex Github Repository

Our implementations of the lex-minimization
algorithms from the paper
Algorithms for Lipschitz Learning on Graphs
.
The code was written by Anup Rao, Sushant Sachdeva,
Dan Spielman, and myself.

Isotonic Github Repository

An implementation of the
least-squares Isotonic regression algorithm
from the paper
Fast, Provable Algorithms for Isotonic Regression in all \(\ell_p\)norms
.
The code was written by Anup Rao, Sushant Sachdeva, and myself.
Contact
I can be reached by email at
[last name] @ inf.ethz.ch
[last name] = kyng