2015 Hong Kong Theory Day


Friday 9 January 2015
The Chinese University of Hong Kong

Tentative Schedule


9.30              coffee
10.00 - 10.45     Yin-Tat Lee: A faster algorithm for linear programming
11.00 - 11.25     Zhiyi Huang: Online primal dual: beyond linear programs
11.30 - 11.55     Hubert Chan: Revealing optimal thresholds for generalized secretary problem...
12.00             lunch
2.00 - 2.45       Siu On Chan: Efficient density estimation via piecewise polynomial approximation
3.00 - 3.25       Ke Yi: Summarizing distributed data
3.30              coffee
4.00 - 4.45       Luca Trevisan: Graph Partitioning Algorithms and Laplacian Eigenvalues
5.00 - 5.25       Siyao Guo: Candidate weak pseudorandom functions in AC0 o MOD2

Abstracts

A faster algorithm for linear programming
Yin-Tat Lee (Massachusetts Institute of Technology)

In this talk I will present a new algorithm for solving linear programs. Given a linear program with n variables, m > n constraints, and bit complexity L, our algorithm runs in Õ(√n L) iterations, each consisting of solving Õ(1) linear systems and additional nearly linear time computation. Our method improves upon the convergence rate of previous state-of-the-art linear programming methods, which either required solving Õ(√m L) linear systems [R88] or consisted of Õ((mn)^{1/4}) steps of more expensive linear algebra [VA93].

Interestingly, our algorithm not only nearly matches the convergence rate of the universal barrier of Nesterov and Nemirovskii [NN94], but in the special case of the linear programming formulation of various flow problems our methods converge at a rate faster than that predicted by any self-concordant barrier. In particular, we achieve a running time of Õ(|E| √|V| log^2 U) for solving the maximum flow problem on a directed graph with |E| edges, |V| vertices, and capacity ratio U, thereby improving upon the previous fastest running time for solving this problem when |E| > Ω(|V|^{1+ε}) for any constant ε.
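
As a rough illustration of the iteration structure described above (each iteration amounts to solving a small number of linear systems), here is a minimal sketch of a textbook log-barrier path-following method. This is standard background material, not the algorithm of the talk; the starting point x0 is assumed to be strictly feasible, and the step and stopping parameters are illustrative placeholders.

```python
import numpy as np

def barrier_lp(A, b, c, x0, mu=1.5, t0=1.0, tol=1e-8, max_outer=60, max_newton=20):
    """Minimize c^T x subject to A x = b, x >= 0 with a textbook log-barrier method.
    x0 must be strictly feasible (A x0 = b, x0 > 0).  Each Newton step solves one
    KKT linear system, mirroring the "iterations made of linear-system solves"
    structure mentioned in the abstract."""
    m, n = A.shape
    x, t = x0.astype(float).copy(), t0
    for _ in range(max_outer):
        for _ in range(max_newton):
            g = t * c - 1.0 / x                       # gradient of t c^T x - sum_i log x_i
            H = np.diag(1.0 / x ** 2)                 # Hessian of the barrier term
            K = np.block([[H, A.T], [A, np.zeros((m, m))]])
            dx = np.linalg.solve(K, np.concatenate([-g, np.zeros(m)]))[:n]
            neg = dx < 0                              # damped step keeps x strictly positive
            step = 1.0 if not neg.any() else min(1.0, 0.99 * np.min(-x[neg] / dx[neg]))
            x = x + step * dx
            if np.linalg.norm(dx) < tol:
                break
        if n / t < tol:                               # crude duality-gap criterion
            break
        t *= mu                                       # advance along the central path
    return x

# Tiny example: minimize x1 + 2*x2 subject to x1 + x2 = 1, x >= 0.
A = np.array([[1.0, 1.0]]); b = np.array([1.0]); c = np.array([1.0, 2.0])
print(barrier_lp(A, b, c, x0=np.array([0.5, 0.5])))   # approximately [1, 0]
```

The number of outer iterations in such methods is governed by the barrier parameter; reducing that iteration count from a dependence on the number of constraints to a dependence on the number of variables is the improvement the abstract describes.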

This talk will assume little exposure to linear programming algorithms, convex optimization, or graph theory and will require no previous experience with the universal barrier or self-concordance.

This talk reflects joint work with Aaron Sidford.
See http://arxiv.org/abs/1312.6677 and http://arxiv.org/abs/1312.6713.


Online primal dual: beyond linear programs
Zhiyi Huang (The University of Hong Kong)

The online primal dual technique has given nearly optimal online algorithms for many problems. In this talk, we will survey some recent research results on extending the online primal dual technique beyond linear programs. The extension allows us to (1) apply the technique to problems that do not have natural linear program relaxations (e.g., speed-scalable online scheduling), (2) show that for a large class of resource allocation problems, better competitive ratios can be obtained if we relax the hard resource constraints with soft constraints subject to convex production cost functions (e.g., online knapsack, online combinatorial auctions), and (3) use convex programs to remodel existing problems (that have LP relaxations) and obtain better competitive ratios.
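
For background on the linear-programming case that the talk generalizes, the sketch below shows the classical online primal-dual update for fractional covering LPs (multiplicative primal updates paired with additive dual updates, in the style of Buchbinder and Naor). It is an illustrative baseline under the assumption that constraints arrive one at a time as sets of variable indices; it is not the convex-program extension discussed in the talk.

```python
def online_fractional_cover(costs, constraints):
    """Online primal-dual for fractional covering LPs: minimize sum_i costs[i]*x[i]
    subject to sum_{i in S} x[i] >= 1 for each constraint S that arrives online.
    The multiplicative primal / additive dual update below gives an O(log d)-competitive
    fractional solution, where d is the maximum constraint size."""
    x = [0.0] * len(costs)          # primal variables
    duals = []                      # one dual variable per arrived constraint
    for S in constraints:
        y = 0.0
        d = len(S)
        while sum(x[i] for i in S) < 1.0:
            y += 1.0                                    # additive dual update
            for i in S:                                 # multiplicative primal update
                x[i] = x[i] * (1.0 + 1.0 / costs[i]) + 1.0 / (d * costs[i])
        duals.append(y)
    return x, duals

# Example: three variables with costs 1, 2, 4; constraints {0,1}, {1,2}, {0,2} arrive online.
x, duals = online_fractional_cover([1.0, 2.0, 4.0], [[0, 1], [1, 2], [0, 2]])
print(x, duals)
```

The point of maintaining the dual is that it certifies the competitive ratio: the primal cost is charged against the dual objective, and this proof template is what gets carried over from linear to convex programs.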


Revealing optimal thresholds for generalized secretary problem via continuous LP: Impacts on online K-item auction and bipartite K-matching with random arrival order
Hubert Chan (The University of Hong Kong)

We consider the general (J, K)-secretary problem. An algorithm observes the relative merits of arriving items and is allowed to make J selections. The objective is to maximize the expected number of items selected among the K best items. We give a construction procedure for an optimal algorithm involving JK thresholds via a continuous linear programming method. We further provide new results on online K-item auction and bipartite K-matching with random arrival order.
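
To make the threshold structure concrete, here is a sketch of the single-threshold rule for the classical J = K = 1 case: observe roughly n/e items without committing, then select the first item that beats everything seen so far. The JK thresholds for the general (J, K) problem come from the continuous LP in the paper; this snippet only illustrates the simplest instance.

```python
import math
import random

def classical_secretary(values):
    """Single-threshold rule for the classical (J = K = 1) secretary problem:
    skip the first ~n/e items, then take the first item better than all of them."""
    n = len(values)
    cutoff = int(round(n / math.e))
    best_seen = max(values[:cutoff]) if cutoff > 0 else float("-inf")
    for v in values[cutoff:]:
        if v > best_seen:
            return v                    # first item beating the observation phase
    return values[-1]                   # otherwise we are forced to take the last item

def success_rate(n=100, trials=20000):
    """Empirical probability of selecting the best item (theory: about 1/e)."""
    wins = 0
    for _ in range(trials):
        perm = random.sample(range(n), n)          # distinct ranks in random arrival order
        wins += classical_secretary(perm) == n - 1
    return wins / trials

print(success_rate())    # roughly 0.37
```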

This is joint work with Fei Chen and Shaofeng H.-C. Jiang, and appears in SODA 2015.


Efficient density estimation via piecewise polynomial approximation
Siu On Chan (Microsoft Research New England)

We give a highly efficient "semi-agnostic" algorithm for learning univariate probability distributions that are well approximated by piecewise polynomial density functions. Let p be an arbitrary distribution over an interval I which is τ-close (in total variation distance) to an unknown probability distribution q that is defined by an unknown partition of I into t intervals and t unknown degree-d polynomials specifying q over each of the intervals.

We give an algorithm that draws Õ(t(d + 1)/ε^2) samples from p, runs in time poly(t, d, 1/ε), and with high probability outputs a piecewise polynomial hypothesis distribution h that is (O(τ) + ε)-close (in total variation distance) to p. This sample complexity is essentially optimal; we show that even for τ = 0, any algorithm that learns an unknown t-piecewise degree-d probability distribution over I to accuracy ε must use Ω(t(d + 1)/(polylog(d + 1) ε^2)) samples from the distribution, regardless of its running time. Our algorithm combines tools from approximation theory, uniform convergence, linear programming, and dynamic programming.
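
As a point of reference for the hypothesis class, the sketch below fits the degree-0 special case: a t-piece histogram on a fixed equal-width partition. The actual algorithm chooses the partition adaptively and fits degree-d polynomials per piece using linear and dynamic programming, so this only illustrates what a piecewise polynomial hypothesis looks like; the interval, the Beta-distributed sample data, and the parameters are placeholders.

```python
import numpy as np

def histogram_density(samples, t, interval=(0.0, 1.0)):
    """Degree-0 piecewise polynomial (i.e., t-bin histogram) density estimate on a
    fixed equal-width partition of the interval.  The algorithm in the talk instead
    picks the partition adaptively and fits degree-d polynomials on each piece."""
    lo, hi = interval
    edges = np.linspace(lo, hi, t + 1)
    counts, _ = np.histogram(samples, bins=edges)
    heights = counts / (len(samples) * np.diff(edges))   # normalize so it integrates to 1

    def h(x):
        idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, t - 1)
        return heights[idx]

    return h

# Example: estimate a Beta(2, 5) density from 5000 samples with t = 20 pieces.
rng = np.random.default_rng(0)
h = histogram_density(rng.beta(2, 5, size=5000), t=20)
print(h(np.array([0.1, 0.5, 0.9])))
```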

We apply this general algorithm to obtain a wide range of results for many natural problems in density estimation over both continuous and discrete domains. These include state-of-the-art results for learning mixtures of log-concave distributions; mixtures of t-modal distributions; mixtures of Monotone Hazard Rate distributions; mixtures of Poisson Binomial Distributions; mixtures of Gaussians; and mixtures of k-monotone densities. Our general technique yields computationally efficient algorithms for all these problems, in many cases with provably optimal sample complexities (up to logarithmic factors) in all parameters.

This talk is based on joint work with Ilias Diakonikolas, Rocco Servedio, and Xiaorui Sun.


Summarizing distributed data
Ke Yi (Hong Kong University of Science and Technology)

Data summarization is an effective approach to dealing with the “big data” problem. While data summarization problems have traditionally been studied in the streaming model, the focus is starting to shift to distributed models, as distributed/parallel computation seems to be the only viable way to handle today’s massive data sets. In this talk, I will show how some fundamental data summaries can be computed over distributed data with low communication costs.
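
One concrete example of a summary with low communication cost is a Misra-Gries frequency (heavy hitters) summary: each site summarizes its local data with at most k - 1 counters, and two summaries can be merged into one of the same size, so only O(k) numbers ever cross the network. This is a standard illustration of a mergeable summary, not necessarily one of the specific summaries covered in the talk.

```python
from collections import Counter

def misra_gries(stream, k):
    """Local heavy-hitters summary with at most k - 1 counters: every item whose
    frequency exceeds n/k survives, and each stored count undercounts the true
    frequency by at most n/k (n = length of the local stream)."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            for key in list(counters):               # decrement-all step
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

def merge(summary_a, summary_b, k):
    """Merge two summaries into one of the same size: add the counts, then subtract
    the k-th largest count and drop anything non-positive.  Only the summaries
    (O(k) numbers), not the raw data, need to be communicated."""
    combined = Counter(summary_a) + Counter(summary_b)
    if len(combined) < k:
        return dict(combined)
    kth = sorted(combined.values(), reverse=True)[k - 1]
    return {x: c - kth for x, c in combined.items() if c > kth}

# Example: two sites summarize their local streams, then ship only the summaries.
site1 = misra_gries([1, 1, 1, 2, 3, 1, 2, 1], k=3)
site2 = misra_gries([1, 2, 2, 2, 4, 2, 1, 2], k=3)
print(merge(site1, site2, k=3))
```

Merging can be applied bottom-up over any communication tree, and the undercount guarantee degrades gracefully to at most n/k for the combined data size n.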


Graph Partitioning Algorithms and Laplacian Eigenvalues
Luca Trevisan (University of California, Berkeley)

Spectral graph theory studies applications of linear algebra to graph theory and to the design and analysis of graph algorithms. “Spectral” graph algorithms are algorithms that exploit properties of the eigenvalues and eigenvectors of matrices associated with a graph, such as the Laplacian matrix.

The Cheeger inequality is a classical result in spectral graph theory which states that the second Laplacian eigenvalue of a graph is small if and only if the graph has a sparse cut. The proof of the Cheeger inequality also gives a worst-case analysis of the “sweep” spectral partitioning algorithm of Fiedler as an approximation algorithm for the sparsest cut problem.
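
For reference, the sketch below implements the sweep algorithm on a small graph given as a dense numpy adjacency matrix (an assumption made for brevity): compute the eigenvector for the second smallest eigenvalue of the normalized Laplacian, order the vertices by their coordinates, and return the prefix set of smallest conductance. The Cheeger inequality bounds the conductance of this set in terms of the second eigenvalue.

```python
import numpy as np

def sweep_cut(adj):
    """Fiedler's sweep partitioning algorithm, whose worst-case guarantee is given by
    the Cheeger inequality: take the eigenvector for the second smallest eigenvalue of
    the normalized Laplacian, order vertices by their coordinates, and return the
    prefix set with the smallest conductance."""
    n = len(adj)
    deg = adj.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(deg)
    lap = np.eye(n) - (adj * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    _, eigvecs = np.linalg.eigh(lap)                  # eigenvalues in ascending order
    order = np.argsort(d_inv_sqrt * eigvecs[:, 1])    # sweep order from the 2nd eigenvector
    vol_total = deg.sum()
    inside = np.zeros(n, dtype=bool)
    cut = vol = 0.0
    best_set, best_phi = None, np.inf
    for i in order[:-1]:                              # skip the full vertex set
        inside[i] = True
        vol += deg[i]
        cut += deg[i] - 2.0 * adj[i, inside].sum()    # edges now crossing the cut
        phi = cut / min(vol, vol_total - vol)
        if phi < best_phi:
            best_phi, best_set = phi, np.flatnonzero(inside).copy()
    return best_set, best_phi

# Example: two triangles joined by a single edge; the sweep isolates one triangle.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
print(sweep_cut(A))    # expect one triangle, conductance 1/7
```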

We discuss three generalizations of this result:
(i) the k-th Laplacian eigenvalue is small if and only if the vertices can be partitioned into k subsets, each defining a sparse cut;
(ii) if the k-th Laplacian eigenvalue is large, then Fiedler's sweep algorithm performs better than the worst-case bounds implied by Cheeger's inequality;
(iii) if the k-th Laplacian eigenvalue is small and the (k+1)-st is large, then the vertices can be partitioned into k subsets such that each subset defines a sparse cut and each subset induces an expanding subgraph.

 

Candidate weak pseudorandom functions in AC0 o MOD2
Siyao Guo (Chinese University of Hong Kong)

Pseudorandom functions (PRFs) play a fundamental role in symmetric-key cryptography. However, they are inherently complex and cannot be implemented in the class AC0(MOD2) (of polynomial-size, constant-depth circuit families with unbounded fan-in AND, OR and MOD2 gates). Weak pseudorandom functions (weak PRFs) do not suffer from this complexity limitation, yet they suffice for many cryptographic applications. We study the minimal complexity requirements for constructing weak PRFs. In this talk, we will present candidate weak PRFs in the class AC0 o MOD2 (a subclass of AC0(MOD2)) that are as simple as possible while resisting natural attacks from complexity theory. We argue that such a complexity-driven approach can play a role in bridging the gap between the theory and practice of cryptography.
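
To show the shape of an AC0 o MOD2 computation, the sketch below evaluates a keyed function of the form g(Kx): a MOD2 layer (multiplication of the input by a secret GF(2) matrix K) followed by a fixed AC0 function g (here the TRIBES DNF). This mirrors the structure of the candidates under discussion, but the concrete functions and parameters are illustrative assumptions, not the exact candidates from the paper.

```python
import numpy as np

def tribes(bits, block):
    """TRIBES DNF (an AC0 function): OR over consecutive blocks of the AND of each block."""
    return int(np.any(np.all(bits.reshape(-1, block) == 1, axis=1)))

def weak_prf(key, x, block=4):
    """Evaluate g(Kx) over GF(2): a MOD2 layer (matrix-vector product with the secret
    key K, mod 2) followed by an AC0 layer (here TRIBES).  Illustrative of the
    AC0 o MOD2 shape only; the paper's concrete candidates and parameters may differ."""
    y = key.dot(x) % 2                 # MOD2 layer, keyed by the GF(2) matrix K
    return tribes(y, block)

# Hypothetical parameters for illustration: 32-bit keys and inputs, blocks of 4.
rng = np.random.default_rng(1)
key = rng.integers(0, 2, size=(32, 32))
x = rng.integers(0, 2, size=32)
print(weak_prf(key, x))
```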

This is joint work with Adi Akavia, Andrej Bogdanov, Akshay Kamath and Alon Rosen.