Aaron Sidford

I am an assistant professor in the Department of Management Science and Engineering and the Department of Computer Science at Stanford University. I am particularly interested in work at the intersection of continuous optimization, graph theory, numerical linear algebra, and data structures. Many of the algorithms I study are iterative and solve a sequence of smaller subproblems, whose solutions can be maintained via dynamic data structures. I received my PhD from the Department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, where I was advised by Jonathan Kelner, and I also completed my undergraduate degree (in mathematics) at MIT. Here is a slightly more formal third-person biography, and here is a recent-ish CV.

Aaron Sidford is an Assistant Professor of Management Science and Engineering at Stanford University, where he also has a courtesy appointment in Computer Science and an affiliation with the Institute for Computational and Mathematical Engineering (ICME).

News

MS&E welcomes new faculty member, Aaron Sidford!
Faculty Spotlight: Aaron Sidford.
Congratulations to Prof. Aaron Sidford for receiving the Best Paper Award at the 2022 Conference on Learning Theory (COLT 2022)! The paper, Efficient Convex Optimization Requires Superlinear Memory, was co-authored with Stanford professor Gregory Valiant as well as current Stanford student Annie Marsden and alumnus Vatsal Sharan. Prof. Sidford's paper was chosen from more than 150 accepted papers at the conference.

Selected recent works

Efficient Convex Optimization Requires Superlinear Memory. Annie Marsden, Vatsal Sharan, Aaron Sidford, and Gregory Valiant. Conference on Learning Theory (COLT), 2022. Best Paper Award.
ReSQueing Parallel and Private Stochastic Convex Optimization. Yair Carmon, Arun Jambulapati, Yujia Jin, Yin Tat Lee, Daogao Liu, Aaron Sidford, and Kevin Tian. arXiv preprint arXiv:2301.00457, 2023.
Efficiently Solving MDPs with Stochastic Mirror Descent. With Yujia Jin. In International Conference on Machine Learning (ICML 2020).
"A special case where variance reduction can be applied to nonconvex optimization (monotone operators)."
Variance Reduction for Matrix Games. In Neural Information Processing Systems (NeurIPS 2019, Spotlight); preliminary version in the NeurIPS Smooth Games Optimization and Machine Learning Workshop, 2019. "Collection of variance-reduced / coordinate methods for solving matrix games, with simplex or Euclidean ball domains." A minimal sketch of this setup appears below.
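To make the matrix-game setting concrete, here is a small illustrative sketch; it is a toy example, not the method from the paper. It runs plain entropic mirror descent (multiplicative weights) on the bilinear saddle-point problem \(\min_{x\in\Delta^m}\max_{y\in\Delta^n} y^\top A x\) with exact gradients, precisely the updates whose cost the paper's variance-reduced / coordinate methods reduce by substituting cheap sampled estimates of \(Ax\) and \(A^\top y\).

```python
import numpy as np

def matrix_game_mwu(A, steps=2000, eta=0.05):
    """Toy solver for min_{x in simplex} max_{y in simplex} y^T A x via
    simultaneous multiplicative-weights (entropic mirror descent) updates.
    Uses exact gradients A^T y and A x; variance-reduced / coordinate
    methods instead work with sampled estimates of these vectors."""
    n, m = A.shape
    x = np.full(m, 1.0 / m)               # minimizing player, on the simplex
    y = np.full(n, 1.0 / n)               # maximizing player, on the simplex
    x_sum, y_sum = np.zeros(m), np.zeros(n)
    for _ in range(steps):
        x = x * np.exp(-eta * (A.T @ y))  # descent step for the min player
        x /= x.sum()
        y = y * np.exp(eta * (A @ x))     # ascent step for the max player
        y /= y.sum()
        x_sum += x
        y_sum += y
    return x_sum / steps, y_sum / steps   # averaged iterates approach equilibrium

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 40))
    x_bar, y_bar = matrix_game_mwu(A)
    # Duality gap max_i (A x)_i - min_j (A^T y)_j shrinks toward 0.
    print("duality gap:", (A @ x_bar).max() - (A.T @ y_bar).min())
```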
Publications

A Faster Cutting Plane Method and its Implications for Combinatorial and Convex Optimization. In Symposium on Foundations of Computer Science (FOCS 2015). Machtey Award for Best Student Paper. (arXiv)
Efficient Inverse Maintenance and Faster Algorithms for Linear Programming. In Symposium on Foundations of Computer Science (FOCS 2015). (arXiv)
Competing with the Empirical Risk Minimizer in a Single Pass. With Roy Frostig, Rong Ge, and Sham Kakade. In Conference on Learning Theory (COLT 2015). (arXiv)
Un-regularizing: Approximate Proximal Point and Faster Stochastic Algorithms for Empirical Risk Minimization. In International Conference on Machine Learning (ICML 2015). (arXiv)
Uniform Sampling for Matrix Approximation. With Michael B. Cohen, Yin Tat Lee, Cameron Musco, Christopher Musco, and Richard Peng. In Innovations in Theoretical Computer Science (ITCS 2015). (arXiv)
Path-Finding Methods for Linear Programming: Solving Linear Programs in \(\tilde{O}(\sqrt{\mathrm{rank}})\) Iterations and Faster Algorithms for Maximum Flow. In Symposium on Foundations of Computer Science (FOCS 2014). Best Paper Award and Machtey Award for Best Student Paper. (arXiv)
Single Pass Spectral Sparsification in Dynamic Streams. With Michael Kapralov, Yin Tat Lee, Cameron Musco, and Christopher Musco.
An Almost-Linear-Time Algorithm for Approximate Max Flow in Undirected Graphs, and its Multicommodity Generalizations. With Jonathan A. Kelner, Yin Tat Lee, and Lorenzo Orecchia. In Symposium on Discrete Algorithms (SODA 2014).
Efficient Accelerated Coordinate Descent Methods and Faster Algorithms for Solving Linear Systems. In Symposium on Foundations of Computer Science (FOCS 2013). (arXiv)
A Simple, Combinatorial Algorithm for Solving SDD Systems in Nearly-Linear Time. With Jonathan A. Kelner, Lorenzo Orecchia, and Zeyuan Allen Zhu. In Symposium on the Theory of Computing (STOC 2013); SIAM Journal on Computing (arXiv before merge).
Derandomization beyond Connectivity: Undirected Laplacian Systems in Nearly Logarithmic Space. With Jack Murtagh, Omer Reingold, and Salil Vadhan. Book chapter in Building Bridges II: Mathematics of László Lovász, 2020. (arXiv)
Lower Bounds for Finding Stationary Points I.
Lower Bounds for Finding Stationary Points II: First-Order Methods. "We establish lower bounds on the complexity of finding \(\epsilon\)-stationary points of smooth, non-convex high-dimensional functions using first-order methods." (The notion of \(\epsilon\)-stationarity is recalled below.)
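For readers outside optimization, the following snippet records the standard definition behind that blurb and the classical descent-lemma calculation for the gradient-descent baseline that such lower bounds are measured against; this is textbook background, not material from the papers themselves.

```latex
% A point x is an \epsilon-stationary point of a differentiable f when
\[
  \|\nabla f(x)\| \le \epsilon .
\]
% If f is L-smooth, the gradient step x^+ = x - (1/L)\nabla f(x) satisfies
\[
  f(x^{+}) \le f(x) - \frac{1}{2L}\,\|\nabla f(x)\|^{2},
\]
% so while every iterate has gradient norm above \epsilon, each step decreases
% f by more than \epsilon^{2}/(2L). Gradient descent therefore reaches an
% \epsilon-stationary point within
\[
  T = O\!\left(\frac{L\,\bigl(f(x_{0}) - \inf_{x} f(x)\bigr)}{\epsilon^{2}}\right)
\]
% iterations; the lower bounds concern whether first-order methods can do better.
```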
A few more annotated works:

Accelerated Methods for Nonconvex Optimization. "This work presents an accelerated gradient method for nonconvex optimization problems with Lipschitz continuous first and second derivatives that is Hessian free, i.e., it only requires gradient computations, and is therefore suitable for large-scale applications."
"We characterize when solving the max problem \(\min_{x}\max_{i\in[n]}f_i(x)\) is (not) harder than solving the average problem \(\min_{x}\frac{1}{n}\sum_{i\in[n]}f_i(x)\)." In submission.
"Streaming matching (and optimal transport) in \(\tilde{O}(1/\epsilon)\) passes and \(O(n)\) space."
"About how and why coordinate (variance-reduced) methods are a good idea for exploiting (numerical) sparsity of data." (See also Exploiting Numerical Sparsity for Efficient Learning, listed below; a toy variance-reduction sketch follows.)
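As a concrete, if simplistic, illustration of the variance-reduction template behind that blurb, here is a minimal SVRG sketch for least-squares regression. The objective, step size, and epoch counts are illustrative choices; the coordinate methods in the papers go further by sampling rows non-uniformly to exploit numerical sparsity, which this sketch does not do.

```python
import numpy as np

def svrg_least_squares(A, b, epochs=20, eta=None, seed=0):
    """Minimal SVRG sketch for f(x) = (1/2n) ||A x - b||^2.
    Each epoch computes one full gradient at a reference point, then takes
    cheap variance-reduced stochastic steps built from single rows of A."""
    n, d = A.shape
    L = np.max(np.sum(A * A, axis=1))  # smoothness bound for the row components
    eta = eta or 1.0 / (4 * L)
    x = np.zeros(d)
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        x_ref = x.copy()
        g_full = A.T @ (A @ x_ref - b) / n  # full gradient at the reference point
        for _ in range(n):
            i = rng.integers(n)
            a_i = A[i]
            # variance-reduced gradient: grad_i(x) - grad_i(x_ref) + g_full
            g = (a_i @ x - b[i]) * a_i - (a_i @ x_ref - b[i]) * a_i + g_full
            x -= eta * g
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((500, 20))
    x_true = rng.standard_normal(20)
    b = A @ x_true + 0.01 * rng.standard_normal(500)
    print("recovery error:", np.linalg.norm(svrg_least_squares(A, b) - x_true))
```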
Constant Girth Approximation for Directed Graphs in Subquadratic Time. With Shiri Chechik, Yang P. Liu, and Omer Rotem. In Symposium on Theory of Computing (STOC 2020). (arXiv)
Leverage Score Sampling for Faster Accelerated Regression and ERM. With Naman Agarwal, Sham Kakade, Rahul Kidambi, Yin Tat Lee, and Praneeth Netrapalli. In International Conference on Algorithmic Learning Theory (ALT 2020). (arXiv)
Near-optimal Approximate Discrete and Continuous Submodular Function Minimization. In Symposium on Discrete Algorithms (SODA 2020). (arXiv)
Fast and Space Efficient Spectral Sparsification in Dynamic Streams. With Michael Kapralov, Aida Mousavifar, Cameron Musco, Christopher Musco, Navid Nouri, and Jakab Tardos. In Conference on Neural Information Processing Systems (NeurIPS 2019).
Complexity of Highly Parallel Non-Smooth Convex Optimization. With Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, and Yuanzhi Li. In Conference on Neural Information Processing Systems (NeurIPS 2019).
Principal Component Projection and Regression in Nearly Linear Time through Asymmetric SVRG.
A Direct \(\tilde{O}(1/\epsilon)\) Iteration Parallel Algorithm for Optimal Transport. In Conference on Neural Information Processing Systems (NeurIPS 2019). (arXiv)
A General Framework for Efficient Symmetric Property Estimation. With Moses Charikar and Kirankumar Shiragur.
Parallel Reachability in Almost Linear Work and Square Root Depth. In Symposium on Foundations of Computer Science (FOCS 2019). (arXiv)
Faster Matroid Intersection. With Deeparnab Chakrabarty, Yin Tat Lee, Sahil Singla, and Sam Chiu-wai Wong.
Deterministic Approximation of Random Walks in Small Space. With Jack Murtagh, Omer Reingold, and Salil P. Vadhan. In International Workshop on Randomization and Computation (RANDOM 2019).
A Rank-1 Sketch for Matrix Multiplicative Weights. With Yair Carmon, John C. Duchi, and Kevin Tian. In Conference on Learning Theory (COLT 2019). (arXiv)
Near-optimal Method for Highly Smooth Convex Optimization.
Efficient Profile Maximum Likelihood for Universal Symmetric Property Estimation. In Symposium on Theory of Computing (STOC 2019). (arXiv)
Memory-Sample Tradeoffs for Linear Regression with Small Error.
Perron-Frobenius Theory in Nearly Linear Time: Positive Eigenvectors, M-matrices, Graph Kernels, and Other Applications. With AmirMahdi Ahmadinejad, Arun Jambulapati, and Amin Saberi. In Symposium on Discrete Algorithms (SODA 2019). (arXiv)
Exploiting Numerical Sparsity for Efficient Learning: Faster Eigenvector Computation and Regression. In Conference on Neural Information Processing Systems (NeurIPS 2018). (arXiv)
Near-Optimal Time and Sample Complexities for Solving Discounted Markov Decision Process with a Generative Model. With Mengdi Wang, Xian Wu, Lin F. Yang, and Yinyu Ye.
Coordinate Methods for Accelerating \(\ell_\infty\) Regression and Faster Approximate Maximum Flow. In Symposium on Foundations of Computer Science (FOCS 2018).
Solving Directed Laplacian Systems in Nearly-Linear Time through Sparse LU Factorizations. With Michael B. Cohen, Jonathan A. Kelner, Rasmus Kyng, John Peebles, Richard Peng, and Anup B. Rao. In Symposium on Foundations of Computer Science (FOCS 2018). (arXiv)
Efficient Convex Optimization with Membership Oracles. In Conference on Learning Theory (COLT 2018). (arXiv)
Accelerating Stochastic Gradient Descent for Least Squares Regression. With Prateek Jain, Sham M. Kakade, Rahul Kidambi, and Praneeth Netrapalli.
Approximating Cycles in Directed Graphs: Fast Algorithms for Girth and Roundtrip Spanners. With Jakub Pachocki, Liam Roditty, Roei Tov, and Virginia Vassilevska Williams.
Stochastic Bias-Reduced Gradient Methods.
RECAPP: Crafting a More Efficient Catalyst for Convex Optimization, 2022.

On submodular function minimization: "We provide a generic technique for constructing families of submodular functions to obtain lower bounds for submodular function minimization (SFM)." Deeparnab Chakrabarty, Andrei Graur, Haotian Jiang, and Aaron Sidford. SODA 2023: 5068-5089. On matroid intersection (see Faster Matroid Intersection above): "Given an independence oracle, we provide an exact \(O(nr \log r \cdot T_{\mathrm{ind}})\) time algorithm," where \(T_{\mathrm{ind}}\) bounds the cost of one independence query, and "in each setting we provide faster exact and approximate algorithms." A small sketch of the independence-oracle access model appears below.
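The independence-oracle model referenced above is easy to make concrete. Below is a tiny, self-contained illustration (the classical greedy algorithm for a maximum-weight independent set of a single matroid, using only oracle calls); it demonstrates the access model only and is not the faster matroid-intersection algorithm from the paper. The partition-matroid oracle is a made-up example.

```python
def max_weight_independent_set(elements, weights, is_independent):
    """Classical matroid greedy: scan elements by decreasing weight and keep
    any element that preserves independence. Access to the matroid is only
    through the independence oracle, with one call per element."""
    chosen = []
    for e in sorted(elements, key=lambda e: -weights[e]):
        if is_independent(chosen + [e]):
            chosen.append(e)
    return chosen

if __name__ == "__main__":
    # Toy partition matroid on {0,...,5}: at most one element per block.
    block = {0: 0, 1: 0, 2: 1, 3: 1, 4: 2, 5: 2}

    def is_independent(subset):
        used = [block[e] for e in subset]
        return len(used) == len(set(used))

    weights = {0: 5, 1: 7, 2: 3, 3: 4, 4: 9, 5: 1}
    print(max_weight_independent_set(range(6), weights, is_independent))  # [4, 1, 3]
```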
Other papers include Convex Until Proven Guilty: Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions; Streaming PCA: Matching Matrix Bernstein and Near-Optimal Finite Sample Guarantees for Oja's Algorithm; Variance Reduced Value Iteration and Faster Algorithms for Solving Markov Decision Processes; Robust Shift-and-Invert Preconditioning: Faster and More Sample Efficient Algorithms for Eigenvector Computation; and Parallelizing Stochastic Gradient Descent for Least Squares Regression: Mini-batching, Averaging, and Model Misspecification (Journal of Machine Learning Research, 2017).

Lecture notes

Here are some lecture notes that I have written over the years.
Discrete Mathematics and Algorithms: An Introduction to Combinatorial Optimization. I used these notes to accompany the course Discrete Mathematics and Algorithms.

Mailing list and applications

I maintain a mailing list for my graduate students and the broader Stanford community that is interested in the work of my research group. I often do not respond to emails about applications.
Contact

sidford@stanford.edu
Administrative contact: Jackie Nguyen, Administrative Associate.
475 Via Ortega, Stanford University, Stanford, CA 94305.