Ayush Sekhari

Contact Details:

Email: as3663 [at] cornell [dot] edu
Office: 324, Bill and Melinda Gates Hall, Cornell University, Ithaca, NY - 14853


I am a Ph.D. student in the Computer Science department at Cornell University, where I have the great fortune to be advised by Prof. Karthik Sridharan and Prof. Robert D. Kleinberg. I am interested in theoretical aspects of Machine Learning and Computer Science. Recently, I have been working on high-dimensional stochastic optimization, in particular learning with non-convex losses. I have also been working on the theory of reinforcement learning (RL).

I completed my undergraduate degree at the Indian Institute of Technology Kanpur (IIT Kanpur), India in 2016 and, after that, spent a year at Google Research as part of the Brain Residency program (now called the AI Residency).

Recent News

  • Announcement: Jayadev Acharya, Gautam Kamath and I are organizing a workshop at ICML 2022 on the emerging paradigm of "Updatable Machine Learning." We are looking forward to seeing you in person (yay!) in Baltimore.
  • Update: I will be starting a postdoc with Prof. Sasha Rakhlin at MIT in Fall 2022. Let us chat if you are in the area!
  • Spent summer 2021 at the Alberta Machine Intelligence Institute, Canada, working with Prof. Csaba Szepesvári and Dr. Johannes Kirschner (virtually).
  • We received the Best Student Paper Award at COLT 2019 for our work on finding stationary points in stochastic convex optimization.
  • Spent summer 2019 with the Learning Theory team at Google Research, NY, working with Claudio Gentile and Mehryar Mohri.
  • Spent summer 2017 with FICC Macro Strats and Trading in the Securities division at Goldman Sachs, Hong Kong.
  • Awarded the President's Gold Medal 2016, IIT Kanpur, for the best academic performance in the graduating batch.
  • Spent 2016 at Google Research as part of the Google Brain Residency Program.

Publications and Preprints

  1. On the Complexity of Adversarial Decision Making
    with Dylan J. Foster, Alexander Rakhlin, and Karthik Sridharan.
    (under review)

  2. Provably Efficient Reinforcement Learning in Partially Observable Dynamical Systems
    with Masatoshi Uehara, Jason D. Lee, Nathan Kallus, and Wen Sun.
    (under review)

  3. Computationally Efficient PAC RL in POMDPs with Latent Determinism and Conditional Embeddings
    with Masatoshi Uehara, Jason D. Lee, Nathan Kallus, and Wen Sun.
    (under review)

  4. Guarantees for Epsilon-Greedy Reinforcement Learning with Function Approximation
    with Christoph Dann, Yishay Mansour, Mehryar Mohri, and Karthik Sridharan.
    ICML 2022. Short version at RLDM 2022 - Reinforcement Learning and Decision Making conference.

  5. SGD: The role of Implicit Regularization, Batch-size and Multiple Epochs
    with Satyen Kale and Karthik Sridharan.
    NeurIPS 2021.

  6. Agnostic Reinforcement Learning with Low-Rank MDPs and Rich Observations
    with Christoph Dann, Yishay Mansour, Mehryar Mohri, and Karthik Sridharan.
    NeurIPS 2021. (Spotlight)

  7. Remember What You Want to Forget: Algorithms for Machine Unlearning
    with Jayadev Acharya, Gautam Kamath, and Ananda Theertha Suresh.
    NeurIPS 2021. Short version at TPDP 2021 - Theory and Practice of Differential Privacy.

  8. Neural Active Learning with Performance Guarantees
    with Pranjal Awasthi, Christoph Dann, Claudio Gentile, and Zhilei Wang.
    NeurIPS 2021.

  9. Reinforcement Learning with Feedback Graphs
    with Christoph Dann, Yishay Mansour, Mehryar Mohri, and Karthik Sridharan.
    NeurIPS 2020. Short version at ICML 2020 Theoretical Foundations of RL workshop.

  10. Second-Order Information in Non-Convex Stochastic Optimization: Power and Limitations
    with Yossi Arjevani, Yair Carmon, John C. Duchi, Dylan J. Foster, and Karthik Sridharan.
    COLT 2020. Honorable mention for best talk award at NYAS ML symposium 2020.

  11. The Complexity of Making the Gradient Small in Stochastic Convex Optimization
    with Dylan Foster, Ohad Shamir, Nathan Srebro, Karthik Sridharan, and Blake Woodworth.
    COLT 2019. (Best Student Paper Award).

  12. Uniform Convergence of Gradients for Non-Convex Learning and Optimization
    with Dylan Foster and Karthik Sridharan.
    NeurIPS 2018. Short version at ICML 2018 Nonconvex Optimization workshop.

  13. A Brief Study of in-domain Transfer and Learning from Fewer Samples using a Few Simple Priors
    with Marc Pickett and James Davidson.
    ICML 2017 workshop: Picky Learners - Choosing Alternative Ways to Process Data.
    Awarded the second best paper prize among the workshop submissions.

Work Experience

  • Google Brain Residency Program, CA
  • Reviewing: ICLR 2019, AISTATS 2019, ICML (2019-21), COLT (2019-22), NeurIPS (2019-22), ALT (2020-21), ISIT 2020, ITCS 2020, Journal of Complexity 2021, FORC 2021.