Ayush Sekhari

Contact Details

Email: sekhari [at] mit [dot] edu


I am a postdoctoral associate at MIT working with Prof. Sasha Rakhlin. I am interested in all theoretical aspects of Machine Learning. Recently, I have been working on high-dimensional stochastic optimization and reinforcement learning (RL).

I completed my Ph.D. in Computer Science at Cornell University, where I was fortunate to be advised by Prof. Karthik Sridharan and Prof. Robert D. Kleinberg. Prior to Cornell, I completed my undergraduate education at the Indian Institute of Technology Kanpur (IIT Kanpur), India, and after that spent a year at Google Research as part of the Brain Residency program (now called the AI Residency).

PhD Thesis: Non-convex and Interactive Learning via Stochastic Optimization


Recent News

  • Two new papers on Reinforcement Learning accepted at ICLR 2024. Looking forward to the conference in Vienna.
  • I have moved to MIT. Let us chat if you are in the area!
  • Five new papers accepted at NeurIPS. See you in New Orleans!
  • Two papers on interactive learning with active querying for expert feedback are up on arXiv. I would love to chat if this direction interests you!
  • New paper on ticketed learning-unlearning schemes is up on arXiv.

Publications

Work Experience

  • Google Brain Residency Program, CA

Teaching Assistant at Cornell University

Professional activities

Reviewing

    • Journals: Journal of Complexity 2021, JMLR (2020-22)
    • Conferences: COLT (2019-22), NeurIPS (2019-22), ALT (2020-22), ICLR 2019, AISTATS 2019, ICML (2019-21), ISIT 2020, ITCS 2020, FORC 2021