I am a senior PhD student in the Department of Electrical and Computer Engineering at Rice University, where I work with Prof. Anshumali Shrivastava in the RUSHLAB. My primary research area is extreme-scale deep learning using randomized hashing methods.
I previously interned as an Applied Scientist at Amazon Search, Palo Alto, from May 2018 to Aug 2019 and again from May 2020 to Aug 2020. I worked on a range of problems, including query-to-product prediction, fast reformulation of zero-result queries, query-to-category prediction, and fast approximate nearest-neighbor search.
I received my Bachelor of Technology (B.Tech.) in Electrical Engineering (2011-2015) from the Indian Institute of Technology Bombay.
Here is my resume.
Updates:
- Our paper A Tale of Two Efficient and Informative Negative Sampling Distributions was accepted for a long talk at ICML 2021.
- Our paper SOLAR: Sparse Orthogonal Learned and Random Embeddings was accepted to ICLR 2021. [pdf] [poster]
- Our paper Fast Processing and Querying of 170TB of Genomics Data via a Repeated And Merged BloOm Filter (RAMBO) was accepted to SIGMOD 2021.
- Our paper SDM-Net: A Simple and Effective Model for Generalized Zero-Shot Learning was accepted to UAI 2021.
- Received the Ken Kennedy Institute BP Fellowship for 2020-21.
- We presented our paper SLIDE: In Defense of Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning Systems at MLSys 2020, Austin. [pdf] [video] [package]
- We presented our paper Extreme Classification in Log Memory using Count-Min Sketch: A Case Study of Amazon Search with 50M Products at NeurIPS 2019, Vancouver. [pdf] [poster] [video] [package]
- Received the American Society of Indian Engineers (ASIE) scholarship for 2019.
- We presented 4 papers at NeurIPS 2019 workshops; see Publications for the full list.
In the News:
- An algorithm could make CPUs a cheap way to train AI (Engadget) [article]
- Deep Learning breakthrough made by Rice University scientists (Ars Technica) [article]
- SLIDE algorithm for training deep neural nets faster on CPUs than GPUs (InsideHPC) [article]
- Hash Your Way to a Better Neural Network (IEEE Spectrum) [article]
- Deep learning rethink overcomes major obstacle in AI industry (TechXplore) [article]
- Researchers report breakthrough in ‘distributed deep learning’ (TechXplore) [article]
Invited Talks:
- Jane Street Symposium 2020, New York, on Jan 13th.
- Spotlight talk at the Systems for ML workshop at NeurIPS 2019 on SLIDE: Training Deep Neural Networks with Large Outputs on a CPU Faster than a V100 GPU. [video] [pdf] [package]
- ‘Intro to Actor-Critic Methods and Imitation in Deep Reinforcement Learning’ at the Houston ML Meetup, University of Houston, on Dec 7th.
- ‘Intro to Imitation Learning’ at Schlumberger, Katy, TX, on Nov 19th.
- ‘Imitate Like a Baby: The Key to Efficient Exploration in Deep Reinforcement Learning’ at the Rice Data Science Conference, BioScience Research Collaborative (BRC), on Oct 14th. [video] [pdf]