I’ve wanted to start a blog for a while, and after months of procrastination, it’s finally happening.
After taking leave from my PhD program to work as a research scientist in industry, I realized that I was spending much of my time summarizing and communicating machine learning (in particular, deep learning) papers, usually to research engineers, software developers, and others broadly interested in AI, all of whom wanted to make sense of, and build something exciting out of, this “cutting-edge technology” (a term I rarely used in academia). So I started regularly writing review notes and making presentation slides about popular recent papers. Then I thought I could also share these with the public to contribute to open research, which would in turn force me to understand the papers better (Sebastian Ruder describes this as a motivation for his popular blog).
At the same time, now being outside academia and away from a research group that regularly publishes at NeurIPS, ICML, JMLR, AISTATS, ICLR, etc., I found myself missing out on recent trends that are “not on paper”: insights into new topics and methods, preliminary results, failed experiments, and other things that get discussed in the hallways of CS buildings at top schools but never make it into their publications. Fortunately, some prominent researchers do take their insights to their own blogs, and part of my motivation is to (hopefully) contribute in that direction as well. I plan to write about my own takes on recent trends, research ideas that didn’t quite develop into a publication, and things that don’t work — along with the good reasons why they don’t.
So, knowing full well that many blogs and Medium posts about machine learning already exist, I decided to start my own anyway. My plan is to occasionally write posts in one of these categories:
- Distillation: inspired by Distill, but probably more like a regular paper review or literature survey, occasionally accompanied by my two cents.
- Research: casual introductions to work by me and my collaborators, as well as ideas that didn’t fully develop into a paper.
- Journey: thoughts, lessons, and anecdotes from my “journey” as a young researcher in the area of machine learning and statistics.
- Notes: other stuff I’d like to keep notes on, so that I can look back on it later.
That said, I’ll likely write mostly about what I currently work on – deep learning and its applications to natural language processing. Previously, I worked on topics in nonparametric and high-dimensional statistics (with applications to neuroscience), so some of my interpretations may come from that angle. If you’re interested in any of these, please check back occasionally or subscribe to the feed!