Yo Joong “YJ” Choe
About Me
I am a Postdoctoral Scholar at the University of Chicago's Data Science Institute (DSI) working with Victor Veitch.
I received my Ph.D. in Statistics and Machine Learning from Carnegie Mellon University, advised by Aaditya Ramdas.
I also hold an M.S. in Machine Learning from CMU and a B.S. in Mathematics and Computer Science from UChicago.
Broadly, I work on research topics in statistics, machine learning, and natural language processing.
In recent years, I've been excited about a somewhat eclectic set of research areas and related topics:
Game-theoretic statistics: sequential inference; anytime-validity; e-values and e-processes; confidence sequences; testing by betting; and evaluation of forecasters and black-box predictors;
Science of large language models: causal representations; geometry of LLM embeddings; Transformers; (mechanistic) interpretability; and alignment.
Previously, as an industry NLP researcher (at Kakao and Kakao Brain), I worked on topics such as grammatical error correction, multilingual dataset construction, language modeling, invariant prediction, and drug discovery.
Please refer to my research page, CV, or Google Scholar for further information.
I go by YJ, short for my full first name, Yo Joong.
News
2024.04: Slides for our recent preprint, Combining Evidence Across Filtrations Using Adjusters, are now available online (link).
I will be presenting this work as a contributed talk at JSM 2024 in August and as an invited talk at ICSDS 2024 in December.
Hope to see you there!
2024.02: Our new preprint, titled Combining Evidence Across Filtrations Using Adjusters, is now available on arXiv.
This work addresses an intriguing challenge in sequential inference: how to combine e-processes constructed with respect to different information sets (filtrations).
Joint work with Aaditya Ramdas.
2023.12: I attended NeurIPS 2023 to present my two most recent works listed below (one in the main conference and one in the CRL workshop).
2023.11: Our new preprint (my first one at UChicago DSI), titled The Linear Representation Hypothesis and the Geometry of Large Language Models, is now available on arXiv.
This is joint work with Kiho Park and Victor Veitch.
The paper was also accepted for oral and poster presentations at the NeurIPS 2023 Workshop on Causal Representation Learning (CRL).
2023.09: Our recent paper, titled Counterfactually Comparing Abstaining Classifiers, was accepted to NeurIPS 2023!
(Links to poster, slides, and code.)
This work develops an interesting connection between abstaining classifiers, causal inference, and black-box evaluation in ML.
Joint work with Aditya Gangrade and Aaditya Ramdas.
2023.09: I am excited to start my new position as a Postdoctoral Scholar at the UChicago Data Science Institute!
My mentor is Victor Veitch.
2023.07: I was at ICML 2023 to give a contributed talk and a poster presentation at the Workshop on Counterfactuals in Minds and Machines.
I presented our recent work on Counterfactually Comparing Abstaining Classifiers.
(Links to my slides and poster.)
2023.07: Our paper, Comparing Sequential Forecasters, was accepted to Operations Research!
This is joint work with Aaditya Ramdas.
2023.06: I defended my Ph.D. thesis (slides)!
I am grateful to my committee members:
Aaditya Ramdas (Chair),
Aarti Singh,
Edward Kennedy,
Johanna Ziegel, and
Alexander D'Amour.