I am an AI researcher living in San Francisco.

I currently work at OpenAI.

I was previously a research scientist at Google Brain, where I popularized some key ideas in large language models: chain-of-thought prompting, instruction tuning, and emergent phenomena.
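For readers unfamiliar with chain-of-thought prompting, here is a minimal sketch: the prompt includes a worked exemplar whose answer spells out intermediate reasoning steps, which nudges the model to reason step by step on a new question before giving its final answer. The exemplar and test question below are adapted from the chain-of-thought paper; the helper function is an illustrative placeholder, not code from any paper or API.

```python
# Minimal sketch of chain-of-thought prompting (Wei et al., 2022).
# The exemplar shows a worked-out rationale before the final answer;
# the model is then asked a new question in the same Q/A format.

COT_EXEMPLAR = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is
6 tennis balls. 5 + 6 = 11. The answer is 11.
"""

def build_cot_prompt(question: str) -> str:
    """Prepend the worked exemplar so the model imitates step-by-step reasoning."""
    return f"{COT_EXEMPLAR}\nQ: {question}\nA:"

# The resulting string would be sent to a language model as the prompt.
print(build_cot_prompt(
    "The cafeteria had 23 apples. If they used 20 to make lunch and "
    "bought 6 more, how many apples do they have?"
))
```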

Twitter / CV / Google Scholar / Email

Papers (all)
2022 Oct Scaling instruction-finetuned language models. (blog)
2022 Jun Emergent abilities of large language models. (blog)
2022 Jan Chain-of-thought prompting elicits reasoning in language models. (blog)
2021 Sep Finetuned language models are zero-shot learners. (blog)
2019 Jan Easy data augmentation techniques for text classification tasks.
Talks
2024 Apr Guest lecture, Stanford CS25.
2024 Apr UMass Amherst NLP Seminar.
2023 Nov Guest lecture, Stanford CS330.
2023 Nov Guest lecture, Harvard CS249r.
2023 Nov Talk, Samsung AI Forum.
2023 Oct Talk, ML at UC Berkeley.
2023 Aug Keynote, KDD LLM day.
2023 Jun Vanderbilt ML Seminar.
2023 May Guest lecture, Dartmouth QBS 108.
2023 Apr Guest lecture, NYU CSCI-GA.2590.
2023 Mar Guest lecture, MIT MAS.S68.
2023 Jan Guest lecture, Stanford CS25.
2023 Jan USC NLG Seminar.
2022 Dec Berkeley NLP Seminar.
2022 Nov Guest lecture, Stanford CS224v.
2022 Nov Guest lecture, NYU DS-GA 1011.
2022 Nov Guest lecture, JHU CSCI 601.771.
2022 Oct Talk, Amazon AWS AI Research.
2022 Feb Talk, Princeton NLP Group.
2022 Jan Stanford NLP Seminar.