I’m currently a second-year master’s student in Symbolic Systems at Stanford University. My main area is Natural Language Processing, but I also work in other areas such as deep reinforcement learning, applied machine learning, and information theory.

I spent a year in Andrew Ng’s Stanford Machine Learning Group working on recurrent neural network regularization, data augmentation, and sequence-to-sequence generative models.

I recently accepted an offer to become a full-time engineering research scientist in the Stanford Electrical Engineering and Biomedical Data Science departments, working with James Zou. I also do consulting work at a newly founded startup (hey, they’re still in stealth!).

Anyway, if you have cool ideas or need advice, send me an email!

Contact me



  • From Information Bottleneck to Activation Norm Penalty (in review). Nie, A., Mongia, M., & Zou, J. (2017). International Conference on Learning Representations (ICLR). Vancouver, BC, Canada.
  • DisSent: Semi-Supervised Sentence Representation Learning with Discourse Prediction. Nie, A., Bennett, E.D., & Goodman, N.D. (2017). arXiv preprint.
  • Data Noising as Smoothing in Neural Network Sequence Models. Xie, Z., Wang, S., Li, J., Levy, D., Nie, A., Jurafsky, D., & Ng, A. (2017). 5th International Conference on Learning Representations (ICLR). Toulon, France. (Oral at BayLearn 2016)
  • Representations of Time Affect Willingness to Wait for Future Rewards. Thorstad, R., Nie, A., & Wolff, P. (2015). 37th Annual Conference of the Cognitive Science Society. Pasadena, California.
  • Computational Exploration to Linguistic Structures of Future, Classification and Categorization. Nie, A., Sheppard, J., Choi, J., Copley, B., & Wolff, P. (2015). Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). Denver, Colorado.



  • 2017 Winter: CS224N Natural Language Processing
  • 2017 Spring: CS224S Spoken Language Processing