A neural circuit mechanism for context-dependent selection via population dynamics
Pavel Tolmachev, Christopher Langdon, Tatiana Engel
Princeton University

A Quadratic Synchronization Rule for Distributed Deep Learning
Xinran Gu*1, Kaifeng Lyu*2, Sanjeev Arora2, Longbo Huang1, Jingzhao Zhang1
1Tsinghua University, 2Princeton University

Adapting Language Models to Compress Contexts
Alexis Chevalier*, Alexander Wettig*, Anirudh Ajith, Danqi Chen
Princeton Language and Intelligence

Catastrophic Jailbreak of Open-source LLMs via Exploiting Generation
Yangsibo Huang, Samyak Gupta, Mengzhou Xia, Kai Li, Danqi Chen
Princeton University

CRISP: Curriculum-based Sequential Neural Decoders for Polar code family
S. Ashwin Hebbar*, Viraj Nadkarni*, Ashok Vardhan Makkuva, Suma Bhat, Sewoong Oh, Pramod Viswanath
Princeton University, EPFL, University of Washington

Deep speech-to-text models capture the neural basis of spontaneous speech
Haocheng Wang1*, Leonard Niekerken1*, Zaid Zada1, Bobbi Aubrey1, Tom Sheffer2, Samuel A. Nastase1, Harshvardhan Gazula1, Mariano Schain2, Aditi Singh1, Aditi Rao1, Gina Choe1, Catherine Kim1, Werner Doyle3, Daniel Friedman3, Sasha Devore3, Patricia Dugan3, Avinatan Hassidim2, Michael Brenner2, Yossi Matias2, Orrin Devinsky3, Adeen Flinker3, Ariel Goldstein1,2, Uri Hasson1
1Department of Psychology and the Neuroscience Institute, Princeton University, Princeton, NJ, USA, 2Google Research, Mountain View, CA, USA, 3New York University Grossman School of Medicine, New York, NY, USA

Detecting Pretraining Data from Large Language Models
Weijia Shi2*, Anirudh Ajith1*, Mengzhou Xia1, Yangsibo Huang1, Daogao Liu2, Terra Blevins2, Danqi Chen1, Luke Zettlemoyer2
1Princeton University, 2University of Washington

Discrete real-time learning of quantum state subspace evolution of many-body systems in the presence of time-dependent control fields
Shaojun Gui, Tak-San Ho, Herschel Rabitz
Princeton University

DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer
Junyuan Hong1, Jiachen Wang2, Chenhui Zhang3, Zhangheng Li1, Bo Li4, Zhangyang Wang1
1University of Texas at Austin, 2Princeton University, 3Massachusetts Institute of Technology, 4University of Chicago; {jyhong, zoharli, atlaswang}@utexas.edu

Evaluating Large Language Models at Evaluating Instruction Following 
Zhiyuan Zeng1, Jiatong Yu2, Tianyu Gao2, Yu Meng3, Tanya Goyal2, Danqi Chen2
1Tsinghua University 2Princeton University 3University of Illinois Urbana-Champaign

Having Your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest
Basileal Imana1, Aleksandra Korolova1 and John Heidemann2
1Princeton University, 2USC/Information Sciences Institute

Identifying latent states in decision-making from cortical inactivation data
Z. Mohammadi1, Z. C. Ashwood1, L. Pinto2, D. W. Tank1, C. D. Brody1, J. W. Pillow1
1Princeton University, Princeton, NJ; 2Northwestern University, Chicago, IL

Learning to Reason Over Visual Objects
Shanka Subhra Mondal*1, Taylor Webb*2, Jonathan Cohen3
1 Princeton Electrical and Computer Engineering, 2 UCLA Psychology, 3 Princeton Neuroscience Institute; * Equal contribution

Mechanistic basis of data dependence and abrupt learning in an in-context classification task
Gautam Reddy
Physics & Informatics Labs, NTT Research Inc., Center for Brain Science, Harvard University, Department of Physics, Princeton University

Modeling state-dependent communication between brain regions with switching nonlinear dynamical systems
Orren Karniol-Tambour1, David M. Zoltowski1, E. Mika Diamanti1, Lucas Pinto2, Carlos D. Brody1,3, David W. Tank1, Jonathan W. Pillow1
1Princeton Neuroscience Institute, 2Northwestern University, 3HHMI

Privacy-Preserving In-Context Learning for Large Language Models
Tong Wu*, Ashwinee Panda*, Jiachen T. Wang*, Prateek Mittal
Princeton University

Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning
Mengzhou Xia1, Tianyu Gao1, Zhiyuan Zeng2, Danqi Chen1
1Princeton University, 2Tsinghua University

SKILL-MIX: A Flexible and Expandable Family of Evaluations for AI Models
Dingli Yu1, Simran Kaur1, Arushi Gupta1, Jonah Brown-Cohen2, Anirudh Goyal2, Sanjeev Arora1
1Princeton Language and Intelligence (PLI), Princeton University 2Google DeepMind

Task-Specific Skill Localization in Fine-tuned Language Models
Abhishek Panigrahi1*, Nikunj Saunshi2*, Haoyu Zhao1, Sanjeev Arora1
1Princeton University, 2Google NYC (work done at Princeton University)

The Shaped Transformer: Attention Models in the Infinite Depth-and-Width Limit at Initialization
Lorenzo Noci*1, Chuning Li2, Mufan (Bill) Li1, Bobby He3, Thomas Hofmann1, Chris Maddison2, Daniel Roy2
1ETH Zurich; 2Vector Institute, University of Toronto; 3Oxford University

Using autoencoders with big equations rather than big data: Compression of large dynamical equations to explicit smaller dynamical equations
Yiyou Chen, Herschel Rabitz
Princeton University