

Poster

Stealing part of a production language model

Nicholas Carlini · Krishnamurthy Dvijotham · Milad Nasresfahani · A. Feder Cooper · Katherine Lee · Matthew Jagielski · Thomas Steinke · Daniel Paleka · Jonathan Hayase · Arthur Conmy · David Rolnick · Florian Tramer · Eric Wallace


Abstract: We introduce the first model-stealing attack that extracts precise, nontrivial information from production language models like OpenAI's ChatGPT or Google's PaLM-2. Specifically, our attack recovers the embedding projection layer (up to symmetries) of a transformer model, given typical API access. For under $20 USD, our attack extracts the entire projection matrix of OpenAI's Ada and Babbage models, which accounts for 13% and 7% of these models' parameters, respectively. We thereby confirm, for the first time, that these black-box models have a hidden dimension of 1024 and 2048, respectively. We also recover the exact hidden dimension size of the gpt-3.5-turbo model, and estimate it would cost under $2,000 to recover the entire embedding matrix. We conclude with potential defenses and mitigations, and discuss the implications of possible future work that could extend our attack.
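The abstract's core observation is that logit vectors returned through the API all lie in a subspace whose dimension equals the model's hidden size, so stacking many of them and inspecting the numerical rank reveals that hidden dimension, and the dominant singular directions recover the embedding projection layer up to a linear transformation (the "symmetries" mentioned above). The snippet below is a minimal, hypothetical sketch of that rank argument on simulated data; it assumes direct access to complete logit vectors (a simplification of real API constraints), and all names and sizes are invented for illustration rather than taken from the paper's actual attack code.

```python
import numpy as np


def estimate_hidden_dimension(logit_matrix: np.ndarray, tol: float = 1e-4) -> int:
    """Estimate the hidden dimension as the numerical rank of stacked logit vectors.

    Each row of `logit_matrix` is one full logit vector (vocabulary-sized) obtained
    from a query. Since every logit vector is a hidden state (dimension h) multiplied
    by the projection matrix (vocab x h), the stack has rank at most h.
    """
    singular_values = np.linalg.svd(logit_matrix, compute_uv=False)
    # Count singular values that are "large" relative to the biggest one.
    return int(np.sum(singular_values > tol * singular_values[0]))


# Toy demonstration with a simulated model (sizes kept small for speed).
rng = np.random.default_rng(0)
hidden_dim, vocab_size, n_queries = 64, 1000, 200
W = rng.normal(size=(vocab_size, hidden_dim))   # stand-in projection layer
H = rng.normal(size=(n_queries, hidden_dim))    # stand-in hidden states, one per query
Q = H @ W.T                                     # "observed" logit vectors stacked row-wise

print(estimate_hidden_dimension(Q))             # prints 64
```

In the same spirit, an SVD of the stacked logits yields a factor that spans the column space of the projection matrix, which is why the layer can only be recovered up to an invertible linear transformation without additional information.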
