As frontier artificial intelligence (AI) models—that is, models that match or exceed the capabilities of the most advanced models at the time of their development—become more capable, protecting them from theft and misuse will become more important. The authors of this report explore what it would take to protect model weights—the learnable parameters that encode the core intelligence of an AI—from theft by a variety of potential attackers.