As frontier artificial intelligence (AI) models—that is, models that match or exceed the capabilities of the most advanced models at the time of their development—become more capable, protecting them from theft and misuse will become more important. The authors of this report explore what it would take to protect model weights—the learnable parameters that encode the core intelligence of an AI—from theft by a variety of potential attackers.