Fri, Oct 17, 2025

2 PM – 3:30 PM EDT (GMT-4)

Online Event

Details

Abstract: The rapid evolution of LLMs and the ML field has ushered in remarkable progress, but also a new wave of security threats. Model poisoning, supply chain vulnerabilities, and the challenge of verifying model and data provenance are just a few of the risks we face.

We show how these attacks can affect both everyday users of models and model providers, and how strengthening the model supply chain can mitigate most of them. In particular, we have developed an efficient solution for signing models with Sigstore at scale. We are currently integrating it into model hubs and building further protections on top of it.
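
As a rough illustration of the core idea only, not the speakers' implementation: model signing boils down to hashing every file in a model directory into a manifest, then signing that single manifest with Sigstore's keyless flow. The sketch below assumes Python 3.11+ and the sigstore CLI (pip install sigstore); "my-model/" is a placeholder path.

```python
# Sketch: hash every file in a model directory into a manifest,
# then sign the manifest with Sigstore (keyless, OIDC-based).
import hashlib
import json
import pathlib
import subprocess


def build_manifest(model_dir: str) -> dict[str, str]:
    """Map each file's path (relative to the model root) to its SHA-256 digest."""
    root = pathlib.Path(model_dir)
    manifest = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            with path.open("rb") as f:
                # file_digest streams the file, so multi-GB weights are fine.
                manifest[str(path.relative_to(root))] = hashlib.file_digest(
                    f, "sha256"
                ).hexdigest()
    return manifest


manifest = build_manifest("my-model")  # placeholder model directory
pathlib.Path("manifest.json").write_text(json.dumps(manifest, indent=2))

# Opens a browser-based OIDC flow and writes a Sigstore bundle
# next to the manifest (manifest.json.sigstore.json).
subprocess.run(["sigstore", "sign", "manifest.json"], check=True)
```

Signing one manifest instead of each weight file keeps signing and verification cost roughly independent of model size, which is what makes this approach workable at hub scale.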

The talk is structured as a sequence of paired "attack" and "defense" live demos, with slides providing additional context and detail. Attendees will learn about the benefits of model signing, the challenges of large-scale platform integration, and best practices for securing ML workflows. By sharing actionable insights, we aim to empower other model hubs to adopt similar solutions: widespread adoption, protecting the integrity of ML models across the ecosystem, will prevent a significant number of ML supply chain incidents.
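
In the same illustrative spirit, here is the consumer-side counterpart to the sketch above: first verify who signed the manifest, then check every file on disk against its signed digest before loading the model. The identity and issuer values are hypothetical placeholders for whatever identity actually publishes the model.

```python
# Sketch: verify the manifest's Sigstore bundle, then confirm each
# model file still matches its signed digest.
import hashlib
import json
import pathlib
import subprocess

# Fails (non-zero exit) unless the bundle was signed by the expected identity.
subprocess.run(
    [
        "sigstore", "verify", "identity", "manifest.json",
        "--cert-identity", "release-bot@example.com",      # hypothetical signer
        "--cert-oidc-issuer", "https://accounts.google.com",
    ],
    check=True,
)

manifest = json.loads(pathlib.Path("manifest.json").read_text())
root = pathlib.Path("my-model")  # placeholder model directory
for rel_path, expected in manifest.items():
    with (root / rel_path).open("rb") as f:
        actual = hashlib.file_digest(f, "sha256").hexdigest()
    if actual != expected:
        raise RuntimeError(f"digest mismatch (possible tampering): {rel_path}")
# A real verifier would also flag files present on disk but absent from the manifest.
print("model contents match the signed manifest")
```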

Length: 45-minute talk + ~45-minute Q&A

Speakers

Peter Fackeldey

Mihai Maruseac

Hosted By

Research Computing
Co-hosted with: GradFUTURES
