Discussion about this post

Gavin Brown

If we DP fine-tune a foundation model (trained without formal privacy guarantees), I agree there may be a disconnect between the actual privacy guarantees and what people expect. However, I think that the DP guarantee is still meaningful.

I often think of it like a survey. I come to your door and ask to use your blog posts for my fine-tuning. You say, "I'm worried about an adversary using the fine-tuned model to learn my private information." I tell you, "Don't worry, I'm using differential privacy. The final model won't be significantly easier to attack if you contribute."

That guarantee is still preserved.
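To make that door-step promise concrete: it corresponds to the standard (ε, δ)-DP definition, sketched below under the assumption that M denotes the fine-tuning procedure and the two datasets differ only in one person's contribution (e.g., your blog posts).

```latex
% (epsilon, delta)-differential privacy: for any neighboring
% datasets D, D' differing in one person's data, and any set S
% of possible fine-tuned models,
\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \,\Pr[M(D') \in S] + \delta
```

The inequality holds regardless of how the foundation model was pretrained, which is why the guarantee about *your* contribution to fine-tuning survives even without formal privacy in pretraining.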
