2 papers across 2 sessions
To boost both safety and innovation, regulators should mandate that large AI laboratories release small, openly accessible "analog models"—scaled-down versions trained similarly to and distilled from their largest proprietary models.
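The proposal hinges on distillation of a large proprietary model into a small, releasable one. Below is a minimal sketch, assuming a PyTorch setup; the `teacher`/`student` models, temperature, and training-step structure are illustrative assumptions, not details taken from the paper.

```python
# Minimal knowledge-distillation sketch (illustrative only): a small "analog"
# student is trained to match the output distribution of a large teacher.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2

def train_step(student, teacher, batch, optimizer, temperature=2.0):
    # Hypothetical step: `teacher` is the large proprietary model (frozen),
    # `student` is the small, openly released analog model.
    with torch.no_grad():
        teacher_logits = teacher(batch)
    student_logits = student(batch)
    loss = distillation_loss(student_logits, teacher_logits, temperature)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```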
We provide a decentralized framework that lets collaborators run large-model computation (training and inference) without any single collaborator gaining full access to the model.
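One simple way such a guarantee can arise is layer sharding: each party holds only a slice of the model and exchanges activations rather than weights. The sketch below illustrates that general idea only; it is not the paper's actual protocol, and the `Collaborator` class, shard sizes, and toy MLP are assumptions made for the example.

```python
# Illustrative layer-sharding sketch (not the paper's protocol): each
# collaborator holds only a contiguous slice of the model's layers, so no
# single party ever has the full set of weights; only activations are shared.
import torch
import torch.nn as nn

class Collaborator:
    """Holds one shard of layers and exposes activations, never weights."""
    def __init__(self, layers):
        self.shard = nn.Sequential(*layers)

    @torch.no_grad()
    def forward(self, activations):
        return self.shard(activations)

def build_collaborators(all_layers, num_parties):
    """Split the full layer list into per-party shards."""
    per_party = len(all_layers) // num_parties
    return [Collaborator(all_layers[i * per_party:(i + 1) * per_party])
            for i in range(num_parties)]

# Toy example: a 6-layer linear stack split across 3 collaborators.
layers = [nn.Linear(16, 16) for _ in range(6)]
parties = build_collaborators(layers, num_parties=3)

x = torch.randn(1, 16)
for party in parties:
    x = party.forward(x)   # only activations cross the trust boundary
print(x.shape)
```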