Summary
Elizabeth Seger is the Director of Technology Policy at Demos, a cross-party UK think tank with a program on trustworthy AI.
You can find links and a transcript at www.hearthisidea.com/episodes/seger. In this episode we talked about the risks and benefits of open source AI models. We talk about:
- What ‘open source’ really means
- What is (and isn’t) open about ‘open source’ AI models
- How open source weights and code are useful for AI safety research
- How and when the costs of open sourcing frontier model weights might outweigh the benefits
- Analogies to ‘open sourcing nuclear designs’ and the open science movement
You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
Note that this episode was recorded before the release of Meta’s Llama 3.1 family of models. Note also that in the episode Elizabeth referenced an older draft of the open source AI definition maintained by the Open Source Initiative (OSI), roughly version 0.0.3. The current OSI draft (version 0.0.8) does a much better job of distinguishing between different model components.