• Futurist Gerd Leonhard - AGI By 2030? Think Again!

  • 2024/07/22
  • Runtime: 41 min
  • Podcast

Futurist Gerd Leonhard - AGI By 2030? Think Again!

  • Summary

  • This is the full version of my special livestreamed event on Artificial General Intelligence (AGI), held on July 18 and 19, 2024. You can watch it on YouTube here: https://www.youtube.com/watch?v=W3dRQ7QZ_wc. Watch the edited (Q&A) version with @LondonFuturists David Wood on YouTube here: https://www.youtube.com/watch?v=yYyTIky2MLc&t=0s. In this special livestreamed event I outlined my arguments that while IA (Intelligent Assistance) and some forms of narrow AI may well be quite beneficial to humanity, the idea of building AGIs, i.e. 'generally intelligent digital entities' (as set forth by Sam Altman / #openai and others), represents an existential risk that, IMHO, should not be undertaken or self-governed by private enterprises, multinational corporations, or venture-capital-funded startups. I believe we need an AGI Non-Proliferation Agreement. I outline what the difference between IA/AI and AGI or ASI (superintelligence) is, why it matters, and how we could go about it. IA/AI: yes, but with clear rules, standards, and guardrails. AGI: no, unless we're all on the same page. Who will be Mission Control for humanity?

Listener reviews of Futurist Gerd Leonhard - AGI By 2030? Think Again!
