Enough About AI

By: Enough About AI

About this content

Enough about the key tech topic of our time for you to feel a bit more confident and informed. Dónal & Ciarán discuss AI, and keep you up to date.

© 2025 Enough About AI
Politics & Government
Episodes
  • Alignment Anxieties & Persuasion Problems
    2025/05/13

    Dónal and Ciarán continue the 2025 season with a second quarterly update that looks at some recent themes in AI development. They're pondering doom again, as we increasingly grapple with the evidence that AI systems are powerfully persuasive and full of flattery, even as our ability to meaningfully supervise them seems to be diminishing.

    Topics in this episode

    • Can we see how reasoning models reason? If an AI is thinking or sharing information in something other than human language, how can we check that it's aligned with our values?
    • This interpretability issue is tied to the concept of neuralese - inscrutable machine thoughts!
    • We discuss the predictions and prophetic doom visions of the AI-2027 document
    • Increasing ubiquity and sometimes invisibility of AI, as it's inserted into other products. Is this more enshittification?
    • AI is becoming a persuasion machine - we look at the recent issues on Reddit's r/ChangeMyView, where researchers skipped good ethics practice but ended up with worrying results
    • We talk about flattery, manipulation, and Eliezer Yudkowsky's AI-Box thought experiment

    Resources & Links

    • The AI-2027 piece, from Daniel Kokotajlo et al., is a must-read!
    • Dario Amodei's latest essay, The Urgency of Interpretability
    • T.O.P.I.C. - A detailed referencing model for indicating the use of GenAI Tools in academic assignments.
    • Yudkowsky's AI-box Experiment, described on his site.
    • "The Worst Internet-Research Ethics Violation I Have Ever Seen" - coverage of the University of Zurich / Reddit study, by Tom Bartlett for The Atlantic
    • ChatGPT wants us to buy things via our AI conversations (reported by Reece Rogers, for Wired)

    You can get in touch with us - hello@enoughaboutai.com - where we'd love to hear your questions, comments or suggestions!

    47 min
  • Political Upheaval & Reasoning Models
    2025/02/24

    Dónal and Ciarán start a new season for 2025, with a slight change in format to bring you roughly quarterly updates on the themes and topics required to help you know enough about AI. This first episode of 2025, recorded in mid February, gives a summary of what's happened since last November and answers some of your submitted questions. (Thanks for those!)

    Topics in this episode

    • We can't avoid talking about them: Musk & Trump and some of the effects they've both had on the AI space in the last three months
    • Regulatory Developments, and the EU's AI Act starting to come into force
    • US-EU relations, and the continued innovation vs. regulation chatter
    • DeepSeek and China's bold entry into the contemporary AI model space
    • What are Reasoning Models and how do they work? (See the short sketch after this list.)
    • Some history and concepts related to machine logic
    • AI Benchmarks - a quick primer as we move closer to "Humanity's Last Exam"
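
    Since the reasoning-models bullet above is the one "how does it work" item in this list, here is a purely illustrative Python sketch of the idea as it is usually described: such models spend extra inference-time compute drafting intermediate reasoning tokens, then condition the visible answer on that trace. Every function name below is a hypothetical stand-in, not any vendor's API.

    ```python
    # Conceptual sketch only: a "reasoning model" first drafts intermediate
    # reasoning steps (often hidden from the user), then conditions its final
    # answer on that trace. All functions here are hypothetical stand-ins.

    def draft_reasoning(question: str, budget: int = 3) -> list[str]:
        """Stand-in for the model emitting intermediate reasoning tokens."""
        return [f"step {i + 1}: break '{question}' down a bit further" for i in range(budget)]

    def final_answer(question: str, trace: list[str]) -> str:
        """Stand-in for the visible answer, conditioned on the reasoning trace."""
        return f"answer to '{question}' (after {len(trace)} hidden reasoning steps)"

    def reasoning_model(question: str, budget: int = 3) -> str:
        trace = draft_reasoning(question, budget)  # usually not shown to the user
        return final_answer(question, trace)       # only this part is shown

    if __name__ == "__main__":
        print(reasoning_model("How many primes are below 20?"))
    ```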

    Resources & Links

    • JD Vance's Speech from the Paris AI Conference is here (The American Presidency Project)
    • Details on HLE (Humanity's Last Exam) here, including some sample questions. Let us know how you did!
    • Details on the ARC-AGI here.
    • A graph of OpenAI model performance (and discussion) is here (80,000 hours)

    You can get in touch with us - hello@enoughaboutai.com - where we'd love to hear your questions, comments or suggestions!

    56 min
  • 06 Doom of Humanity?
    2024/11/26

    Dónal and Ciarán discuss the ways - both real and imagined in fiction - that AI could bring about civilization-ending doom for us all. What can we learn from how sci-fi has treated this topic? What are the distant and nearer potential dooms, and what can we do now, apart from saying thanks to ChatGPT? Oh, and note that listening to this episode may drastically affect your life and cause a future powerful AI to punish you in a psychic prison!

    Topics in this episode

    • What is p(Doom) and why are we hearing about it from AI researchers and investors?
    • How has AI doom been dealt with in sci-fi, and can this teach us anything useful?
    • What is Dead Internet Theory and why might AI contribute to the enshittification of the internet?
    • Why has the religious concept of Pascal's Wager found a new form in AI discussions that started on internet forums? (The wager's expected-value form is sketched just after this list.)
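
    Because the wager comes up in the episode, here is its standard expected-value rendering, written out as an editorial gloss in LaTeX rather than the hosts' own notation; the "tiny probability times unbounded payoff" structure is exactly what resurfaces in the forum versions of the AI argument.

    ```latex
    % Pascal's Wager as an expected-value comparison (standard textbook form).
    % p is the (possibly tiny) probability that the high-stakes outcome is real;
    % c_wager and c_abstain are the ordinary finite payoffs of each choice.
    % The argument trades on the unbounded term dominating for any p > 0.
    \[
    \mathbb{E}[\text{wager}]   = p \cdot (+\infty) + (1 - p)\, c_{\text{wager}},
    \qquad
    \mathbb{E}[\text{abstain}] = p \cdot (-\infty) + (1 - p)\, c_{\text{abstain}}.
    \]
    ```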

    Resources & Links

    • More on the history of p(Doom) on Wikipedia here.
    • An interesting article on Dead Internet Theory & AI: Walter, Y., "Artificial influencers and the dead internet theory", AI & Society (2024).
    • Read about Roko's Basilisk (if you dare)
    • More on Roko's Basilisk on the LessWrong forum, where the thought experiment emerged in 2010

    You can get in touch with us - hello@enoughaboutai.com - where we'd love to hear your questions, comments or suggestions!

    44 min
