Episodes

  • AI bubble, Sam Altman’s Manifesto and other fairy tales for billionaires (Ep. 272)
    2024/11/20

    Welcome to Data Science at Home, where we don’t just drink the AI Kool-Aid. Today, we’re dissecting Sam Altman’s “AI manifesto”—a magical journey where, apparently, AI will fix everything from climate change to your grandma's back pain. Superintelligence is “just a few thousand days away,” right? Sure, Sam, and my cat’s about to become a calculus tutor.

    In this episode, I’ll break down the bold (and often bizarre) claims in Altman’s grand speech for the Intelligence Age. I’ll give you the real scoop on what’s realistic, what’s nonsense, and why some tech billionaires just can’t resist overselling. Think AI’s all-knowing, all-powerful future is just around the corner? Let’s see if we can spot the fairy dust.

    Strap in, grab some popcorn, and get ready to see past the hype!

    Chapters

    00:00 - Intro

    00:18 - Baidu CEO's Statement on the AI Bubble

    03:47 - News on Sam Altman and OpenAI

    06:43 - Online Manifesto "The Intelligence Age"

    13:14 - Deep Learning

    16:26 - AI Gets Better With Scale

    17:45 - Conclusion on the Manifesto

    Still have popcorn? Get some laughs at https://ia.samaltman.com/

    #AIRealTalk #NoHypeZone #InvestorBaitAlert

    19 min
  • AI vs. The Planet: The Energy Crisis Behind the Chatbot Boom (Ep. 271)
    2024/11/13

    In this episode of Data Science at Home, we dive into the hidden costs of AI’s rapid growth, specifically its massive energy consumption. With tools like ChatGPT reaching 200 million weekly active users, the environmental impact of AI is becoming impossible to ignore. Each query, every training session, and every breakthrough comes with a price in kilowatt-hours, raising questions about AI’s sustainability.

    Join us as we uncover the staggering figures behind AI's energy demands and explore practical solutions for the future. From efficiency-focused algorithms and specialized hardware to decentralized learning, this episode examines how we can balance AI’s advancements with our planet's limits. Discover what steps we can take to harness the power of AI responsibly!
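    To make the kilowatt-hour framing concrete, here is a rough back-of-envelope sketch in Python (not from the episode). Only the 200 million weekly active users figure comes from the description above; the queries-per-user, watt-hours-per-query, and household-consumption numbers are illustrative assumptions, not measured values.

    ```python
    # Back-of-envelope sketch: rough weekly energy for chat-style queries.
    # Only the weekly-active-user count comes from the episode description;
    # every other parameter below is an illustrative assumption.
    WEEKLY_ACTIVE_USERS = 200_000_000     # cited in the episode description
    QUERIES_PER_USER_PER_WEEK = 15        # assumption
    WH_PER_QUERY = 0.3                    # assumption: ballpark energy for one LLM query

    queries_per_week = WEEKLY_ACTIVE_USERS * QUERIES_PER_USER_PER_WEEK
    energy_kwh = queries_per_week * WH_PER_QUERY / 1000   # Wh -> kWh
    household_weeks = energy_kwh / (30 * 7)               # assumption: ~30 kWh/day per household

    print(f"{queries_per_week:,} queries/week -> {energy_kwh:,.0f} kWh/week")
    print(f"~ the weekly consumption of {household_weeks:,.0f} households (under these assumptions)")
    ```

    Swap in your own estimates for the assumed parameters; the point is only that per-query costs multiply quickly at this scale.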

    Check our new YouTube channel at https://www.youtube.com/@DataScienceatHome

    Chapters

    00:00 - Intro

    01:25 - Findings on Summary Statistics

    05:15 - Energy Required to Query GPT

    07:20 - Energy Efficiency in Blockchain

    10:41 - Efficiency-Focused Algorithms

    14:02 - Hardware Optimization

    17:31 - Decentralized Learning

    18:38 - Edge Computing with Local Inference

    19:46 - Distributed Architectures

    21:46 - Outro

    #AIandEnergy #AIEnergyConsumption #SustainableAI #AIandEnvironment #DataScience #EfficientAI #DecentralizedLearning #GreenTech #EnergyEfficiency #MachineLearning #FutureOfAI #EcoFriendlyAI #FrancescoFrag #DataScienceAtHome #ResponsibleAI #EnvironmentalImpact

    22 min
  • Love, Loss, and Algorithms: The Dangerous Realism of AI (Ep. 270)
    2024/11/06

    Subscribe to our new channel https://www.youtube.com/@DataScienceatHome

    In this episode of Data Science at Home, we confront a tragic story highlighting the ethical and emotional complexities of AI technology. A U.S. teenager recently took his own life after developing a deep emotional attachment to an AI chatbot emulating a character from Game of Thrones. This devastating event has sparked urgent discussions on the mental health risks, ethical responsibilities, and potential regulations surrounding AI chatbots, especially as they become increasingly lifelike.

    🎙️ Topics Covered:

    AI & Emotional Attachment: How hyper-realistic AI chatbots can foster intense emotional bonds with users, especially vulnerable groups like adolescents.

    Mental Health Risks: The potential for AI to unintentionally contribute to mental health issues, and the challenges of diagnosing such impacts.

    Ethical & Legal Accountability: How companies like Character AI are being held accountable and the ethical questions raised by emotionally persuasive AI.

    🚨 Analogies Explored:

    From VR to CGI and deepfakes, we discuss how hyper-realism in AI parallels other immersive technologies and why its emotional impact can be particularly disorienting and even harmful.

    🛠️ Possible Mitigations:

    We cover potential solutions like age verification, content monitoring, transparency in AI design, and ethical audits that could mitigate some of the risks involved with hyper-realistic AI interactions.

    👀 Key Takeaways:

    As AI becomes more realistic, it brings both immense potential and serious responsibility. Join us as we dive into the ethical landscape of AI, analyzing how we can ensure this technology enriches human lives without crossing lines that could harm us emotionally and psychologically. Stay curious, stay critical, and make sure to subscribe for more no-nonsense tech talk!

    Chapters

    00:00 - Intro

    02:21 - Emotions In Artificial Intelligence

    04:00 - Unregulated Influence and Misleading Interaction

    06:32 - Overwhelming Realism In AI

    10:54 - Virtual Reality

    13:25 - Hyper-Realistic CGI Movies

    15:38 - Deepfake Technology

    18:11 - Regulations To Mitigate AI Risks

    22:50 - Conclusion

    #AI #ArtificialIntelligence #MentalHealth #AIEthics #podcast #AIRegulation #EmotionalAI #HyperRealisticAI #TechTalk #AIChatbots #Deepfakes #VirtualReality #TechEthics #DataScience #AIDiscussion #StayCuriousStayCritical

    24 min
  • VC Advice Exposed: When Investors Don’t Know What They Want (Ep. 269)
    2024/10/28

    Ever feel like VC advice is all over the place? That’s because it is. In this episode, I expose the madness behind the money and show you how to navigate the confusing advice!

    Watch the video at https://youtu.be/IBrPFyRMG1Q

    Subscribe to our new YouTube channel https://www.youtube.com/@DataScienceatHome

    00:00 - Introduction

    00:16 - The Wild World of VC Advice

    02:01 - Grow Fast vs. Grow Slow

    05:00 - Listen to Customers or Innovate Ahead

    09:51 - Raise Big or Stay Lean?

    11:32 - Sell Your Vision in Minutes?

    14:20 - The Real VC Secret: Focus on Your Team and Vision

    17:03 - Outro

    18 min
  • AI Says It Can Compress Better Than FLAC?! Hold My Entropy 🍿 (Ep. 268)
    2024/10/21

    Can AI really out-compress PNG and FLAC? 🤔 Or is it just another overhyped tech myth? In this episode of Data Science at Home, Frag dives deep into the wild claims that Large Language Models (LLMs) like Chinchilla 70B are beating traditional lossless compression algorithms. 🧠💥

    But before you toss out your FLAC collection, let's break down Shannon's Source Coding Theorem and why entropy sets the ultimate limit on lossless compression.

    We explore:

    ⚙️ How LLMs leverage probabilistic patterns for compression

    📉 Why compression efficiency doesn’t equal general intelligence

    🚀 The practical (and ridiculous) challenges of using AI for compression

    💡 Can AI actually BREAK Shannon’s limit, or is it just an illusion?
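    Since the whole argument rests on Shannon's source coding theorem, here is a minimal Python sketch (not from the episode) that estimates the order-0 byte entropy of a file and compares that naive lower bound with what a standard lossless compressor (zlib/DEFLATE) achieves. The file name sample.bin is a hypothetical placeholder, and the per-byte bound only applies to a memoryless source; real compressors, and the LLM-based coders discussed in the episode, exploit longer-range structure to do better than this naive estimate.

    ```python
    # Minimal sketch: order-0 Shannon entropy as a lower bound on lossless compression.
    # The bound below assumes an i.i.d. byte source, which real data (audio, text) is not.
    import math
    import zlib
    from collections import Counter

    def order0_entropy_bits_per_byte(data: bytes) -> float:
        counts = Counter(data)
        n = len(data)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    with open("sample.bin", "rb") as f:   # hypothetical input file
        data = f.read()

    h = order0_entropy_bits_per_byte(data)
    entropy_bound_bytes = h * len(data) / 8            # bound under the i.i.d. assumption
    deflate_bytes = len(zlib.compress(data, level=9))  # a standard lossless baseline

    print(f"order-0 entropy: {h:.3f} bits/byte")
    print(f"entropy bound:   {entropy_bound_bytes:,.0f} bytes")
    print(f"zlib (DEFLATE):  {deflate_bytes:,} bytes of {len(data):,}")
    ```

    If a coder (LLM-based or otherwise) lands below the order-0 bound, it is not breaking Shannon's limit; it is modelling the dependencies the i.i.d. estimate ignores.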

    If you love AI, algorithms, or just enjoy some good old myth-busting, this one’s for you. Don't forget to hit subscribe for more no-nonsense takes on AI, and join the conversation on Discord!

    Let’s decode the truth together. Join the discussion on the new Discord channel of the podcast https://discord.gg/4UNKGf3

    Don't forget to subscribe to our new YouTube channel

    https://www.youtube.com/@DataScienceatHome

    References

    Have you met Shannon? https://datascienceathome.com/have-you-met-shannon-conversation-with-jimmy-soni-and-rob-goodman-about-one-of-the-greatest-minds-in-history/

    21 min
  • What Big Tech Isn’t Telling You About AI (Ep. 267)
    2024/10/12

    Are AI giants really building trustworthy systems? A groundbreaking transparency report by Stanford, MIT, and Princeton says no. In this episode, we expose the shocking lack of transparency in AI development and how it impacts bias, safety, and trust in the technology. We’ll break down Gary Marcus’s demands for more openness and what consumers should know about the AI products shaping their lives.

    Check our new YouTube channel https://www.youtube.com/@DataScienceatHome and Subscribe!

    Cool links

    1. https://mitpress.mit.edu/9780262551069/taming-silicon-valley/
    2. http://garymarcus.com/index.html
    19 min
  • Money, Cryptocurrencies, and AI: Exploring the Future of Finance with Chris Skinner [RB] (Ep. 266)
    2024/10/08

    We're revisiting one of our most popular episodes from last year, where renowned financial expert Chris Skinner explores the future of money. In this fascinating discussion, Skinner dives deep into cryptocurrencies, digital currencies, AI, and even the metaverse. He touches on government regulations, the role of tech in finance, and what these innovations mean for humanity.

    Now, one year later, we encourage you to listen again and reflect—how much has changed? Are Chris Skinner's predictions still holding up, or has the financial landscape evolved in unexpected ways? Tune in and find out!

    41 min
  • Kaggle Kommando’s Data Disco: Laughing our Way Through AI Trends (Ep. 265) [RB]
    2024/10/01

    In this episode, join me and the Kaggle Grand Master, Konrad Banachewicz, for a hilarious journey into the zany world of data science trends. From algorithm acrobatics to AI, creativity, Hollywood movies, and music, we just can't get enough. It's the typical episode with a dose of nerdy comedy you didn't know you needed. Buckle up, it's a data disco, and we're breaking down the binary!

    Sponsors
    • Intrepid AI is an AI-assisted all-in-one platform for robotics teams. Build robotics applications in minutes, not months.
    • Learn what the new year holds for ransomware as a service, Active Directory, artificial intelligence and more when you download the 2024 Arctic Wolf Labs Predictions Report today at arcticwolf.com/datascience

    🔗 Links Mentioned in the Episode:

    1. Generative AI for time series: TimeGPT Documentation
    2. Lag-llama: GitHub (Note: The benchmark results on this one are pretty horrible)
    3. Open source LLM: Olmo Blog Post
    4. Quantization for LLM: Hugging Face Guide

    And finally, don't miss Konrad's Substack for more nerdy goodness! (If you're there already, be there again! 😄)

    43 min