
DeepSeek Debrief, Tokenomics, & Zuck's Big Bet on AI
About this content
In this episode, we dive into the high-stakes world of AI, starting with the impact of DeepSeek R1, the Chinese LLM that initially disrupted the market by undercutting leading models by over 90% on output token pricing. We'll explore a surprising shift in user behavior: despite that low price, DeepSeek's own web app and API service lost market share to third-party hosts. The shift highlights the crucial role of tokenomics, showing how a model's price per token is an output of key performance indicators such as latency, interactivity, and context window. DeepSeek deliberately trades these off, conserving compute for its AGI research goals rather than for the user experience.

Then we pivot to Meta's aggressive pursuit of superintelligence, a strategy spurred directly by losing its open-weight model lead to DeepSeek. Discover how Mark Zuckerberg is personally driving the effort, reinventing Meta's datacenter strategy to prioritize speed with new "Tents" and building multi-gigawatt AI training clusters such as Prometheus and Hyperion. We'll also unpack the technical missteps behind the "epic fail" of Meta's Llama 4 model, including issues with chunked attention, expert-choice routing, and data quality, and how Meta is closing its talent gap by offering unprecedented compensation to top AI researchers.
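The tokenomics point is easier to see with numbers. Below is a minimal Python sketch of the trade-off described above; every constant is a hypothetical assumption for illustration, not a figure from the episode or from DeepSeek. It models how packing more concurrent users onto the same serving hardware pushes cost per output token down while each user's interactivity (tokens per second) degrades.

# Toy tokenomics sketch. All numbers are hypothetical assumptions, not real
# DeepSeek or GPU figures. It illustrates the trade-off the episode describes:
# larger serving batches raise total throughput and cut cost per output token,
# but each individual user sees fewer tokens per second (worse interactivity).

GPU_HOUR_COST = 2.00      # assumed $/GPU-hour for the serving hardware
PEAK_USER_TPS = 100.0     # assumed tokens/sec one user gets with the GPU to themselves
SATURATION = 32           # assumed batch size where contention roughly halves per-user speed


def serving_point(batch_size: int) -> tuple[float, float]:
    """Return (per-user tokens/sec, $ per million output tokens) for a given batch size."""
    # Simple contention model: per-user decode speed falls as the batch grows.
    per_user_tps = PEAK_USER_TPS / (1 + batch_size / SATURATION)
    total_tps = per_user_tps * batch_size              # aggregate tokens/sec for the GPU
    cost_per_million = GPU_HOUR_COST / (total_tps * 3600) * 1_000_000
    return per_user_tps, cost_per_million


if __name__ == "__main__":
    for batch in (1, 8, 64, 256):
        tps, cost = serving_point(batch)
        print(f"batch={batch:>3}  per-user tok/s={tps:6.1f}  $/M output tokens={cost:6.2f}")

Under these assumed numbers, cost per million output tokens falls by more than an order of magnitude as the batch grows, while per-user speed collapses: exactly the lever a provider can pull when it values cheap tokens, or reclaimed compute, over a snappy serving experience.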