How Numenta Builds Neural Networks Inspired by Sparsity in the Human Brain
- 2022/06/21
- Duration: 25 minutes
- Podcast
Summary
Our brains use only about 30-40 watts of power, yet they are more capable than artificial neural networks that consume enormous amounts of energy. So what can we learn from the brain to help us build better neural networks? Join Michael McCourt as he interviews Subutai Ahmad, VP of Research at Numenta, about his latest work.
In this episode, they discuss sparsity, bioinspiration, and how Numenta is using SigOpt to help them build better neural networks and save on training costs.
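Sparsity here means that only a small fraction of a layer's units are active at any one time. As a rough illustration of the idea (a minimal sketch, not Numenta's actual code), a k-winner-take-all step like the one described in Numenta's published work can enforce activation sparsity; the function name and the 90% sparsity level below are illustrative assumptions:

```python
import numpy as np

def k_winners(x, sparsity=0.9):
    """Keep only the top-k activations in x and zero out the rest.

    Illustrative sketch: with sparsity=0.9, only 10% of units stay active,
    so downstream computation can skip the zeroed units entirely.
    """
    k = max(1, int(round((1.0 - sparsity) * x.size)))
    out = np.zeros_like(x)
    top = np.argpartition(x, -k)[-k:]  # indices of the k largest activations
    out[top] = x[top]
    return out

activations = np.random.randn(100)
sparse = k_winners(activations, sparsity=0.9)
print(np.count_nonzero(sparse))  # ~10 active units out of 100
```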
1:31 - Background on Numenta
2:31 - Bioinspiration
3:47 - Numenta's three research areas
4:06 - What is sparsity and how does it function in the brain?
7:15 - Training costs, Moore's Law, and how deep learning systems are on a different curve
9:58 - Mismatch between hardware and algorithms today in deep learning
11:04 - Improving energy usage and speed with sparse networks
14:10 - Sparse networks call for different hyperparameter regimes than dense networks
14:18 - How Numenta uses SigOpt Multimetric optimization
15:48 - How Numenta uses SigOpt Multitask to constrain costs (both SigOpt features are sketched after this list)
18:06 - How Numenta chose their hyperparameters
19:40 - What's next from Numenta
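For context on the two SigOpt features named in the chapter list above, here is a hedged sketch using SigOpt's classic Python API (sigopt.Connection). The experiment names, metric names, parameter ranges, and task costs are illustrative assumptions, not details from the episode:

```python
from sigopt import Connection

conn = Connection(client_token="YOUR_API_TOKEN")  # placeholder token

# Multimetric: optimize two objectives at once, producing a Pareto
# frontier of trade-offs rather than a single best configuration.
multimetric = conn.experiments().create(
    name="sparse-net-multimetric",  # illustrative name
    metrics=[
        dict(name="accuracy", objective="maximize"),
        dict(name="inference_speed", objective="maximize"),
    ],
    parameters=[
        dict(name="learning_rate", type="double", bounds=dict(min=1e-4, max=1e-1)),
        dict(name="weight_sparsity", type="double", bounds=dict(min=0.5, max=0.95)),
    ],
    observation_budget=100,
)

# Multitask: expose cheaper partial-training runs (cost < 1) so the
# optimizer can explore broadly before spending full-cost evaluations.
multitask = conn.experiments().create(
    name="sparse-net-multitask",  # illustrative name
    metrics=[dict(name="accuracy", objective="maximize")],
    parameters=[
        dict(name="learning_rate", type="double", bounds=dict(min=1e-4, max=1e-1)),
    ],
    tasks=[
        dict(name="one_epoch", cost=0.1),
        dict(name="full_training", cost=1.0),
    ],
    observation_budget=100,
)
```

The multitask setup matches the episode's cost-saving theme: most of the tuning budget goes to short, cheap training runs, reserving full-length training for the most promising configurations.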
Learn more about Numenta at numenta.com and follow them on YouTube at www.youtube.com/c/NumentaTheory
Read Jeff Hawkins' book, A Thousand Brains: A New Theory of Intelligence
Learn more about SigOpt at sigopt.com and follow us on Twitter at twitter.com/sigopt
Subscribe to our YouTube channel to watch Experiment Exchange interviews at www.youtube.com/channel/sigopt