  • 2024/11/24
  • Duration: 20 min
  • Podcast

Liquid AI: Redefining AI with Liquid Foundation Models

  • Summary

  • Liquid AI, an MIT spin-off, has launched its first series of generative AI models, called Liquid Foundation Models (LFMs). These models are built on a fundamentally new architecture based on liquid neural networks (LNNs), which differs from the transformer architecture underpinning most current generative AI applications.

    Instead of transformers, LFMs use "computational units deeply rooted in the theory of dynamical systems, signal processing, and numerical linear algebra". This makes them more adaptable and efficient, able to process inputs of up to 1 million tokens while keeping memory usage to a minimum.
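
    To make the dynamical-systems idea concrete, the sketch below implements a single liquid time-constant (LTC) cell of the kind described in the earlier liquid neural network research that Liquid AI grew out of. Liquid has not published the exact computational units inside LFMs, so the update rule and all parameter names here come from that prior LTC work, not from LFMs themselves.

      import numpy as np

      def ltc_step(x, u, W_h, W_u, b, tau, A, dt=0.1):
          """One semi-implicit Euler update of a liquid time-constant (LTC) cell.

          x: hidden state (n,); u: input (m,); tau: per-neuron time constants (n,);
          A: per-neuron equilibrium terms (n,). From the LTC literature, not LFMs.
          """
          # The nonlinearity modulates both each neuron's decay rate and its
          # attractor -- this input-dependent "liquid" time constant is what
          # lets the cell adapt its dynamics to the data it sees.
          f = np.tanh(W_h @ x + W_u @ u + b)
          # Fused semi-implicit Euler step:
          # x' = (x + dt * f * A) / (1 + dt * (1/tau + f))
          return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

      rng = np.random.default_rng(0)
      n, m = 8, 3
      x = np.zeros(n)
      params = dict(
          W_h=rng.normal(size=(n, n)) * 0.1,
          W_u=rng.normal(size=(n, m)) * 0.1,
          b=np.zeros(n),
          tau=np.ones(n),
          A=np.ones(n),
      )
      for t in range(100):  # drive the cell with a toy sinusoidal input
          x = ltc_step(x, np.sin(t * 0.1) * np.ones(m), **params)
      print(x.round(3))

    Note that the hidden state stays a fixed-size vector no matter how long the input stream runs, which is the root of the memory behaviour discussed below.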

    LFMs come in three sizes:

    LFM-1.3B: Ideal for highly resource-constrained environments.

    LFM-3B: Optimised for edge deployment.

    LFM-40B: A Mixture-of-Experts (MoE) model designed for tackling more complex tasks (a toy routing sketch follows this list).
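
    As a rough illustration of the MoE idea, the toy layer below routes each token through only its top-k experts, which is why such a model's active parameter count per token is far smaller than its total parameter count. The router, the linear experts, and k are generic placeholders; Liquid has not published LFM-40B's routing details.

      import numpy as np

      def moe_layer(x, experts_W, gate_W, k=2):
          """Toy top-k mixture-of-experts layer for one token vector x.

          experts_W: list of (d, d) expert matrices (linear stand-ins for
          full feed-forward blocks); gate_W: (num_experts, d) router weights.
          """
          logits = gate_W @ x                  # router score per expert
          top = np.argsort(logits)[-k:]        # indices of the k best experts
          w = np.exp(logits[top] - logits[top].max())
          w /= w.sum()                         # softmax over the chosen experts
          # Only the k selected experts do any computation for this token.
          return sum(wi * (experts_W[i] @ x) for wi, i in zip(w, top))

      rng = np.random.default_rng(0)
      d, num_experts = 16, 8
      experts = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(num_experts)]
      gate = rng.normal(size=(num_experts, d))
      y = moe_layer(rng.normal(size=d), experts, gate, k=2)
      print(y.shape)  # (16,) -- same output shape, but only 2 of 8 experts ran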

    These models have already shown superior performance against transformer-based models of comparable size, such as Meta's Llama 3.1 8B and Microsoft's Phi-3.5-mini (3.8B). LFM-1.3B, for example, outperforms Meta's Llama 3.2 1.2B and Microsoft's Phi-1.5 on several benchmarks, including Massive Multitask Language Understanding (MMLU).

    One of the key advantages of LFMs is their memory efficiency: they have a much smaller memory footprint than transformer-based models, especially for long inputs. LFM-3B, for instance, requires only 16 GB of memory, compared with the 48 GB required by Meta's Llama 3.2 3B.

    LFMs also make highly effective use of their context length: thanks to efficient input compression, they can process longer sequences on the same hardware.
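
    A back-of-the-envelope calculation shows where this advantage comes from: a transformer's key-value (KV) cache grows linearly with sequence length, while a model that carries a fixed-size recurrent state does not. The layer, head, and precision numbers below are an illustrative 3B-class configuration, not the measured setup behind the 16 GB and 48 GB figures above.

      def kv_cache_gib(seq_len, n_layers=28, n_kv_heads=8, head_dim=128, bytes_per=2):
          """Approximate transformer KV-cache size in GiB.

          Each cached token stores one key and one value vector per layer per
          KV head (fp16 here), so the cache grows linearly with sequence length.
          """
          per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per
          return seq_len * per_token / 2**30

      # A fixed-state (recurrent/state-space-style) model instead keeps its
      # weights plus a constant-size state, flat in sequence length.
      for n in (8_192, 131_072, 1_000_000):
          print(f"{n:>9} tokens -> KV cache ~ {kv_cache_gib(n):6.1f} GiB; fixed state -> constant")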

    The models are not open source, but users can access them through Liquid's inference playground, Lambda Chat, or Perplexity AI. Liquid AI is also optimising its models for deployment on hardware from NVIDIA, AMD, Apple, Qualcomm, and Cerebras.
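
    For readers who want to try a hosted LFM programmatically, the sketch below shows the general shape of a chat-completions request against an OpenAI-compatible endpoint. The base URL, model id, and key are placeholders, not values given in this episode; check the chosen provider's documentation for the real ones.

      import json
      import urllib.request

      BASE_URL = "https://api.example.com/v1"  # placeholder, not a real endpoint
      MODEL = "lfm-40b"                        # placeholder model id
      API_KEY = "sk-..."                       # your provider API key

      req = urllib.request.Request(
          f"{BASE_URL}/chat/completions",
          data=json.dumps({
              "model": MODEL,
              "messages": [{"role": "user",
                            "content": "Summarise liquid neural networks."}],
          }).encode(),
          headers={"Authorization": f"Bearer {API_KEY}",
                   "Content-Type": "application/json"},
      )
      with urllib.request.urlopen(req) as resp:
          print(json.load(resp)["choices"][0]["message"]["content"])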
