Liquid neural networks, the new deal

By G. H. | October 14, 2023 | News

MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) recently unveiled a new deep learning architecture called "liquid neural networks". This innovation promises to shake up the AI landscape, particularly in robotics and autonomous vehicles.


With the success of large language models (LLMs) such as ChatGPT, the race has been on to build ever-larger neural networks. However, these massive models demand computing power and memory that many applications simply cannot provide.

CSAIL's answer: Liquid Neural Networks


To overcome these limitations, MIT's CSAIL came up with liquid neural networks. This new approach offers a compact, adaptable and efficient alternative to traditional deep learning models for a variety of AI problems.

Understanding Liquid Neural Networks


According to CSAIL director Daniela Rus, the main idea was to create accurate, efficient neural networks that can run on a robot's on-board computer without requiring a cloud connection. Inspired by biology, in particular the neurons of the C. elegans worm, these networks are designed to be adaptive and high-performing.

Liquid neural networks are characterized by the use of dynamically adjustable differential equations, offering a unique ability to adapt to new situations after training.
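
As an illustration, here is a minimal sketch of a single liquid-style neuron: its state follows an ordinary differential equation whose effective time constant shifts with the input, which is what lets the dynamics keep adjusting to incoming data. The forward-Euler integration and every parameter name (tau, A, w_in, w_rec, b) are illustrative assumptions, not code published by CSAIL.

```python
import numpy as np

def ltc_step(x, u, dt, tau=1.0, A=1.0, w_in=0.5, w_rec=0.3, b=0.0):
    """One forward-Euler step of a toy liquid-style neuron.

    x  : current neuron state (scalar)
    u  : external input at this time step
    dt : integration step size (may vary between samples)
    """
    f = np.tanh(w_in * u + w_rec * x + b)   # input-dependent gate
    dx = -x / tau + f * (A - x)             # the effective time constant
                                            # changes with the gate f
    return x + dt * dx

# Toy usage: drive the neuron with a sine wave sampled at irregular intervals.
rng = np.random.default_rng(0)
x, t = 0.0, 0.0
for _ in range(100):
    dt = rng.uniform(0.01, 0.1)
    t += dt
    x = ltc_step(x, np.sin(t), dt)
```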

The secret of their effectiveness: Dynamically adjustable differential equations


In simplified terms, these networks increase a neuron's representational capacity through two insights. The first is a "state-space model" that keeps neurons stable during learning. The second introduces non-linearities on top of that model, which increases its expressiveness.

A different wiring architecture also allows lateral and recurrent connections within the same layer, which helps the model learn in continuous time, as sketched below.
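
A toy sketch of such a layer might look like this: a handful of neurons with sparse lateral (recurrent) connections to one another, whose states are updated in continuous time from both the external input and the current states of their neighbours. The class name, the random wiring mask and the leaky dynamics are assumptions made for illustration, not the architecture's actual wiring rules.

```python
import numpy as np

class ContinuousTimeLayer:
    """Toy continuous-time layer with sparse lateral/recurrent wiring."""

    def __init__(self, n_neurons, n_inputs, density=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.5, (n_neurons, n_inputs))
        self.W_rec = rng.normal(0.0, 0.5, (n_neurons, n_neurons))
        # Sparse lateral wiring: each neuron connects to only some peers.
        self.mask = rng.random((n_neurons, n_neurons)) < density
        self.tau = np.ones(n_neurons)
        self.x = np.zeros(n_neurons)

    def step(self, u, dt):
        # Input drive plus recurrent drive from neighbouring neurons.
        pre = self.W_in @ u + (self.W_rec * self.mask) @ np.tanh(self.x)
        dx = (-self.x + pre) / self.tau      # leaky continuous-time dynamics
        self.x = self.x + dt * dx
        return self.x

# Toy usage with 19 neurons, echoing the compact networks mentioned below.
layer = ContinuousTimeLayer(n_neurons=19, n_inputs=4)
state = layer.step(u=np.ones(4), dt=0.05)
```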

Key benefits of Liquid Neural Networks


Compactness is one of the key advantages of liquid neural networks. Whereas a conventional deep network requires around 100,000 artificial neurons for a given task, the CSAIL researchers succeeded in accomplishing the same task with just 19 neurons.

This compactness offers several significant advantages, including the ability to run on robot on-board computers and greater interpretability. Liquid networks also seem to understand causal relationships better, overcoming a frequent difficulty of traditional deep learning systems.

Potential Revolution for Robotics and Autonomous Vehicles


Liquid neural networks handle continuous data streams efficiently, making them particularly well-suited to compute-constrained or safety-critical applications such as robotics and autonomous vehicles.
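
One practical consequence, sketched below under toy assumptions, is that a model defined by a differential equation can ingest irregularly sampled streams directly: each incoming sample is integrated with its own time gap dt, with no need to resample the signal. The update rule and the example sensor readings are purely illustrative.

```python
import numpy as np

def liquid_update(x, u, dt, tau=1.0, A=1.0):
    f = np.tanh(u + x)                       # toy input-dependent gate
    return x + dt * (-x / tau + f * (A - x))

# Timestamped stream with uneven gaps, e.g. from an asynchronous sensor.
stream = [(0.00, 0.2), (0.03, 0.5), (0.11, -0.1), (0.12, 0.7), (0.30, 0.0)]
x, last_t = 0.0, 0.0
for t, u in stream:
    x = liquid_update(x, u, dt=t - last_t)   # integrate over the actual gap
    last_t = t
    print(f"t={t:.2f}  state={x:+.4f}")
```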

MIT researchers have already tested liquid neural networks on single-robot configurations, with promising results. They plan to extend their tests to multi-robot systems and other types of data to further explore the capabilities and limits of this new architecture.