Hugging Face Introduces Compact Versions of SmolVLM Vision Language Model That Can Run on Consumer Laptops

Hugging Face introduced two new variants of its SmolVLM vision language models last week. The new artificial intelligence (AI) models come in 256 million and 500 million parameter sizes, and the company claims the former is the world’s smallest vision language model. The new variants focus on retaining the performance of the older two-billion-parameter model while significantly reducing the size. The company highlighted that the new models can run locally on constrained devices and consumer laptops, and could potentially even support browser-based inference.

Hugging Face Introduces Smaller SmolVLM AI Models

In a blog post, the company announced the SmolVLM-256M and SmolVLM-500M vision language models, which join the existing two-billion-parameter model. The release comprises two base models and two instruction fine-tuned models, one of each at the two new parameter sizes.

Hugging Face said that these models can be loaded directly into transformers, MLX (Apple’s machine learning framework), and ONNX (Open Neural Network Exchange), and developers can build on top of the base models. Notably, these are open-source models available under an Apache 2.0 licence for both personal and commercial usage.
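
As an illustration of the transformers route, the sketch below loads the instruction-tuned 256M model and asks it to describe an image. This is a minimal example, assuming the Hub model ID HuggingFaceTB/SmolVLM-256M-Instruct and a recent transformers release with SmolVLM support; the image URL is a placeholder.

```python
# Minimal sketch: loading SmolVLM-256M-Instruct with transformers.
# The model ID and image URL are assumptions for illustration.
from transformers import AutoProcessor, AutoModelForVision2Seq
from transformers.image_utils import load_image

model_id = "HuggingFaceTB/SmolVLM-256M-Instruct"  # assumed Hub model ID
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id)

# Placeholder image URL; a local path or PIL image also works.
image = load_image("https://example.com/photo.jpg")

# Chat-style prompt with one image turn and one text turn.
messages = [{"role": "user",
             "content": [{"type": "image"},
                         {"type": "text", "text": "Describe this image."}]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)

inputs = processor(text=prompt, images=[image], return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```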

With the new AI models, Hugging Face aims to bring multimodal models focused on computer vision to portable devices. The 256-million-parameter model, for instance, can run in less than 1GB of GPU memory, and with 15GB of RAM it can process 16 images per second at a batch size of 64.

Andrés Marafioti, a machine learning research engineer at Hugging Face, told VentureBeat, “For a mid-sized company processing 1 million images monthly, this translates to substantial annual savings in compute costs.”

To reduce the size of the AI models, the researchers switched the vision encoder from the previous 400-million-parameter SigLIP to a 93-million-parameter SigLIP base patch variant. The tokenisation was also optimised: the new vision models encode images at a rate of 4,096 pixels per token, compared to 1,820 pixels per token in the 2B model.
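
A rough back-of-the-envelope calculation shows what that change means in practice. The snippet below simply divides the stated rates for an example image size and ignores SmolVLM’s actual image resizing and splitting steps, so the figures are only indicative.

```python
# Rough token-count comparison implied by the stated encoding rates.
# Real token counts depend on SmolVLM's image resizing/splitting pipeline.
pixels = 512 * 512            # example 512x512 input image

tokens_small = pixels / 4096  # new 256M/500M models -> 64 tokens
tokens_2b = pixels / 1820     # older 2B model       -> ~144 tokens

print(f"256M/500M: ~{tokens_small:.0f} tokens, 2B: ~{tokens_2b:.0f} tokens")
```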

Notably, the smaller models trail the 2B model marginally in terms of performance, but the company said this trade-off has been kept to a minimum. As per Hugging Face, the 256M variant can be used for captioning images or short videos, answering questions about documents, and basic visual reasoning tasks.

Developers can use transformers and MLX for inference and fine-tuning, as the new models work with the existing SmolVLM code out of the box. The models are also listed on Hugging Face.
