Ollama's New Multimodal Engine: Local Inference for Vision Models

2025-05-16

Ollama has launched a new engine that supports local inference for multimodal models, starting with vision models such as Llama 4 Scout and Gemma 3. The engine addresses limitations of the ggml library for multimodal workloads by improving model modularity, accuracy, and memory management, so inference stays reliable and efficient even with large images and complex architectures, including Mixture-of-Experts models. This focus on accuracy and reliability lays the foundation for future support of speech, image generation, and longer context windows.
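To make the "local inference for vision models" concrete, here is a minimal sketch of sending an image to a locally running Ollama server through its standard `/api/chat` HTTP endpoint, which accepts base64-encoded images alongside a text prompt. The model name `gemma3` and the file `photo.jpg` are illustrative assumptions; substitute whatever vision model you have pulled.

```python
import base64
import json

def build_vision_request(model: str, prompt: str, image_bytes: bytes) -> dict:
    """Construct the JSON body for a single-turn vision chat request.

    Ollama's chat API takes images as base64 strings attached to a
    user message, next to the text prompt.
    """
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": prompt,
                "images": [base64.b64encode(image_bytes).decode("ascii")],
            }
        ],
        "stream": False,  # ask for a single complete response
    }

if __name__ == "__main__":
    # Requires a running Ollama server (default: http://localhost:11434)
    # and a vision model pulled locally, e.g. `ollama pull gemma3`.
    from urllib.request import Request, urlopen

    with open("photo.jpg", "rb") as f:  # hypothetical local image
        payload = build_vision_request("gemma3", "What is in this image?", f.read())

    req = Request(
        "http://localhost:11434/api/chat",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        print(json.loads(resp.read())["message"]["content"])
```

Because everything runs against `localhost`, the image never leaves the machine, which is the point of local inference.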