PSA for local AI x VLM fans:
Attention 📣: the llama.cpp server and Web UI are now compatible with VLMs ‼️ meaning llama.cpp now has eyes 👁️

Shout-out to Georgi Gerganov and the GGML and HF teams, and in particular to the great Xuan-Son Nguyen, for shipping this much-awaited feature! 🤗

Give it a try today (upgrade your local binary first); example commands are in the snippet below. The GGML team shared a bunch of pre-quantized models, ready to use, from:
- Google Gemma
- Mistral's Pixtral
- Qwen VL
- SmolVLM
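A minimal sketch, assuming a Homebrew install and one of the pre-quantized repos under the ggml-org organization on the Hugging Face Hub (the repo name below is illustrative; pick any of the VLM GGUFs listed on the org page):

```bash
# upgrade your local binary first (or pull and rebuild from source)
brew upgrade llama.cpp

# start the server + Web UI with a vision model from the Hub;
# -hf fetches the GGUF weights (and the multimodal projector) for you
llama-server -hf ggml-org/gemma-3-4b-it-GGUF

# then open the Web UI at http://localhost:8080 (the default port)
# and attach an image to your prompt
```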