Institutional Digital Repository
Shreenivas Deshpande Library, IIT (BHU), Varanasi

mBLIP: Efficient Bootstrapping of Multilingual Vision-LLMs

dc.contributor.author: Geigle G.; Jain A.; Timofte R.; Glavaš G.
dc.date.accessioned: 2025-05-23T11:13:18Z
dc.description.abstract: Modular vision-language models (Vision-LLMs) align pretrained image encoders with (frozen) large language models (LLMs) and post-hoc condition LLMs to ‘understand’ the image input. With the abundance of readily available high-quality English image-text data as well as strong monolingual English LLMs, the research focus has been on English-only Vision-LLMs. Multilingual vision-language models are still predominantly obtained via expensive end-to-end pretraining, resulting in comparatively smaller models, trained on limited multilingual image data supplemented with text-only multilingual corpora. We present mBLIP, the first Vision-LLM leveraging multilingual LLMs, which we obtain in a computationally efficient manner on consumer-level hardware. To this end, we re-align an image encoder previously tuned to an English LLM to a new, multilingual LLM, using only a few million multilingual training examples derived from a mix of vision-and-language tasks, which we obtain by machine-translating high-quality English data into 95 languages. On the IGLUE benchmark and XM3600, mBLIP yields results competitive with state-of-the-art models, and it greatly outperforms strong English-only Vision-LLMs like LLaVA 1.5. We release our model, code, and training data at https://github.com/gregor-ge/mBLIP. © 2024 Association for Computational Linguistics.
dc.identifier.doi: DOI not available
dc.identifier.uri: http://172.23.0.11:4000/handle/123456789/5707
dc.relation.ispartofseries: ALVR 2024 - 3rd Workshop on Advances in Language and Vision Research, Proceedings of the Workshop
dc.title: mBLIP: Efficient Bootstrapping of Multilingual Vision-LLMs
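
The abstract states that the model, code, and training data are released at the linked repository. As a minimal usage sketch only: the snippet below loads a released checkpoint, assuming it follows the standard BLIP-2 interface in Hugging Face transformers. The model id "Gregor/mblip-mt0-xl" and the image URL are assumptions for illustration; verify the actual ids against the mBLIP repository.

import requests
from PIL import Image
from transformers import Blip2ForConditionalGeneration, Blip2Processor

# Assumed checkpoint id taken from the linked repository; verify there.
model_id = "Gregor/mblip-mt0-xl"
processor = Blip2Processor.from_pretrained(model_id)
model = Blip2ForConditionalGeneration.from_pretrained(model_id)

# Any RGB image works; this URL is a placeholder for illustration only.
url = "https://example.com/photo.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# mBLIP is trained multilingually, so the prompt can be in any covered
# language (German here), and the model should answer in that language.
prompt = "Beschreibe das Bild in einem Satz."
inputs = processor(images=image, text=prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])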
