Understanding the visual knowledge of language models
LLMs trained primarily on text can generate complex visual concepts through code, and can refine that code through self-correction. Researchers used these illustrations to train an image-free computer vision system to recognize real photos.

You’ve likely heard that a picture is worth a thousand words, but can a large language model (LLM) get the picture if it has never seen an image before? As it turns out, language models trained purely on text have a solid understanding of the visual world. They can write image-rendering code to generate complex scenes with intriguing objects and compositions, and even when that knowledge is not applied correctly at first, LLMs can refine their images. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) observed this when prompting language models to self-correct their code for different images; the systems improved on their simple clipart drawings with each query.

https://news.mit.edu/2024/understanding-visual-knowledge-language-models-0617
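The prompt-render-critique loop described above can be sketched roughly as follows. This is a minimal, hypothetical illustration, not the researchers' actual system: `query_model` stands in for a real LLM API call (stubbed here so the snippet runs on its own), and the critic is a toy rule-based check rather than any real feedback mechanism.

```python
# Hypothetical sketch of an LLM self-correction loop for image-rendering code.
# All names here (query_model, critique, self_correct) are illustrative.

def query_model(prompt: str) -> str:
    """Stub for an LLM call: returns slightly better SVG code each round.

    A real system would send `prompt` to a language model; this stub just
    adds detail based on how many critiques the prompt has accumulated.
    """
    rounds = prompt.count("Critique:")
    shapes = ['<circle cx="50" cy="30" r="10" fill="yellow"/>']  # sun
    if rounds >= 1:
        shapes.append('<rect x="20" y="50" width="40" height="30" fill="brown"/>')  # house body
    if rounds >= 2:
        shapes.append('<polygon points="20,50 40,35 60,50" fill="red"/>')  # roof
    return '<svg width="100" height="100">' + "".join(shapes) + "</svg>"

def critique(svg: str):
    """Toy critic: points out missing parts of a 'house with a sun' scene."""
    if "<rect" not in svg:
        return "the house body is missing"
    if "<polygon" not in svg:
        return "the roof is missing"
    return None  # scene looks complete

def self_correct(task: str, max_rounds: int = 5) -> str:
    """Ask for rendering code, then feed critiques back until none remain."""
    prompt = f"Write SVG code for: {task}"
    svg = query_model(prompt)
    for _ in range(max_rounds):
        issue = critique(svg)
        if issue is None:
            break
        prompt += f"\nCritique: {issue}. Please fix the code."
        svg = query_model(prompt)
    return svg

drawing = self_correct("a house with a sun")
```

Each pass appends the critic's complaint to the prompt and re-queries, mirroring how the CSAIL researchers prompted models to improve their clipart drawings query by query.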