AI Agents can check their outputs on Google first - have we come full circle?
Google DeepMind, Stanford, and the University of Illinois at Urbana-Champaign propose a Google-search-based system that factually validates LLM-generated outputs to reduce LLMs' tendency to confabulate (a rough sketch of the pattern follows below). I do think this is a cool idea and will make AI agents factually more reliable, but I hope the irony doesn’t escape you:
a) After spending many billions of dollars on the development of LLMs, RAG systems, vector stores, data centers, hardware, and so on, AI agents now go and check their outputs on Google. All this effort, only to end up back at a Google search …
b) I suspect it’s no coincidence that Google co-authored this research: it is looking to integrate search deeply into the AI toolbox, a technology many have argued will upend Google's dominance and business model.
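For intuition, here is a minimal sketch of the pattern, not the paper's actual pipeline: split an answer into atomic claims, search the web for evidence on each, and let a judge model decide whether the evidence supports the claim. `web_search` and `llm_judge` are hypothetical stubs for whatever search API and model client you actually use.

```python
from dataclasses import dataclass


@dataclass
class Verdict:
    claim: str
    supported: bool
    evidence: list[str]


def web_search(query: str, k: int = 5) -> list[str]:
    # Hypothetical stub: return the top-k result snippets for `query`.
    # Swap in your real search API client here.
    return []


def llm_judge(claim: str, snippets: list[str]) -> bool:
    # Hypothetical stub: ask a judge model whether the snippets support
    # the claim. With no evidence we conservatively report "not supported".
    return False


def fact_check(claims: list[str]) -> list[Verdict]:
    # Verify each atomic claim against open-web evidence.
    verdicts = []
    for claim in claims:
        snippets = web_search(claim)            # gather evidence via search
        supported = llm_judge(claim, snippets)  # model-as-judge over snippets
        verdicts.append(Verdict(claim, supported, snippets))
    return verdicts


if __name__ == "__main__":
    for v in fact_check(["The Eiffel Tower is in Paris."]):
        print(v)
```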
In practice, though, this quickly gets tricky: the answer your system proposes, which is then fact-checked via Google search, may well include information from your proprietary RAG system, and that is information you might not want to send to Google.
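To make that concern concrete, here is a toy check (my own sketch, not anything from the paper) that flags outgoing fact-check queries which restate proprietary RAG passages verbatim, assuming your private context is available as a list of passages:

```python
def _shingles(text: str, n: int = 6) -> set[str]:
    # Overlapping n-word windows, lowercased, as a crude text fingerprint.
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 0))}


def leaks_private_context(query: str, private_passages: list[str], n: int = 6) -> bool:
    # True if the search query shares any n-word run with a proprietary
    # passage, i.e. it would send internal text verbatim to the search engine.
    query_shingles = _shingles(query, n)
    return any(query_shingles & _shingles(p, n) for p in private_passages)
```

Anything such a filter flags would have to be verified internally rather than via Google, which is exactly where the approach loses some of its appeal.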