Embedding Models and Similarity Search
So, I could be doing something wrong here. I woke up this morning to a video about a longer-context embedding model, Jina Embeddings v2, which apparently rivals OpenAI's embedding model, text-embedding-ada-002.
So, I put it to the test. For reference, embeddings can be created for a variety of different use cases, but from my reading, the main idea is that they are supposed to encode the overall meaning of a word or a string of words.
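To make that concrete, here's a rough toy sketch (not my actual notebook code) of what "encoding meaning" looks like in practice: embed a few sentences and compare them with cosine similarity. The example sentences are made up, and the bge model is just the one I happen to be testing.

```python
# Toy illustration: semantically similar sentences should get closer embeddings.
import numpy as np
from llama_index.embeddings import HuggingFaceEmbedding

embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

a = embed_model.get_text_embedding("The contract may be terminated with 30 days' notice.")
b = embed_model.get_text_embedding("Either party can end the agreement after a month's warning.")
c = embed_model.get_text_embedding("My cat likes sleeping on the keyboard.")

def cosine(u, v):
    u, v = np.asarray(u), np.asarray(v)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(a, b))  # paraphrases -> relatively high similarity
print(cosine(a, c))  # unrelated sentence -> noticeably lower
```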
I am currently testing the above model, Jina, vs. a commonly used open-source model, bge-small-en-v1.5 by BAAI, on a YouTube use case, seeing how well each can get me the k-nearest neighbors for a particular query. The actual use case, which will be tested during work hours, is related to legal documents and making them more accessible to laymen.
Anyhow, I fired up the good ol' notebook, imported the LlamaIndex embeddings, initialized my LLM, created a service context for each embedding model, loaded my documents from the directory, created two indexes from those documents, created a retriever on each index, then attempted to retrieve documents similar to the query "What is the value equation?". See the screenshots of the scores returned. A rough sketch of those steps is below.
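For anyone who wants to poke at it, here's roughly what the notebook does, a minimal sketch assuming the ServiceContext-style LlamaIndex API. The data directory, the top-k value, and the exact Hugging Face model names are my assumptions, and I pass llm=None here since pure retrieval only needs the embedding model (the real notebook also initializes an LLM).

```python
# Rough sketch of the comparison, assuming llama-index 0.9.x and a local ./data folder.
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings import HuggingFaceEmbedding

# The two models from the post; the Jina v2 model may additionally need
# trust_remote_code enabled depending on your llama-index / transformers versions.
jina_embed = HuggingFaceEmbedding(model_name="jinaai/jina-embeddings-v2-base-en")
bge_embed = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

# Load the documents once, then build one index per embedding model.
documents = SimpleDirectoryReader("./data").load_data()

indexes = {}
for name, embed_model in [("jina-v2", jina_embed), ("bge-small", bge_embed)]:
    # llm=None because retrieval only needs embeddings.
    ctx = ServiceContext.from_defaults(llm=None, embed_model=embed_model)
    indexes[name] = VectorStoreIndex.from_documents(documents, service_context=ctx)

# Retrieve the top-k nearest neighbors for the same query from both indexes.
query = "What is the value equation?"
for name, index in indexes.items():
    retriever = index.as_retriever(similarity_top_k=3)
    print(f"--- {name} ---")
    for result in retriever.retrieve(query):
        print(round(result.score, 4), result.node.get_content()[:80].replace("\n", " "))
```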
It seems that the Jina Embeddings model out of the box isn't as good as the other, bge-small-en-v1.5. When I printed the content it retrieved, it wasn't very relevant compared to what the bge model returned.
So, what do you all think? Did I do something wrong here? Has anyone done testing with embeddings before?
Let me know!