1 contribution to Gen AI Mastermind
"Multi-Candidate Needle Prompting" for large context LLMs (Gemini 1.5)
Gemini 1.5's 1M-token context window is a remarkable advancement in LLMs, offering capabilities unlike any other currently available model: it can ingest the equivalent of 10 Harry Potter books in one go. However, this enormous context window is not without its limitations. In my experience, Gemini 1.5 often struggles to retrieve the most relevant information from the vast amount of contextual data it has access to.

The "Needle in a Haystack" benchmark is a well-known challenge for LLMs that tests their ability to find specific information within a large corpus of text. It is particularly relevant for models with large context windows, which must search through vast amounts of data to locate the most pertinent information.

To address this issue, I have developed a prompting technique that I call "Multi-Candidate Needle Prompting." It aims to improve the model's ability to accurately retrieve key information from within its large context window. The technique involves prompting the LLM to identify 10 relevant sentences from different parts of the input text, and then asking it to consider which of these sentences (i.e. candidate needles) is the most pertinent to the question at hand before providing the final answer.

This process bears some resemblance to Retrieval Augmented Generation (RAG), but with a key difference: the entire process is carried out by the LLM itself, without a separate retrieval mechanism. By prompting the model to consider multiple relevant sentences from various parts of the text, "Multi-Candidate Needle Prompting" promotes a more thorough search of the available information and reduces the chance of overlooking crucial details. Moreover, requiring the model to explicitly write out the relevant sentences serves as a form of intermediate reasoning, providing insight into the model's thought process.
The attached screenshot anecdotally demonstrates the effectiveness of my approach.
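The technique above can be sketched as a plain prompt template. The function name and exact wording below are my illustrative assumptions, not the author's verbatim prompt; the resulting string could be passed to any long-context chat API (e.g. `generate_content` in the Gemini SDK).

```python
# Sketch of "Multi-Candidate Needle Prompting" as a prompt template.
# The model is asked to quote candidate "needle" sentences from across
# the document, rank them, and only then answer the question.

def build_multi_candidate_prompt(document: str, question: str,
                                 n_candidates: int = 10) -> str:
    """Build a prompt that surfaces candidate needle sentences before
    committing to a final answer."""
    return (
        "You will answer a question about the document below.\n\n"
        f"--- DOCUMENT START ---\n{document}\n--- DOCUMENT END ---\n\n"
        f"Question: {question}\n\n"
        f"Step 1: Quote {n_candidates} sentences from different parts of "
        "the document that could be relevant to the question (the "
        "candidate needles).\n"
        "Step 2: State which single candidate is most pertinent and why.\n"
        "Step 3: Using that sentence, give your final answer."
    )
```

The explicit Step 1/Step 2 structure is what makes the quoted candidates act as intermediate reasoning: the final answer is conditioned on sentences the model has already committed to in its own output.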
New comment Apr 13
Thanks for sharing! Works great in ChatGPT as well.
Nicolás Mladinic Dragucievic
@nicolas-mladinic-dragucievic-6500
Creative Industries Advisor and AI Art Enthusiast living in Chile.

Joined Mar 19, 2024