Activity
[Contribution activity calendar, Jan-Nov]

Memberships

New Society • Private • 577 members • $97/m
Agent Artificial • Public • 2 members • $500/m
No-Coder Academy • Public • 94 members • Free
Teaching & Learning With A.I. • Private • 1.5k members • Free
You Probably Need a Robot • Private • 1.7k members • Free
PRISM AI Family • Private • 232 members • Free
AI Synthesizers FREE • Private • 89 members • Free
Generative AI • Public • 236 members • Free
Quantum AI Society • Private • 77 members • $8/m

1 contribution to Gen AI Mastermind
"Multi-Candidate Needle Prompting" for large context LLMs (Gemini 1.5)
Gemini 1.5's groundbreaking 1M token context window is a remarkable advancement in LLMs, providing capabilities unlike any other currently available model. With its 1M context window, Gemini 1.5 can ingest the equivalent of 10 Harry Potter books in one go. However, this enormous context window is not without its limitations. In my experience, Gemini 1.5 often struggles to retrieve the most relevant information from the vast amount of contextual data it has access to.

The "Needle in a Haystack" benchmark is a well-known challenge for LLMs, which tests their ability to find specific information within a large corpus of text. This benchmark is particularly relevant for models with large context windows, as they must efficiently search through vast amounts of data to locate the most pertinent information.

To address this issue, I have developed a novel prompting technique that I call "Multi-Candidate Needle Prompting." This approach aims to improve the model's ability to accurately retrieve key information from within its large context window. The technique involves prompting the LLM to identify 10 relevant sentences from different parts of the input text, and then asking it to consider which of these sentences (i.e. candidate needles) is the most pertinent to the question at hand before providing the final answer. This process bears some resemblance to Retrieval Augmented Generation (RAG), but the key difference is that the entire process is carried out by the LLM itself, without relying on a separate retrieval mechanism.

By prompting the model to consider multiple relevant sentences from various parts of the text, "Multi-Candidate Needle Prompting" promotes a more thorough search of the available information and minimizes the chances of overlooking crucial details. Moreover, requiring the model to explicitly write out the relevant sentences serves as a form of intermediate reasoning, providing insights into the model's thought process. The attached screenshot anecdotally demonstrates the effectiveness of my approach.
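
A minimal sketch of the idea in Python (the function name, prompt wording, and parameter names here are illustrative, not the exact prompt used in the post):

# Sketch of "Multi-Candidate Needle Prompting": ask the model to quote several
# candidate "needle" sentences from different parts of the context before it
# commits to a final answer.
def build_multi_candidate_prompt(document: str, question: str, n_candidates: int = 10) -> str:
    return (
        "You answer questions about the document below.\n"
        f"First, list {n_candidates} relevant sentences drawn from different parts of the document "
        "(the candidate needles).\n"
        "Then consider which of these sentences are most pertinent to the question, "
        "and only then give your final answer.\n\n"
        "--- DOCUMENT ---\n"
        f"{document}\n"
        "--- END DOCUMENT ---\n\n"
        f"Question: {question}"
    )

Because the candidate sentences are written out before the answer, they double as intermediate reasoning that can be checked against the source text.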
New comment Apr 13
"Multi-Candidate Needle Prompting" for large context LLMs (Gemini 1.5)
3 likes • Apr 10
@Sunil Ramlochan Thanks, I hope it works well for you. I don't have many examples but I did give one in the screenshot above. Here is the prompt that I used: "You are a historical research assistant. You answer questions about historical documents. In particular, you answer questions about the autobiography of Marvin Bush, "As I Remember It". When formulating an answer, begin by listing out 10 relevant sentences from different parts of the text. Then, consider which sentences are most relevant, and formulate your answer. Here is your first question: What is the most dangerous thing that happened to Marvin during his deployment, in which he came close to losing his life?"
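
To reproduce this end to end, a prompt like the one above can be sent to Gemini 1.5 together with the full document. Below is a minimal sketch assuming the google-generativeai Python client, a configured API key, and a hypothetical text file standing in for the autobiography:

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumes an API key is available
model = genai.GenerativeModel("gemini-1.5-pro")  # large-context Gemini 1.5 model

# Hypothetical file standing in for the full text of the autobiography.
with open("as_i_remember_it.txt", encoding="utf-8") as f:
    document = f.read()

prompt = (
    "You are a historical research assistant. You answer questions about historical documents. "
    "When formulating an answer, begin by listing out 10 relevant sentences from different parts "
    "of the text. Then, consider which sentences are most relevant, and formulate your answer. "
    "Here is your first question: What is the most dangerous thing that happened to Marvin "
    "during his deployment, in which he came close to losing his life?"
)

# The document and the instructions are passed together; both fit within the 1M-token window.
response = model.generate_content([document, prompt])
print(response.text)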
Benjamin Bush
Level 2 • 12 points to level up
@benjamin-bush-7904
PhD in Systems Science, SUNY Binghamton (2017)
Graduate Certificate in Complex Systems (2013)
https://www.youtube.com/watch?v=SzbKJWKE_Ss

Active 12d ago
Joined Mar 28, 2024
ISFP
Los Alamitos, CA