Understanding Context Token Calculation and Its Impact Beyond the Threshold
How are context tokens calculated, and what happens when the total exceeds the model's limit? For instance, if I ask GPT to draw on knowledge from five different books and the expertise of five different professors, the accumulated material could easily exceed 100,000 pages, far beyond the 32K-token limit of GPT-4-32K. I'm curious whether this is how it operates, or whether there is some way for the model to handle such a task. If GPT-4-32K cannot accommodate the task due to the context limit, what happens to the final output, and how can I detect that the limit was reached, so that I can make sure the context I provide stays within it?
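For anyone with the same question: exact token counts come from the model's tokenizer (OpenAI publishes the `tiktoken` library for this), but a rough pre-check is often enough to know whether a prompt could possibly fit. The sketch below is only an estimate, assuming OpenAI's rule of thumb of roughly 4 characters per token for English text; the `fits_in_context` helper and the reserved-reply figure are illustrative choices, not part of any API.

```python
# Rough token estimate for English text. OpenAI's rule of thumb is
# ~4 characters per token; use the tiktoken library for exact counts.
GPT4_32K_LIMIT = 32_768  # context window of gpt-4-32k, in tokens

def estimate_tokens(text: str) -> int:
    """Very rough estimate: ~1 token per 4 characters of English."""
    return max(1, len(text) // 4)

def fits_in_context(text: str, reserved_for_reply: int = 1_000) -> bool:
    # Prompt and completion share the same window, so leave
    # headroom for the model's answer (1,000 tokens here, arbitrarily).
    return estimate_tokens(text) + reserved_for_reply <= GPT4_32K_LIMIT

# A rough "page" of ~500 words is already ~625 estimated tokens,
# so even ~50 such pages would exhaust the 32K window -- 100,000
# pages is several thousand times over the limit.
page = "word " * 500
print(estimate_tokens(page), fits_in_context(page))
```

If a request does go over, the API does not silently truncate your prompt: it returns a `context_length_exceeded` error telling you the token counts, which is the concrete signal that the limit was hit. The usual workaround for "five books" of material is retrieval: store the books externally, and put only the few passages relevant to each question into the prompt.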