
Memberships

AI Developer Accelerator

Public • 3.9k • Free

3 contributions to AI Developer Accelerator
Challenges with Exceeding Context Window Limits in LLMs
I have a question regarding handling data that exceeds the context window limit. I once worked on an application to generate unit tests. As you know, writing a good unit test for a function requires access to the definitions of all dependencies used by the function. While I, as a human, can access the entire codebase, this is not possible with an LLM. If I don’t provide the definitions of all the dependencies, the LLM will make assumptions, which can result in incorrect mocks for other classes or external tools used by the function I’m trying to test. Do you have any suggestions for addressing this issue? Summarization or advanced chunking solutions won't work in this case because the LLM must have full access to all dependencies to generate accurate unit tests.
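To make the failure mode concrete, here is a minimal hypothetical example (PriceFeed, total_value, and the test are made up purely for illustration): a correct mock has to match the dependency's real interface, which the LLM can only know if the definition is in its context.

from unittest.mock import MagicMock

# Hypothetical dependency; without its source, an LLM has to guess the method names.
class PriceFeed:
    def latest(self, symbol: str) -> float:
        ...

# Function under test; it calls feed.latest(), not something like feed.get_price().
def total_value(feed: PriceFeed, symbol: str, quantity: int) -> float:
    return feed.latest(symbol) * quantity

def test_total_value():
    feed = MagicMock(spec=PriceFeed)   # spec= requires knowing the real class definition
    feed.latest.return_value = 10.0    # an LLM that never saw PriceFeed may mock the wrong method
    assert total_value(feed, "ABC", 3) == 30.0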
0
2
New comment 16h ago
0 likes • 16h
Hi Paul,

Thank you for your reply. I’ve been working on writing a unit test for a Game application. To generate the unit test, I used GPT-4 and LangChain. I should mention that I did not use an agentic approach; this was my first project using an LLM.

The main issue I faced was that the code relied heavily on complex objects. Without knowing the definitions of these objects, it was challenging for the LLM to accurately mock them during testing. To address this, I had to pass a large amount of context, essentially several Python files, so the LLM had the object definitions it needed for mocking. In many cases the LLM failed to mock the objects correctly, and in some cases I ran out of context because the function I planned to test depended on too many external objects.

Here’s the general approach I followed:

1. Determine if the function is testable using the LLM.
2. Extract all dependencies required for testing the function.
3. Include the code for all external dependencies in the LLM prompt.
4. Write test scenarios based on the function's behavior.
5. Generate unit tests from the scenarios.

Despite these steps, I encountered challenges when working with functions that require many complex dependencies. Below is an example of the function I attempted to write a unit test for, along with its source code.

< source code >

from dataclasses import dataclass
from typing import List

from dataclasses_json import dataclass_json

from domain import BotAction, BoardReference, BotActionOutcome
from games.common.actions import DamageAttackOutcome, BotwarAction


@dataclass_json
@dataclass
class RangeAttackOutcome(DamageAttackOutcome):
    energy_spent: int


class RangeAttackBotAction(BotwarAction):
    @staticmethod
    def _do_range_attack(board: BoardReference, attacks: List[BotAction]) -> List[BotActionOutcome]:
        ...
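Illustrating steps 2 and 3 of the approach above, here is a minimal sketch of one way to collect dependency definitions and place them in the prompt. gather_sources, build_unit_test_prompt, and the prompt wording are hypothetical, and in a real codebase you would need to resolve imports recursively rather than list the classes by hand.

import inspect

def gather_sources(objs) -> str:
    # Pull the real source of every dependency so the LLM does not have to guess when mocking.
    return "\n\n".join(inspect.getsource(obj) for obj in objs)

def build_unit_test_prompt(function_source: str, dependency_sources: str) -> str:
    # Pair the function under test with its dependency definitions in a single prompt.
    return (
        "You are writing pytest unit tests.\n\n"
        "Dependency definitions (use these when creating mocks; do not invent methods):\n"
        f"{dependency_sources}\n\n"
        "Function under test:\n"
        f"{function_source}\n"
    )

# Hypothetical usage with the classes from the example above:
# deps = gather_sources([BoardReference, BotAction, BotActionOutcome,
#                        DamageAttackOutcome, BotwarAction])
# prompt = build_unit_test_prompt(inspect.getsource(RangeAttackBotAction), deps)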
Going from PDF to Chunks the smart way
I was asked on yesterday's call how to turn a PDF into consistent chunks for RAG. The first challenge with converting any PDF file is dealing with the unique underlying way the document may be formatted. Much of that formatting has no impact on the printed output, but it does matter when you extract the text with Python and LangChain: the output is often inconsistent, with sections wrongly merged before chunking. A better approach that has worked consistently for me is to first convert the PDF into Markdown and then split the Markdown into chunks:

Step One:

import pymupdf4llm
import pathlib

# Convert PDF to markdown
md_text = pymupdf4llm.to_markdown("input.pdf")

# Save markdown to file (optional); you could also just keep it as a string
pathlib.Path("output.md").write_bytes(md_text.encode())

Step Two:

from langchain_text_splitters import MarkdownHeaderTextSplitter

# Define headers to split on
headers_to_split_on = [
    ("#", "Header 1"),
    ("##", "Header 2"),
    ("###", "Header 3"),
]

# Initialize splitter
markdown_splitter = MarkdownHeaderTextSplitter(headers_to_split_on)

# Split the markdown text
md_header_splits = markdown_splitter.split_text(md_text)
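One optional follow-up, in case individual header sections are still too long for your embedding model, is a further character-level split of the header chunks. This is not part of the two steps above, just a sketch of a common pattern, and the chunk_size / chunk_overlap values are illustrative.

from langchain_text_splitters import RecursiveCharacterTextSplitter

# Further split each header section so no chunk exceeds the size limit
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
final_chunks = text_splitter.split_documents(md_header_splits)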
2
3
New comment 18d ago
0 likes • 18d
@Paul Miller Thanks for sharing.
Question related to LangChain Master Class
I’m watching the LangChain Master Class video: https://www.youtube.com/watch?v=yF9kGESAi3M&ab_channel=codewithbrandon, and I’m a bit confused about how we create rag_chain using rag_chain = create_retrieval_chain(history_aware_retriever, question_answer_chain) in 7_rag_conversational.py. Couldn’t we just use create_history_aware_retriever to retrieve the information?
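For reference, the pattern from the video looks roughly like this (a simplified sketch; llm, retriever, contextualize_q_prompt, and qa_prompt are assumed to be set up as in the tutorial). The comments describe the role of each piece: the history-aware retriever only reformulates the question and fetches documents, while create_retrieval_chain also passes those documents to the answering chain.

from langchain.chains import create_history_aware_retriever, create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain

# Rewrites the latest question using chat history, then retrieves documents.
# Its output is a list of Documents only; no answer is generated here.
history_aware_retriever = create_history_aware_retriever(llm, retriever, contextualize_q_prompt)

# Stuffs the retrieved documents into a prompt and has the LLM generate the answer.
question_answer_chain = create_stuff_documents_chain(llm, qa_prompt)

# Glues the two together: runs the retriever, feeds its documents to the answering
# chain, and returns both the retrieved context and the final answer.
rag_chain = create_retrieval_chain(history_aware_retriever, question_answer_chain)

# result = rag_chain.invoke({"input": "What does the video cover?", "chat_history": []})
# result["context"] holds the retrieved documents, result["answer"] the generated reply.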
0
0
Shima Nikfal
Level 1 • 5 points to level up
@shima-nikfal-5160
Experienced software developer and engineer specializing in machine learning, data engineering, and computer vision.

Active 4h ago
Joined Oct 29, 2024