
Memberships

Data Alchemy

Public • 22.3k • Free

12 contributions to Data Alchemy
I am seriously confused about this
Wondering if anyone can explain something to me. I am going through the Introduction to LangChain and looking at this page to see which LLM models can be used: https://python.langchain.com/docs/integrations/llms/ — for example, my preferred model is there: OllamaLLM (langchain-ollama). On https://python.langchain.com/docs/tutorials I see some interesting things that I want to work with: Build a Simple LLM Application with LCEL, Build a Chatbot, Build vector stores and retrievers, and Build an Agent.

Now, if I take a look at building a chatbot here https://python.langchain.com/docs/tutorials/chatbot/ I can see instructions on how to set up and install etc. For example:

pip install -qU langchain-xxxxx

Available options: OpenAI, Anthropic, Azure, Google, Cohere, NVIDIA, FireworksAI, Groq, MistralAI, Together.

import getpass
import os

os.environ["ANTHROPIC_API_KEY"] = getpass.getpass()

from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(model="claude-3-5-sonnet-20240620")

My question is this: if I want to use Ollama, is it just a case of changing the code like this?

os.environ["OLLAMA_API_KEY"] = getpass.getpass()

from langchain_ollama import ChatOllama

model = ChatOllama(model="model_name")

And finally, if this is the case, how do I go about getting the Ollama API key? Thanks for your help :o)
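For what it's worth, Ollama does not issue API keys at all: it runs a local HTTP server on http://localhost:11434, so the getpass step has no equivalent. A minimal sketch of talking to that server's /api/chat endpoint with only the standard library (the "llama3" model name is an assumption — use whatever model you have pulled):

```python
import json
import urllib.request

# Default local Ollama endpoint; no authentication header is needed.
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_chat_payload(model, messages):
    """Shape a request body for Ollama's /api/chat endpoint."""
    return {"model": model, "messages": messages, "stream": False}

def chat(model, prompt):
    """Send one user message and return the model's reply text."""
    payload = build_chat_payload(model, [{"role": "user", "content": prompt}])
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

# chat("llama3", "Hello!") would return the reply, provided Ollama is running.
```

So in the ChatOllama snippet above, the `OLLAMA_API_KEY` line can simply be dropped.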
0 likes • Oct 23
Fantastic, thanks for this. I will take a look 👍
0 likes • 29d
Hi, yes, I have reviewed them but have not had any time to attack them. I am again fighting with VS Code just to get it to install requirements. It's driving me insane.
Scratching my head about this for a long time
The Introduction to LangChain (LLM Applications) was very interesting, and it touches on some things that I have been struggling with. My question about this video goes like this. My great battle is using my preferred LLM, in this case Ollama, and although most frameworks say they are compatible with a local Ollama, I have never been able to get a straight answer from anyone about how to make that happen. I found it relatively easy to create working models on my local machine with a GPU, so I run Ollama locally with something like AnythingLLM in a Docker container, but when I attempt to host live for the world, things start going wrong. Most cloud hosts that allow Ollama to run are pretty expensive, and if one decides to use a paid-for product, there are other restrictions like cost and number of requests. In my mind it would be perfect to run lightweight agents just like in this video and run a private LLM somewhere, or use an endpoint at Hugging Face, but that is still a mystery to me. So, oh yes, my question: if I want to replicate what David is doing in this video, how would I reference either a local or a remotely hosted Ollama installation?
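On the local-versus-remote part of the question: the Ollama server listens on port 11434 wherever it runs, and the conventional OLLAMA_HOST environment variable is the usual way to redirect clients to a remote box. A rough sketch of resolving the host that way (the helper name is mine, not from any library):

```python
import os

# Default endpoint when Ollama runs on the same machine.
DEFAULT_HOST = "http://localhost:11434"

def resolve_ollama_host():
    """Use OLLAMA_HOST if set (e.g. pointing at a remote VPS),
    otherwise fall back to the local default."""
    host = os.environ.get("OLLAMA_HOST", DEFAULT_HOST)
    if not host.startswith("http"):
        host = "http://" + host
    return host.rstrip("/")
```

langchain_ollama's ChatOllama accepts a `base_url` argument, so the same resolved host should cover both setups, e.g. `ChatOllama(model="llama3", base_url=resolve_ollama_host())`.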
1 like • Oct 17
I made a big decision to abandon my live bots. I wasted too much time on them. In the end, the straw that broke the camel's back was HTTPS. I can not believe it was impossible to get SSL to work on the subdomain, and the host would not help. So, back to studying. I will do my project the old-fashioned way, using the coding I learn here.
0 likes • Oct 18
Yeah my local install is fine. I wanted to migrate to a live server. That's when it all went wrong.
Hmm, that seems contradictory...
Ok, so I think I have mentioned this before and I have had an answer, but my query goes a little further now. The thing is, I don't want to pay for the ChatGPT API while I am studying, so I chose to use a local installation of Ollama. So far that's all good, and I have had success running chatbots on my local machine. But, and this is a big but... In the Building Applications with LLMs section, video "OpenAI - Function Calling", at minute 1:55 David says "If you want to follow along you need an OpenAI key, no access to GPT is required". Okay, what does that mean? The OpenAI key is a paid-for service giving access to OpenAI. I don't get it. Also, I wish someone would make a video for those of us who wish to use free open-source LLMs in our frameworks. Anyway, that's my thought for the day. Cheers :o)
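On the wish to use free open-source LLMs in these frameworks: Ollama also exposes an OpenAI-compatible endpoint at /v1, so code written against the openai client can often be pointed at a local model by swapping the base URL and passing a placeholder key (Ollama ignores the key, but the client library insists one is set). A sketch of the client configuration, assuming a locally pulled model:

```python
def openai_compatible_config(base_url="http://localhost:11434/v1"):
    """Configuration for pointing an OpenAI-style client at local Ollama.
    The api_key value is a dummy: Ollama does not check it, but the
    OpenAI client refuses to start without one."""
    return {"base_url": base_url, "api_key": "ollama"}

# With the openai package installed, this would be used as:
#   from openai import OpenAI
#   client = OpenAI(**openai_compatible_config())
#   client.chat.completions.create(model="llama3", messages=[...])
```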
Please please can someone help?
Dear Team, I don't know where to start. I have been stuck on section 2 forever because I can not get VS Code to load dependencies etc. Let me explain. After failure after failure, and with ChatGPT sending me around in a loop, I am at my wits' end, and I hope someone can offer some suggestions.

The first thing I need to understand: while running VS Code on Windows I can do two things. 1. Set up VS Code to connect via WSL. 2. Use the Windows native directory structure. Question: which option should I opt for? Because I have issues with both options. In WSL I can not run HTML files (not found). In Windows I can run Python and HTML files but struggle to load dependencies; they are underlined in red. ChatGPT says "command palette. Type Python: Select Interpreter". If I do, there is nothing to select. Also ChatGPT says "pip install Flask torch diffusers". Is this done in the terminal of the individual project? Is it done in Windows or WSL? I have run that install on both.

WSL: runs and updates, but can not run HTML.
Windows: error:

LocalCache\local-packages\Python311\Scripts' which is not on PATH. Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location. To update, run: C:\Users\carls\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\python.exe -m pip install --upgrade pip
PS C:\Users\carls\Projects\Repos\to-video> -m pip install --upgrade pip
-m : The term '-m' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ -m pip install --upgrade pip
+ ~~
    + CategoryInfo          : ObjectNotFound: (-m:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException

Outcome: I now have two repositories, one for Windows and one for WSL, both of which have issues, and all I am doing as I follow instructions is load more and more code onto my machine.
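For the record, the PowerShell error above happens because `-m pip install --upgrade pip` was entered on its own: the `-m` flag belongs to the `python` executable, i.e. the full command is `python -m pip install --upgrade pip`. Running pip through the interpreter this way also sidesteps the "not on PATH" warning, because it always uses the pip tied to that exact Python. A small illustration of the same idea from inside Python (the helper name is mine):

```python
import subprocess
import sys

def pip_version():
    """Invoke pip as 'python -m pip' via this exact interpreter,
    so it works regardless of what is on PATH."""
    result = subprocess.run(
        [sys.executable, "-m", "pip", "--version"],
        capture_output=True,
        text=True,
    )
    return result.stdout.strip()

# pip_version() returns something like "pip 24.0 from ... (python 3.11)"
```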
1 like • Sep 12
@Anaxareian Aia I have deleted everything and started again. It now looks like VS Code is working as a Windows installation. Thanks
1 like • Sep 12
@Nick Young Yes, I was. Thanks for clearing that up. 😁
Keep banging my head against a brick wall.
I wonder if anyone has been looking at the Agent-Zero framework? I have been trying to get it running without an OpenAI API key, because it allows the use of locally installed LLMs. I have been trying to fix errors, but all the AI tools are sending me around in circles. I would be interested to hear if anyone has had any experience and/or success with this.
Carl Scutt
@carl-scutt-7219
In between tech jobs, so looking to move into AI, but not sure which direction.

Joined Jul 29, 2024