Scratching my head about this for a long time
The Introduction to LangChain (LLM Applications) video was very interesting, and it touches on some things I have been struggling with.
My question about this video goes like this.
My big battle is using my preferred LLM setup, in this case Ollama. Although most frameworks say they are compatible with a local Ollama instance, I have never been able to get a straight answer from anyone about how to actually make that happen.
I found it relatively easy to get working models running on my local machine with a GPU, so I run Ollama locally with something like AnythingLLM in a Docker container. But when I attempt to host it live for the world, things start going wrong. Most cloud hosts that can run Ollama are pretty expensive, and if you decide to use a paid-for product instead, there are other restrictions such as cost and the number of requests.
In my mind it would be perfect to run lightweight agents just like in this video against a private LLM hosted somewhere, or to use an endpoint at Hugging Face, but how to do that is still a mystery to me.
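From what I can piece together, the Hugging Face side might look something like the sketch below, assuming the langchain-huggingface package and an Inference API token (the repo_id is just an example model, and the token is a placeholder), but I am not sure this is the right approach:

```python
# Rough sketch of calling a hosted Hugging Face model from LangChain.
# Assumes: pip install langchain-huggingface, plus a valid HF API token.
from langchain_huggingface import HuggingFaceEndpoint

llm = HuggingFaceEndpoint(
    repo_id="mistralai/Mistral-7B-Instruct-v0.2",  # example model, not a recommendation
    huggingfacehub_api_token="hf_xxx",             # placeholder token
    max_new_tokens=256,
)

print(llm.invoke("Say hello in one short sentence."))
```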
So, to my actual question: if I want to replicate what David is doing in this video, how would I reference either a local or a remotely hosted Ollama installation?
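To make it concrete, this is roughly what I imagine the LangChain side would look like, assuming the langchain-ollama package, with only the base_url changing between my local machine and a remote host (the remote hostname below is just a placeholder):

```python
# Minimal sketch: pointing LangChain at a local vs. remote Ollama server.
# Assumes: pip install langchain-ollama, and a model already pulled via `ollama pull`.
from langchain_ollama import ChatOllama

# Local Ollama on my own machine (11434 is Ollama's default port).
local_llm = ChatOllama(
    model="llama3",
    base_url="http://localhost:11434",
)

# Remotely hosted Ollama: same code, different base_url.
# The hostname here is a placeholder, not a real endpoint.
remote_llm = ChatOllama(
    model="llama3",
    base_url="http://my-ollama-server.example.com:11434",
)

print(local_llm.invoke("Say hello in one short sentence.").content)
```

Is that roughly the idea, or is there more to it when the agent runs somewhere other than the machine hosting Ollama?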