I am seriously confused about this
Wondering if anyone can explain something to me. I am going through the Introduction to LangChain and looking at this page to see which LLM models can be used: https://python.langchain.com/docs/integrations/llms/. For example, my preferred model is listed there as OllamaLLM (langchain-ollama).

On this page https://python.langchain.com/docs/tutorials I see some interesting things I want to work with:

- Build a Simple LLM Application with LCEL
- Build a Chatbot
- Build vector stores and retrievers
- Build an Agent

Now, if I take a look at building a chatbot here https://python.langchain.com/docs/tutorials/chatbot/, I can see instructions on how to set up, install, etc. For example:

```
pip install -qU langchain-xxxxx
```

Available options: OpenAI, Anthropic, Azure, Google, Cohere, NVIDIA, FireworksAI, Groq, MistralAI, Together

```python
import getpass
import os

os.environ["ANTHROPIC_API_KEY"] = getpass.getpass()

from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(model="claude-3-5-sonnet-20240620")
```

My question is this: if I want to use Ollama, is it just a case of changing the code like this?

```python
os.environ["OLLAMA_API_KEY"] = getpass.getpass()

from langchain_ollama import ChatOllama

model = ChatOllama(model="model_name")
```

And finally, if this is the case, how do I go about getting the Ollama API key?

Thanks for your help :o)
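
In case it helps to see what I mean, here's the fuller version I was planning to try, without any API key set at all. "llama3" is just a placeholder for whatever model I've actually pulled, and I'm assuming ChatOllama talks to a locally running Ollama instance:

```python
from langchain_ollama import ChatOllama

# "llama3" is just a placeholder for whatever model I've pulled with `ollama pull`;
# I haven't set any OLLAMA_API_KEY here because I don't know where I'd get one.
model = ChatOllama(model="llama3")

# Same kind of call as in the chatbot tutorial
response = model.invoke("Hi, are you working?")
print(response.content)
```

Is that roughly right, or does it still need a key somewhere?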