
Memberships

Prompt Monkey

Public • 393 • Free

16 contributions to Prompt Monkey
Overview of strategies to fine-tune LLMs
A high-level yet concise article on fine-tuning LLMs, shared for reference: https://www.kdnuggets.com/the-best-strategies-for-fine-tuning-large-language-models
8
0
Top-k Sampling
Another technique for tuning the model's output: use top-k sampling to limit the model's choices to the k most likely next tokens at each step. This helps produce more coherent and contextually appropriate responses. For instance, setting k to 50 restricts the model to its top 50 choices. This can be set via API or library parameters; refer to my temperature adjustment post earlier!
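A minimal sketch of what this looks like in code, assuming the Hugging Face `transformers` library with `gpt2` as an illustrative model (note that OpenAI's completion API exposes `top_p` rather than `top_k`, so `transformers` is used here):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The key to a good prompt is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,   # sample instead of greedy decoding
    top_k=50,         # consider only the 50 most likely next tokens
    max_new_tokens=40,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```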
9
0
Using “temperature adjustment” in prompting
An interesting conceptual explanation of temperature adjustment:

**Temperature** is a parameter that controls the randomness of the model's output. It affects how creative or deterministic the responses will be. Here's how it works:

- **Low Temperature (e.g., 0.2)**: Makes the model's output more focused and deterministic. It tends to choose the highest-probability next token, resulting in more predictable and precise responses.
- **High Temperature (e.g., 0.8)**: Increases the randomness of the output. The model will choose among the top tokens with more diversity, making the responses more creative and varied but potentially less coherent.

### Using Temperature Adjustment with OpenAI's API

If you are using OpenAI's API, you can set the `temperature` parameter in your API request. Here's a simple example in Python using the `openai` package:

1. **Install the OpenAI Package**:

   ```bash
   pip install openai
   ```

2. **API Key Setup**:

   ```python
   import openai

   # Replace 'your-api-key' with your actual OpenAI API key
   openai.api_key = 'your-api-key'
   ```

3. **Making a Request with Temperature Adjustment**:

   ```python
   response = openai.Completion.create(
       engine="text-davinci-003",  # You can replace this with the model you're using
       prompt="Explain the importance of temperature in machine learning models.",
       max_tokens=150,   # Limits the number of tokens in the response
       temperature=0.7   # Adjust the temperature value here
   )

   print(response.choices[0].text.strip())
   ```

### Example Code Breakdown

- **engine**: Specifies the model you are using (e.g., `text-davinci-003`).
- **prompt**: The text prompt you provide to the model.
- **max_tokens**: The maximum number of tokens to generate in the response.
- **temperature**: The temperature setting that controls the randomness of the output.

By adjusting the `temperature` parameter, you can control the balance between creativity and determinism in the responses you get from the model.

### Experimenting with Different Temperatures
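Here is a minimal sketch of such an experiment, assuming the same pre-1.0 `openai` package and example model as above: hold the prompt fixed and vary only `temperature` to compare outputs side by side.

```python
import openai

openai.api_key = 'your-api-key'  # replace with your actual key

prompt = "Write a one-line slogan for a coffee shop."

# Hold the prompt fixed and vary only the temperature to compare outputs.
for temp in (0.2, 0.7, 1.0):
    response = openai.Completion.create(
        engine="text-davinci-003",
        prompt=prompt,
        max_tokens=30,
        temperature=temp,
    )
    print(f"temperature={temp}: {response.choices[0].text.strip()}")
```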
9
3
New comment May 31
2 likes • May 15
@Yasin Ertan In addition, using the API one can also specify `max_tokens` to control the length of the results. Will write another discussion thread on that!
PaliGemma – Google's Cutting-Edge Open Vision Language Model
Will try this out soon! https://huggingface.co/blog/paligemma Has anyone tried other vision language models?
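For anyone who wants to try it, here is a minimal sketch based on the linked blog post, assuming `transformers` >= 4.41. The checkpoint `google/paligemma-3b-mix-224` and the image URL are illustrative placeholders, and the checkpoint is gated, so you must first accept Google's license on Hugging Face.

```python
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from PIL import Image
import requests

model_id = "google/paligemma-3b-mix-224"  # illustrative checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)

# Placeholder image; any image URL or local file works.
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text="caption en", images=image, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)

# Strip the prompt tokens so only the generated caption is printed.
generated = output[0][inputs["input_ids"].shape[-1]:]
print(processor.decode(generated, skip_special_tokens=True))
```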
6
1
New comment May 31
Hi, I'm Nelo
Hello everyone, my name is Nelo and I'm a risk management professional. I'm also an AI enthusiast, and I'm excited to figure it all out 😊😊.
11
6
New comment May 15
1 like • May 14
Welcome Nelo!
Phee Lip Sim
Level 4 • 32 points to level up
@phee-lip-sim-4983
Prompt engineering guru

Active 154d ago
Joined May 1, 2024