Video writing assistant prompt
The way I use the video writing assistant is by giving it a transcript, which it converts into structure. Here is the system prompt I'm testing at the moment. I usually continue ideating on each individual section of the video structure it creates. Of course, modify the prompt as you see fit.

The prompt:

You are a video script writer assistant. Your task is to take the transcripts I give you and turn them into well-structured video scripts ready for production. Here are the key requirements for the video scripts:

Introduction: Begin with a brief, attention-grabbing introduction that sets up the topic and hooks the viewer. Mention the actual key points and specific themes that will be covered in the video.

Main Content: Divide the main content into 3-5 sections or chapters based on the natural flow of ideas from the transcript. For each section, provide a concise description of what will be covered. Use transcript excerpts verbatim when appropriate, but paraphrase, reorganize, and expand the content for better clarity and flow as a video script. Add speaker notes, visual cues (e.g., "show b-roll"), or suggestions for graphics/animations where relevant.

Conclusion: Summarize the key takeaways or learnings from the video. Provide a clear call-to-action (e.g., subscribe, like, comment) and end with an engaging closing line.

Please maintain a fun, inspiring, and conversational tone appropriate for my YouTube channel's target audience, which is creators and storytellers. Aim for scripts that are engaging, informative, and easy to follow in video format.

I will provide you with the text transcripts. Your task is to use this information to generate a structured video script following the above guidelines.
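A minimal sketch of how a prompt like this gets wired up, assuming a chat-completion-style API where the system prompt and the transcript travel as separate messages. Only the message payload is shown; the actual client call depends on your provider's SDK, and `build_messages` is my own helper name, not part of any library:

```python
# Sketch: pair the system prompt with a transcript in a
# chat-completion-style message list. The system prompt is truncated
# here; paste the full prompt from the post in its place.

SYSTEM_PROMPT = (
    "You are a video script writer assistant. Your task is to take the "
    "transcripts I give you and turn them into well-structured video "
    "scripts ready for production."
    # ... rest of the prompt above continues verbatim
)

def build_messages(transcript: str) -> list[dict]:
    """Build the payload: system prompt first, transcript as the user turn."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Transcript:\n\n{transcript}"},
    ]

messages = build_messages("Today we're talking about story structure...")
```

Keeping the transcript in its own user message makes it easy to follow up with per-section ideation turns in the same conversation, which is how the workflow above continues.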
Image distance with chat GPT
Every time I try to create images using ChatGPT/DALL·E 3, the images are very zoomed in. How can I control the image distance so the subject is further from the camera?
"Multi-Candidate Needle Prompting" for large context LLMs (Gemini 1.5)
Gemini 1.5's groundbreaking 1M token context window is a remarkable advancement in LLMs, providing capabilities unlike any other currently available model. With its 1M context window, Gemini 1.5 can ingest the equivalent of 10 Harry Potter books in one go. However, this enormous context window is not without its limitations. In my experience, Gemini 1.5 often struggles to retrieve the most relevant information from the vast amount of contextual data it has access to.

The "Needle in a Haystack" benchmark is a well-known challenge for LLMs, which tests their ability to find specific information within a large corpus of text. This benchmark is particularly relevant for models with large context windows, as they must efficiently search through vast amounts of data to locate the most pertinent information.

To address this issue, I have developed a novel prompting technique that I call "Multi-Candidate Needle Prompting." This approach aims to improve the model's ability to accurately retrieve key information from within its large context window. The technique involves prompting the LLM to identify 10 relevant sentences from different parts of the input text, and then asking it to consider which of these sentences (i.e. candidate needles) is the most pertinent to the question at hand before providing the final answer.

This process bears some resemblance to Retrieval Augmented Generation (RAG), but the key difference is that the entire process is carried out by the LLM itself, without relying on a separate retrieval mechanism. By prompting the model to consider multiple relevant sentences from various parts of the text, "Multi-Candidate Needle Prompting" promotes a more thorough search of the available information and minimizes the chances of overlooking crucial details. Moreover, requiring the model to explicitly write out the relevant sentences serves as a form of intermediate reasoning, providing insights into the model's thought process.
The attached screenshot anecdotally demonstrates the effectiveness of my approach.
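The technique can be sketched as a plain prompt template: document first, then the question, then the three-step instruction. The function name and exact wording below are my own illustration, not a fixed spec from the post:

```python
def build_needle_prompt(document: str, question: str, n_candidates: int = 10) -> str:
    """Multi-Candidate Needle Prompting: ask the model to quote candidate
    sentences from across a long document, pick the best one, then answer."""
    return (
        f"{document}\n\n"
        f"Question: {question}\n\n"
        f"Step 1: Quote {n_candidates} sentences from different parts of the "
        "text above that could help answer the question (the candidate needles).\n"
        "Step 2: State which single candidate is the most pertinent, and why.\n"
        "Step 3: Give your final answer, based only on that sentence."
    )

# Example: build a prompt for a (toy) long document.
prompt = build_needle_prompt(
    "...entire contract text goes here...",
    "Who signed the contract?",
)
```

A nice side effect of this template is that the quoted candidates in the model's response double as inspectable intermediate reasoning: if the final answer is wrong, you can usually see whether retrieval or selection failed.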
Anthropic Prompt Library
Anthropic has released a prompt library. While it's OK, it's largely centered around coding tasks. Let's analyse the prompts and see if there's anything we can learn from them and make them better. https://docs.anthropic.com/claude/prompt-library
AI Reads Personalities Like a Book
This report from Nature magazine, published this week, is very interesting. It answers the question: can large language models (LLMs) predict how people perceive the personalities of known/public figures?

Here's an article that explains the study: https://promptengineering.org/groundbreaking-nature-study-reveals-ais-ability-to-predict-public-figures-perceived-personalities/

Here is the study: https://www.nature.com/articles/s41598-024-57271-z

The study found that GPT-3 could predict human-rated personality traits and likability for public figures with high accuracy. Accuracy exceeded the predictive power of individual human raters and was higher for more popular figures. Models showed strong face validity based on the personality-descriptive adjectives at the extremes of predictions.

This highlights something I've found from years of testing LLMs: they respond well to personalities/skills/archetypes. This is even more powerful in GPT-4, and I can only imagine it becoming more powerful in subsequent models.

Let's hear your experiences!
Gen AI Mastermind
skool.com/generative-minds-prompts-6761
Ideas into Impact: Business, Art, AI Prompts and News Hub