
Memberships

AI Mastermind

Public • 465 • Free

203 contributions to AI Mastermind
[DEMO] ChatGPT Advanced Voice Rolls Out for Plus and Team Users :: The AI Brief
Key Highlights:
- Advanced Voice Conversations: Now available to Plus and Team users, leveraging GPT-4o’s native audio capabilities. These conversations feel more natural by recognizing non-verbal cues like speech speed, tone, and emotion.
- Voice Options: Users can choose from nine distinct voice personalities, such as "Arbor" (easygoing), "Ember" (confident), and "Vale" (inquisitive), tailoring the voice to the interaction.
- Usage Limits: Advanced voice has daily time restrictions. Plus and Team users receive a 15-minute warning as they approach their daily limit. After hitting the limit, conversations revert to standard voice.
- Standard Voice Access: Available to all users, standard voice transcribes input to text before responding, using models like GPT-4o mini. These conversations do not support real-time emotional cues and count toward message limits.
- Voice Chats in the Background: Both advanced and standard voice chats can continue in the background, even when switching between apps or locking the screen.
- Resuming Conversations: Advanced voice chats can be resumed in text, standard voice, or advanced voice. However, standard voice chats cannot be resumed in advanced mode.
- Audio & Transcriptions: Audio clips from advanced voice conversations are stored with transcriptions and deleted 30 days after chat deletion unless shared for model training.
- Privacy and Customization: Users can control sharing settings, opting in or out of using audio clips for training purposes. Only Free and Plus accounts can share audio clips to help train models.
- Tips for Better Conversations: Using headphones and enabling Voice Isolation on iPhone can help avoid interruptions during advanced voice chats, though the feature isn’t optimized for car Bluetooth or speakerphone.
- Voice and GPTs: Advanced voice is not yet available for GPTs. If used with a GPT, users are redirected to standard voice, which includes the unique "Shimmer" voice option.
- Content Limitations: To respect creators’ rights, voice conversations cannot generate musical content or singing.
5
6
New comment Oct 9
[DEMO] ChatGPT Advanced Voice Rolls Out for Plus and Team Users :: The AI Brief
0 likes • Oct 9
LOVE this, but I don’t use it because I know there’s a two-hour total limit per 24 hours. I’m making the mistake of saving up my time and then not using it, because at that point there’s a lot of pressure to use it wisely and not just for fun.
🔊 Treasure Hunt: Uncover the Functional Audio Cues of ChatGPT's Enhanced Voice Mode 🎧
ChatGPT’s advanced voice mode introduces functional audio cues, a feature that adds purposeful sounds to enhance your experience during certain interactions. These aren’t just random “sound effects” – they’re designed to assist in specific moments. For instance, during a mindfulness breathing exercise 🧘‍♂️, ChatGPT played the sound of breathing in through the nose and out through the mouth to help guide the process.

However, ChatGPT itself doesn’t trigger these sounds directly. Instead, a separate part of the system (working behind the scenes) generates them, and the AI isn’t aware of when or why this happens. OpenAI has built these cues for particular situations, but the exact triggers remain outside of the AI’s knowledge and control.

Have you come across other moments where ChatGPT produced sounds? Share your experiences of when and how these audio cues were triggered!

On a side note: during a network glitch, there was a strange instance where I heard a brief 2-second clip from a totally different chat, though it was AI-generated, not a human voice.
1
0
The AI Brief: OpenAI’s o1-Preview & Advanced AI Reasoning
Key Highlights:
1. OpenAI released o1-preview, a new series of AI models designed to spend more time thinking before responding, enhancing reasoning in complex tasks across science, coding, and math.
2. The models employ reinforcement learning and chain-of-thought processing to mimic human problem-solving, refining their thinking process and recognizing mistakes.
3. Performance Metrics:
• Solved 83% of International Mathematics Olympiad qualifying exam problems, compared to GPT-4o’s 13%.
• Outperforms experts on PhD-level science questions in physics, chemistry, and biology.
• Achieved the 89th percentile in Codeforces competitive programming contests.
4. Availability:
• Integrated into ChatGPT for Plus and Team users.
• Two versions: o1-preview and o1-mini. o1-mini has rolled out to all eligible users, and o1-preview to some users already.
• API access is available at $15 per million input tokens and $60 per million output tokens, higher than GPT-4o due to its advanced capabilities.
5. Some Initial Use Cases:
• Assists in advanced coding, document analysis, and legal workflows.
• Helps healthcare researchers annotate cell sequencing data.
• Aids physicists in generating complex mathematical formulas.
• Acts as a great smart assistant for brainstorming and problem-solving.
• Benefits users who provide minimal context or haven’t fully leveraged AI reasoning.

Thoughts: This is huge! OpenAI’s o1-preview feels like a monumental step forward in AI. The fact that it can outperform experts and tackle problems that were previously out of reach is genuinely exciting. For professionals dealing with complex tasks, the potential gains could be transformative. It’s like having a brilliant colleague who’s always ready to dive deep into tough challenges.

I have access to both models so far. Very impressed by o1-preview. Do you have access? What are your thoughts so far?
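The per-token API pricing above translates directly into a per-request cost estimate. A minimal sketch in Python, with the rates hard-coded from the figures quoted in the post (the function name is my own, not part of any SDK):

```python
# Estimate o1-preview API cost from token counts.
# Rates are the per-million-token prices quoted above.
INPUT_RATE = 15.00 / 1_000_000   # USD per input token
OUTPUT_RATE = 60.00 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one o1-preview request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: 10k input tokens and 2k output tokens
print(round(estimate_cost(10_000, 2_000), 4))  # 0.27
```

Note that for o1 models the billed output includes the invisible reasoning tokens, so actual output counts can be much larger than the visible answer.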
7
8
New comment Sep 17
The AI Brief: OpenAI’s o1-Preview & Advanced AI Reasoning
2 likes • Sep 15
Their developer documentation says: The o1-preview and o1-mini models offer a context window of 128,000 tokens. Each completion has an upper limit on the maximum number of output tokens, which includes both the invisible reasoning tokens and the visible completion tokens. The maximum output token limits are:
- o1-preview: up to 32,768 tokens
- o1-mini: up to 65,536 tokens
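Those limits can be sanity-checked before sending a request. A minimal sketch assuming the figures quoted above; the `fits` helper is hypothetical, not part of the OpenAI API:

```python
# Output-token ceilings quoted in the developer docs excerpt above.
# These include invisible reasoning tokens plus visible completion tokens.
MAX_OUTPUT_TOKENS = {
    "o1-preview": 32_768,
    "o1-mini": 65_536,
}
CONTEXT_WINDOW = 128_000  # shared 128k-token context window

def fits(model: str, prompt_tokens: int, requested_output: int) -> bool:
    """Rough check that a request stays within the documented limits."""
    return (requested_output <= MAX_OUTPUT_TOKENS[model]
            and prompt_tokens + requested_output <= CONTEXT_WINDOW)

print(fits("o1-preview", 90_000, 32_768))   # True: 122,768 <= 128,000
print(fits("o1-mini", 100_000, 65_536))     # False: exceeds the 128k window
```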
2 likes • Sep 15
I don’t know how that compares numerically to 4o, but from my tests it’s significantly higher.
o1: OpenAI's NEW Prompting Advice
THIS IS TAKEN FROM THEIR DEVELOPER PAGES.

These models perform best with straightforward prompts. Some prompt engineering techniques, like few-shot prompting or instructing the model to "think step by step," may not enhance performance and can sometimes hinder it. Here are some best practices:
- Keep prompts simple and direct: The models excel at understanding and responding to brief, clear instructions without the need for extensive guidance.
- Avoid chain-of-thought prompts: Since these models perform reasoning internally, prompting them to "think step by step" or "explain your reasoning" is unnecessary.
- Use delimiters for clarity: Use delimiters like triple quotation marks, XML tags, or section titles to clearly indicate distinct parts of the input, helping the model interpret different sections appropriately.
- Limit additional context in retrieval-augmented generation (RAG): When providing additional context or documents, include only the most relevant information to prevent the model from overcomplicating its response.

Reasoning Models

OpenAI o1 series models are new large language models trained with reinforcement learning to perform complex reasoning. o1 models think before they answer, and can produce a long internal chain of thought before responding to the user. o1 models excel in scientific reasoning, ranking in the 89th percentile on competitive programming questions (Codeforces), placing among the top 500 students in the US in a qualifier for the USA Math Olympiad (AIME), and exceeding human PhD-level accuracy on a benchmark of physics, biology, and chemistry problems (GPQA).

There are two reasoning models available in the API:
1. o1-preview: an early preview of our o1 model, designed to reason about hard problems using broad general knowledge about the world.
2. o1-mini: a faster and cheaper version of o1, particularly adept at coding, math, and science tasks where extensive general knowledge isn't required.

o1 models offer significant advancements in reasoning, but they are not intended to replace GPT-4o in all use cases.
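The "use delimiters for clarity" tip is easy to apply when assembling prompts as strings. A minimal sketch, where the `<document>` tag and `build_prompt` function are arbitrary choices of mine, not an OpenAI convention:

```python
# Build a simple, delimiter-structured prompt in the style recommended above:
# one short, direct instruction, with XML-style tags fencing off the input text
# so the model can tell instruction from material.
def build_prompt(instruction: str, document: str) -> str:
    return (
        f"{instruction}\n\n"
        f"<document>\n{document}\n</document>"
    )

prompt = build_prompt(
    "Summarize the document below in three bullet points.",
    "o1 models think before they answer...",
)
print(prompt)
```

The same string would then be sent as a single user message; per the advice above, no "think step by step" framing is added.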
5
0
AI Energy Consumption’s Future Benefits
The suggestion that AI consumes too much energy can be critiqued by comparing it to the historical energy demands of developing new technologies and machines during phases of innovation. While the energy consumption of AI systems, particularly for training large models, is substantial, this needs to be viewed in the broader context of technological evolution.

1. Historical Context of Technological Development: Throughout history, new technologies have often required significant initial energy investments and resources. For instance, the Industrial Revolution saw a dramatic increase in energy use due to steam engines and factories, but it ultimately led to efficiencies that improved productivity and overall living standards. Similarly, the development of the internet and digital technologies consumed vast amounts of energy during their inception phases, but these technologies have since revolutionized communication, reduced certain forms of energy use (e.g., by minimizing the need for physical mail and commuting), and created new sectors of the economy.

2. Energy Use vs. Long-term Benefits: The energy consumption of AI needs to be balanced against its potential long-term benefits. AI has the potential to optimize energy use across various sectors, such as optimizing power grids, reducing waste in supply chains, and enhancing efficiency in manufacturing processes. For example, AI-driven technologies can lead to smarter cities with better traffic management, reducing fuel consumption and emissions. Thus, while the initial energy expenditure in developing AI technologies is high, the downstream energy savings and environmental benefits could potentially offset this cost.

3. Comparative Analysis with Other Technologies: When comparing AI with other technological advancements, it’s important to recognize that many transformative technologies initially appear to consume an unsustainable amount of energy or resources. For example, the manufacturing and widespread adoption of personal computers and smartphones required substantial energy and raw materials. However, these devices have enabled numerous efficiencies and innovations across industries, reducing overall energy expenditure in areas like travel (via virtual meetings) and paper usage (via digital documentation).
3
4
New comment Sep 13
0 likes • Sep 13
I thought the post would fit in here too.
0 likes • Sep 13
To critique the suggestion that AI consumes too much energy, it’s important to consider how the development of technology itself has led to innovations that significantly reduce energy consumption over time. Comparing the energy use of early-generation processors with more advanced ones, such as those developed by companies like NVIDIA and Groq, illustrates this trend.

1. Evolution of Energy Efficiency in Processors
• Early Processors (Generation 1): The first generation of processors, such as the Intel 4004 (introduced in 1971), was relatively simple and consumed modest amounts of energy due to its limited processing capabilities. However, as demand for computing power grew, energy consumption increased with more powerful processors. For instance, early AI training tasks required substantial computational power and energy, often using less efficient hardware designed for general-purpose computing.
• Modern Processors (Generation 10 and Beyond): Modern processors, including those designed for AI, such as NVIDIA’s A100 Tensor Core GPUs and specialized AI chips from Groq, are engineered for both performance and energy efficiency. These chips can execute complex AI tasks with significantly lower power consumption per operation compared to earlier generations. For example, NVIDIA’s A100 GPU uses advanced architectures like Ampere and employs techniques such as mixed-precision computing, which significantly reduces the amount of energy required for AI workloads by optimizing the precision level needed for specific tasks.

2. Specific Examples: NVIDIA and Groq
• NVIDIA’s A100 Tensor Core GPU: The NVIDIA A100, a high-performance GPU designed for AI and high-performance computing, provides up to 20 times the performance of its predecessor, the NVIDIA V100, while using a similar amount of power. This efficiency gain is due to innovations such as more efficient transistors, improved cooling mechanisms, and architecture designs that maximize performance per watt.
Dave Kemsley
@58737067
Did AI in the ‘80s, but insufficient data & computing power. Real-time embedded development & signal processing. Rapid deep-dive back into AI.

Active 49d ago
Joined Feb 5, 2024