In a stunning announcement reverberating through the tech world, Kyutai introduced Moshi, a revolutionary real-time native multimodal foundation model. Moshi matches, and in some respects surpasses, the capabilities OpenAI showcased with GPT-4o in May.
Moshi is designed to understand and express emotions, and it can speak with different accents, including a French one. According to Kyutai, it can listen and generate audio and speech while maintaining a seamless flow of textual thought. One of Moshi's standout features is its ability to handle two audio streams at once, allowing it to listen and speak simultaneously.
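Conceptually, handling two audio streams at once means the model consumes an incoming frame and emits an outgoing frame at every timestep, rather than waiting for the user to finish speaking. The sketch below illustrates that full-duplex loop; the function and frame names are illustrative assumptions, not Kyutai's actual API.

```python
# Hypothetical full-duplex loop: at each timestep the model ingests one
# incoming (user) audio frame while emitting one outgoing (model) frame.
# Names and the echo-style response are placeholders, not Kyutai's API.

def full_duplex_step(incoming_frame, heard_so_far):
    # A real system would run the model here; we emit a placeholder frame.
    outgoing_frame = f"response-to-{incoming_frame}"
    heard_so_far.append(incoming_frame)
    return outgoing_frame, heard_so_far

heard = []
user_stream = ["frame0", "frame1", "frame2"]
model_stream = []
for frame in user_stream:
    out, heard = full_duplex_step(frame, heard)
    model_stream.append(out)  # speaking while still listening
```

The key property is that listening and speaking advance in lockstep per timestep, so neither side has to yield the channel.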
This real-time interaction is underpinned by joint pre-training on a mix of text and audio, leveraging synthetic text data generated with Helium, a 7-billion-parameter language model developed by Kyutai.
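One common way to realize joint text-audio pre-training is to interleave text tokens with discretized audio tokens into a single training sequence. The sketch below shows that idea under assumed token formats and an assumed interleaving ratio; it is not Kyutai's actual training recipe.

```python
# Illustrative joint text-audio sequence construction: text tokens and
# discretized audio tokens are merged into one stream for pre-training.
# Token values and the 2:1 audio-to-text ratio are assumptions.

text_tokens = [("txt", t) for t in ["hello", "world"]]
audio_tokens = [("aud", a) for a in [101, 102, 103, 104]]

def interleave(text, audio, ratio=2):
    """Emit `ratio` audio tokens after each text token, then any leftovers."""
    seq, ai = [], 0
    for tok in text:
        seq.append(tok)
        seq.extend(audio[ai:ai + ratio])
        ai += ratio
    seq.extend(audio[ai:])  # trailing audio with no paired text
    return seq

sequence = interleave(text_tokens, audio_tokens)
```

Training on such mixed sequences lets a single model learn to predict both modalities, which is what enables speech generation that stays aligned with an internal stream of text.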