Mira Murati's Thinking Machines Unveils Real-Time AI Interaction


Summary

Former OpenAI CTO [[mira-murati|Mira Murati]]'s new AI venture, [[thinking-machines|Thinking Machines]], has announced its focus on developing 'interaction models.' These models are designed to move beyond the current, turn-based way humans interact with AI, which typically requires waiting for the AI to finish processing before providing new input. Instead, Thinking Machines aims for AI that can continuously process audio, video, and text in real time, allowing for more natural, collaborative exchanges akin to human-to-human communication. The company plans a limited research preview in the coming months, with a wider release targeted for later in 2025.

Key Takeaways

  • Mira Murati's new company, Thinking Machines, is developing 'interaction models' for AI.
  • These models aim for real-time, multi-modal collaboration, breaking current AI limitations.
  • The goal is to make AI interaction as natural as human collaboration.
  • A limited research preview is planned for the coming months, with a wider release later in 2025.
  • The venture faces competition and has already seen key personnel departures.

Balanced Perspective

Thinking Machines is developing a new paradigm for AI interaction, moving from discrete, turn-based exchanges to continuous, multi-modal processing. The core innovation lies in addressing the 'bandwidth bottleneck' between human input and AI output. While the concept is compelling, the practical implementation and actual performance of these 'interaction models' remain to be seen. The company has announced plans for a limited research preview and a wider release later in 2025, suggesting that widespread availability is still some time away.

Optimistic View

This represents a significant leap forward in human-AI collaboration, potentially unlocking unprecedented levels of productivity and creativity. By breaking the 'single thread' bottleneck, [[thinking-machines|Thinking Machines]] could enable AI to understand human intent and context far more deeply, leading to more intuitive and powerful tools. The ability for AI to 'think, respond, and act in real time' across multiple modalities could redefine how we work, learn, and even interact with our physical environment, as seen in examples like real-time translation and posture correction.

Critical View

The ambitious goal of real-time, multi-modal AI interaction faces considerable technical hurdles and potential ethical quandaries. The continuous processing of audio, video, and text raises significant privacy concerns, especially without robust safeguards. Furthermore, the history of AI development is littered with ambitious projects that struggled to deliver on their promises. With key personnel already departing [[thinking-machines|Thinking Machines]] for competitors such as [[meta|Meta]] and [[openai|OpenAI]], the company's ability to execute on Murati's vision is far from guaranteed.

Source

Originally reported by The Verge