OpenAI's $20,000/mo AI Models: Are They Worth It?

OpenAI plans to release high-cost AI agents, reportedly ranging from $2,000/mo for low-end agents to $20,000/mo for PhD-level research agents. Plus, updates on Google's AI search, DuckDuckGo's AI chatbots, and the latest advancements in text-to-speech, AI video generation, and more.

March 22, 2025


Discover the latest advancements in AI, from OpenAI's high-end AI agents to Google's new search-powered AI mode. Explore the benefits of these cutting-edge technologies and how they can enhance your productivity and creativity.

How OpenAI is Pricing Their AI Services

According to the article, OpenAI is planning to release more expensive AI models and services. Specifically:

  • They plan to sell "low-end agents" for a reported $2,000 per month, targeted at high-income knowledge workers.
  • They will offer "mid-tier agents" for software development, potentially costing $10,000 per month.
  • They will also provide "high-end agents" for PhD-level research, which could cost $20,000 per month.

The idea behind these more expensive agents is that they will be able to do more autonomous work than the $200 per month ChatGPT Pro plan. However, the pricing is not yet confirmed, as OpenAI is reportedly still testing different price points.

The article notes that some see these prices as high, with the "running joke" being that one could hire a good human researcher or coder for a similar monthly cost. Nonetheless, OpenAI seems to be positioning these agents as premium services for enterprises and high-income individuals.

GPT-4.5 Now Available to Plus Users

According to the transcript, OpenAI has made GPT-4.5 available to all Plus users of ChatGPT. Previously, the new model was only accessible to Pro members on the $200 per month plan, but OpenAI has now expanded access to the $20 per month Plus plan.

The transcript notes that when the author first started using GPT-4.5, they were underwhelmed and unimpressed, as the model did not perform well on tasks like spelling and benchmarks. However, after using it for over a week, the author's opinion changed, and they now find GPT-4.5 to be the best model for more creative tasks like writing and summarization.

The availability of GPT-4.5 to Plus users is seen as a positive development, as it provides more affordable access to the improved language model capabilities.

New AI Features in Google, DuckDuckGo, and Microsoft Products

Google has been rolling out several new AI-powered features across its products:

  • Google is introducing a new AI-powered search mode that can search the web and provide AI-generated responses, similar to Perplexity. This feature will initially be available to Google One AI Premium subscribers.
  • Google Shopping now has AI-generated fashion items that it tries to match with real products. Google also has a virtual makeup try-on feature.
  • Google Sheets has new AI-powered data analysis capabilities, allowing users to generate insights like correlations and outliers.
  • Google is bringing screen sharing and camera sharing to its mobile search, allowing users to show the AI what they're looking at.
  • Google is also rolling out a new feature on Android to help detect potential scam text messages.

DuckDuckGo has also started integrating AI features:

  • DuckDuckGo offers a private, anonymous AI chatbot experience with options to use different language models.
  • DuckDuckGo's regular search engine now has AI-powered overviews for certain queries, like how to change a tire.

In Microsoft news:

  • Microsoft's Copilot AI assistant in Windows has received a UI update.

Overall, these tech giants are rapidly integrating AI capabilities across their product suites, providing users with new ways to leverage AI for search, analysis, and creative tasks.

New Open-Source AI Models from Alibaba and Others

In the world of open-source large language models, the big release this week came from Alibaba's Qwen team: QwQ-32B. It's a new reasoning model with only 32 billion parameters, and Alibaba claims it is about as good as, if not better than, DeepSeek R1 on the AIME 24 benchmark, which tests mathematical problem-solving abilities. It scored right about the same as DeepSeek R1, which is a 671 billion parameter model. In other words, a much smaller, much lighter model performs just about as well at math and logical reasoning as DeepSeek R1 did.

On LiveCodeBench, it slightly underperformed R1, but only very slightly. LiveBench, which also tests things like coding, logic, and reasoning, tells a similar story, and QwQ-32B once again outscored DeepSeek R1 on IFEval, which tests how well the model follows instructions. It seems to be right up there with top models like o1-mini and DeepSeek R1.

The QwQ-32B model also beat out all the rest on BFCL, the Berkeley Function-Calling Leaderboard, which tests how well a model can call external functions and tools, with test cases spanning Python, Java, JavaScript, and other languages.
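To make the function-calling idea concrete, here is a rough sketch of the kind of check such a benchmark performs: the model must emit a call that names the right tool with all required arguments. The schema below follows the common OpenAI-style tool format, and `get_weather` is a hypothetical tool for illustration, not an actual BFCL test case:

```python
import json

# A tool schema describing a (hypothetical) function the model may call.
weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def check_tool_call(raw_call, schema):
    """Score a model's emitted call: right name, all required args present."""
    call = json.loads(raw_call)
    if call.get("name") != schema["name"]:
        return False
    args = call.get("arguments", {})
    return all(k in args for k in schema["parameters"]["required"])

# A well-formed call that this check would mark correct:
good = json.dumps({"name": "get_weather", "arguments": {"city": "Tokyo"}})
```

Real benchmarks layer more on top (parallel calls, irrelevant-tool detection, executable checks), but the core of what is being graded looks like this.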

Overall, this is a super impressive model for being so small, performing right in there with the best of the best models currently available. Alibaba has made the code and the model weights available on Hugging Face, so if you've got a strong enough computer or you want to run it in your own cloud, you can do that if you know what you're doing. They also made a Hugging Face demo available where you can test it out directly.

Impressive AI Voice Assistants and Text-to-Speech Tools

One of the most impressive AI voice assistants discussed is Sesame's conversational AI tool. The way the voice speeds up and slows down, and has a very natural cadence, is described as the closest thing we've seen to the movie "Her" with Joaquin Phoenix. The voice sounds incredibly human-like and natural.

Another notable tool is Podcastle's new text-to-speech model with over 450 AI voices. While the voices may sound a bit robotic at first, the company claims the quality will improve over time. The tool also lets users upload samples of their own voice to create an AI clone, similar to ElevenLabs.

The article also mentions Teleperformance, a large call center operator, using AI "accent translation" technology to soften the accents of its India-based staff in real time. This could make calls easier to understand, but it raises concerns that the same technology could be used by scammers to disguise where a call is coming from.

Additionally, Amazon Prime is testing AI dubbing to translate TV shows and movies into different languages, and Meta has new research to generate avatars using only four selfie images, which could be used for VR/AR applications.

Overall, the advancements in AI-powered voice assistants and text-to-speech tools are impressive, with examples showcasing very natural-sounding and human-like capabilities. However, there are also concerns about potential misuse of these technologies.

AI in Video Creation and Manipulation

The world of AI has seen some exciting developments in the realm of video creation and manipulation. Here are a few key highlights:

  1. Tencent's Hunyuan Video I2V: Tencent has released a new open-source AI image-to-video model called Hunyuan Video I2V. The videos showcased so far are quite impressive, with examples of a penguin wearing a dragon suit and a soldier in a snowy scene with an aurora borealis. The code is available on GitHub, but a public testing platform has not yet been announced.

  2. Hedra's Character-3: Hedra has unveiled Character-3, a new generation of AI-native video creation tools. Character-3 is described as the "first omni-modal model in production" and is capable of jointly reasoning across image, text, and audio for more intelligent video generation. The demo video showcases features like text-to-video, audio-to-video, character creation, and dynamic background generation.

  3. Tavus AI Models: Tavus has introduced a family of state-of-the-art AI models: Phoenix-3, Raven-0, and Sparrow-0. These models work together to provide a conversational video interface, with Phoenix-3 handling real-time full-face rendering and emotion generation, Raven-0 providing visual understanding and spatial insights, and Sparrow-0 handling natural conversational turn-taking.

  4. Using Super Mario to Benchmark AI: Researchers have been using the classic Super Mario game to benchmark the performance of various AI models. The findings suggest that Claude 3.7 performed exceptionally well in navigating the game, outperforming models like GPT-4 and Gemini.

  5. Opera's AI Browser Operator: The Opera browser is rolling out a new feature called the AI Browser Operator, which is an AI agent built into the browser. This agent can perform tasks on the user's behalf, such as finding and purchasing items online or booking tickets.

These advancements in AI-powered video creation and manipulation showcase the rapid progress being made in this field. As these technologies continue to evolve, we can expect to see even more impressive and innovative applications in the near future.

Apple's M3 Ultra Chip and AI-Focused Devices

Apple revealed their new M3 Ultra chip, a powerful addition to the Apple Silicon lineup. The chip starts at 96GB of unified memory, configurable up to 512GB. This makes the M3 Ultra an AI powerhouse: Apple says it can run large language models with over 600 billion parameters entirely in memory (which implies aggressive quantization; at roughly 4 bits per weight, a 600-billion-parameter model takes about 300GB, fitting within 512GB).

In addition to the M3 Ultra chip, Apple also introduced several new devices focused on AI capabilities:

  • The iPad Air, which will support Apple Intelligence.
  • The Mac Studio, which can be configured with either the M3 Ultra or the M4 Max chip. The Mac Studio is described as a "Powerhouse for AI", able to run large language models with over 600 billion parameters in its memory.
  • The new MacBook Air, also featuring the M4 chip and Apple Intelligence.

These new Apple devices, powered by their latest and greatest silicon, demonstrate the company's focus on delivering hardware optimized for AI workloads. The high memory capacities and advanced processing capabilities of the M3 Ultra and M4 chips will enable users to run sophisticated AI models locally on their Apple devices, rather than relying on cloud-based solutions.

This investment in AI-centric hardware aligns with Apple's broader strategy of bringing powerful, privacy-preserving AI capabilities directly to their customers' devices.

AI-Powered Coding Tools and Standards

Codeium just released Windsurf Wave 4, which includes several exciting new features for AI-powered coding:

  • Preview App Functionality: You can now preview what your apps will do directly within Windsurf.
  • Suggested Actions: After the AI responds, it will suggest what it thinks you should do next.
  • MCP Discoverability: Windsurf now supports the Model Context Protocol (MCP), an open standard developed by Anthropic for connecting large language models to external tools.

MCP is becoming a hot topic in the AI community because it provides a standardized way for language models to interact with third-party applications. Rather than every tool needing a bespoke integration, MCP aims to give large language models one common protocol for taking actions on your behalf across various tools and platforms.
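Under the hood, MCP messages are JSON-RPC 2.0: a client invokes a server's tool with a `tools/call` request naming the tool and its arguments. A minimal sketch of building such a message (the `search_docs` tool and its arguments are hypothetical, and this omits the initialization handshake a real MCP session performs first):

```python
import json

def make_mcp_request(request_id, method, params=None):
    """Build a JSON-RPC 2.0 request of the kind MCP clients and servers exchange."""
    msg = {"jsonrpc": "2.0", "id": request_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# A client asking a server to run a (hypothetical) "search_docs" tool:
call = make_mcp_request(1, "tools/call",
                        {"name": "search_docs",
                         "arguments": {"query": "rate limits"}})
wire = json.dumps(call)  # serialized for the transport (stdio, HTTP, etc.)
```

The point of the standard is that any MCP-aware model client can send this same message shape to any MCP server, instead of each tool inventing its own request format.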

The adoption of MCP is a significant step towards more seamless integration of AI capabilities into the software development workflow. By standardizing the way language models communicate with other applications, MCP has the potential to unlock new possibilities for AI-assisted coding, debugging, and project management.

As AI continues to advance, tools like Windsurf and the emergence of standards like MCP will play a crucial role in empowering developers to leverage AI in their day-to-day work.

Innovative AI Hardware and Neuroscience Advancements

Cortical Labs has created the world's first biological computer, the CL1, which fuses human brain cells with silicon hardware to create fluid neural networks. This approach aims to better capture the nuance of the brain, potentially replacing significant areas of animal testing. By leveraging neurons as a computational substrate, the CL1 has the potential to exhibit more organic and natural forms of intelligence compared to traditional silicon-based AI.

In another development, a fully driverless Maserati MC20 reached a speed of 197.7 mph at the Kennedy Space Center, breaking the previous autonomous speed record of around 192 mph. This achievement showcases the rapid advancements in autonomous vehicle technology.

Additionally, a humanoid robot from PNDbotics demonstrates impressively natural, fluid motion, with dexterous hands offering six degrees of freedom and 12 joints. This level of robotic movement and manipulation mimics human capabilities, highlighting the progress in developing more human-like robots.

These advancements in AI hardware, neuroscience-inspired computing, and robotic engineering represent exciting steps forward in the field of artificial intelligence and its applications. As the technology continues to evolve, we can expect to see even more innovative solutions that push the boundaries of what is possible.

Conclusion

The world of AI is rapidly evolving, with a flurry of new developments and announcements in recent weeks. From OpenAI's plans to offer more expensive AI models to Google's upcoming AI search features, the industry is pushing the boundaries of what's possible.

Key highlights include:

  • OpenAI's plans to offer low-end, mid-tier, and high-end AI agents at reported monthly costs ranging from $2,000 to $20,000.
  • The rollout of GPT-4.5 to ChatGPT Plus users, offering improved performance in creative tasks.
  • Google's upcoming AI search mode that combines web search with AI-generated responses.
  • New AI features in Google Sheets, Google Shopping, and Google's Android operating system.
  • The launch of Alibaba's 32B parameter QwQ model, which performs competitively with much larger language models.
  • Impressive advancements in AI-generated audio, video, and avatars from companies like Sesame, Podcastle, and Meta.

The pace of innovation shows no signs of slowing down, and it will be exciting to see how these new AI capabilities are leveraged and integrated into our daily lives. As always, it's important to stay informed and approach these developments with a critical eye, considering both the benefits and potential risks.

FAQ