Navigating the OpenAI Proposal: Banning DeepSeek, Promoting Democratic AI

Exploring the controversial calls for banning DeepSeek, promoting "Democratic AI," and reshaping copyright laws.

March 21, 2025


This post breaks down OpenAI's recent policy proposal: the statistics it cites on AI adoption, its claims about scaling laws and falling costs, its concerns about DeepSeek and Chinese AI development, and the controversy surrounding its recommendations on copyright and government collaboration.

The Growth of AI: Impressive Statistics and Economic Opportunities

More than 7 in 10 US parents believe their children will be worse off financially than they themselves are. At the same time, 9 in 10 think it is important for their kids to learn how to use AI for their future jobs, yet 8 in 10 say this is not happening, or don't know whether it is.

Additionally, 3 in 4 college-aged AI users want to use AI in their education, but the formal education system is not keeping up with the pace of AI advancement.

OpenAI's analysis also highlights the scaling laws and cost reductions driving AI development. The intelligence of an AI model roughly equals the log of the resources used to train and run it, and recent models have shown that intelligence can also scale with inference compute. Furthermore, the cost of achieving a given level of capability falls by about 10x every 12 months, leading to much more widespread use.

The time it takes for AI to surpass human performance on benchmarks has also dramatically decreased, from 20 years to just 1-2 years in some cases. This rapid progress underscores the significant economic opportunities presented by the growth of AI technology.
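The two empirical claims above can be expressed as simple formulas. The following is a minimal Python sketch with illustrative constants (the scale factor and prices are placeholders, not figures from the report):

```python
import math

def intelligence(resources, k=1.0):
    """Capability grows roughly with the log of the resources used to
    train and run the model (k is an illustrative scale factor)."""
    return k * math.log10(resources)

def cost(initial_cost, months):
    """The cost of a fixed capability level falls ~10x every 12 months."""
    return initial_cost * 10 ** (-months / 12)

# 10x more compute buys a constant increment of "intelligence":
print(intelligence(1e24) - intelligence(1e23))  # 1.0

# A hypothetical $10 per-million-token price after one and two years:
print(cost(10.0, 12))  # 1.0
print(cost(10.0, 24))  # 0.1
```

Note what the logarithm implies: each constant gain in capability requires a multiplicative, not additive, increase in resources, which is why the report frames continued scaling as a question of investment.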

Scaling Laws and Cost Reductions in AI Training and Inference

The report highlights several interesting insights regarding the scaling laws and cost reductions in AI training and inference:

  1. Intelligence Scales with Resources: The report states that the intelligence of an AI model roughly equals the log of the resources used to train and run it. Until recently, scaling progress was primarily driven by increases in training compute and data. However, recent reasoning models like OpenAI's o1 have shown that intelligence can also scale from inference compute.

  2. Precise Scaling Laws: The scaling laws that predict these gains in intelligence are incredibly precise, holding true over many orders of magnitude. This is why the report encourages more investment in AI, as increased spending on training and test-time compute will result in more capable models.

  3. Rapid Cost Reductions: The cost of achieving a given level of AI capability falls by around 10x every 12 months. This rapid decrease in prices leads to much more widespread use of AI. The report cites the example of a roughly 150x reduction in per-token cost between GPT-4 in early 2023 and GPT-4o in mid-2024.

  4. Accelerating Benchmark Improvements: The typical time it takes for a computer to beat humans at a given benchmark has fallen from 20 years after the benchmark's introduction to just 1-2 years. The report sees no reason why this trend of rapid advancement will stop in the near future, citing the example of o3's dramatic improvement on the ARC-AGI benchmark.

In summary, the report highlights the impressive scaling laws, rapid cost reductions, and accelerating benchmark improvements in the field of AI, which it argues justify increased investment and a more proactive approach to shaping the future of AI development.

The Rapid Advancement of AI Benchmark Performance

The report highlights the rapid progress in AI benchmark performance over time. It notes that the typical time it takes for a computer to beat humans at a given benchmark has fallen dramatically: from 20 years after the benchmark's introduction, to 5 years, and now to just 1-2 years. This trend is expected to continue in the near future.

The report cites the example of the ARC-AGI benchmark, where OpenAI's o3 model showed dramatic improvement in performance, albeit at a much higher test-time compute cost. This rapid advancement in AI capabilities is seen as a key driver of the competition and race towards Artificial General Intelligence (AGI).

The report argues that this rapid progress is enabled by the scaling laws that predict the gains in intelligence as a function of the resources used to train and run the AI models. It suggests that continued investment in training compute and test-time compute will result in more capable AI models, with a significant social and economic impact.

OpenAI's Concerns About DeepSeek and Chinese AI Development

OpenAI has expressed concerns about the recent release of DeepSeek's R1 model, not because of its capabilities, which they acknowledge are on par with several US models, but as a gauge of the state of the AI competition and the race towards Artificial General Intelligence (AGI).

They argue that the R1 model is a concern because DeepSeek is a state-subsidized and state-controlled entity, and the Chinese government could compel DeepSeek to manipulate its models to cause harm. In doing so, OpenAI intentionally conflates the DeepSeek R1 model itself, which is open-source and can be deployed independently, with the hosted APIs provided by DeepSeek.

OpenAI also believes that China has several advantages, including its ability to marshal resources, data, energy, and technical talent, along with the enormous sums needed to build out its domestic chip development. They argue that US regulations, even at the state level, may weaken the quality and level of training data available to American entrepreneurs and reduce the usefulness of AI to downstream consumers and businesses.

Furthermore, OpenAI claims that the People's Republic of China is unlikely to respect the IP regimes of other nations, and may already have access to the same data, putting American AI labs at a comparative disadvantage. They propose a holistic approach that enables voluntary partnerships between the federal government and the private sector to neutralize the potential benefits the PRC may gain from American AI companies having to comply with state laws.

Conclusion

OpenAI's proposal presents a concerning vision for the future of AI development. Their stance against models from China, desire for unfettered access to copyrighted materials, and push for closer government collaboration raise significant ethical and practical concerns.

The proposal's emphasis on "Democratic AI" and framing the issue as a US vs. China competition is troubling. AI development should prioritize the wellbeing of all people, not geopolitical dominance. Restricting access to models and data based on national origin is antithetical to the collaborative, open nature that has driven AI progress.

OpenAI's copyright strategy, which seeks to override creators' rights in the name of the "freedom to learn," undermines the very principles of intellectual property that enable innovation. Allowing AI systems to freely extract and leverage copyrighted content without permission sets a dangerous precedent.

The call for deeper government integration, including with defense agencies, further blurs the line between private industry and state interests. This could lead to the development of AI systems that prioritize national security over broader societal benefits.

In summary, OpenAI's proposal reflects a worrying shift towards techno-nationalism, corporate consolidation of power, and the erosion of fundamental rights. The AI community must vigilantly defend principles of openness, ethics, and the public good in the face of such concerning trends.

FAQ