2025 kicked off with an earthquake in the tech industry. It wasn’t a new iPhone or an OpenAI announcement—it was DeepSeek-R1, an open-source artificial intelligence model that became the center of attention in just a few days. Shares of industry giants like Nvidia, Microsoft, and Google plummeted when investors realized that DeepSeek could deliver powerful, efficient AI at a fraction of the cost of closed-source models such as ChatGPT.
How did a Chinese startup achieve this? Is it truly a threat to OpenAI? And what makes DeepSeek different?
In this article, we’ll explore:
- What DeepSeek is and why it’s causing such a stir
- How it compares to ChatGPT: key differences, advantages, and limitations
- An intuitive explanation of how it was trained and how it works
- The potential impact on the future of AI and technology in general
If you’re curious about the future of artificial intelligence, read on. DeepSeek could be a game-changer.
What Is DeepSeek and Why Is Everyone Talking About It?
DeepSeek is an AI created by a Chinese startup of the same name, designed to compete directly with models like OpenAI's ChatGPT and Anthropic's Claude. The big difference? DeepSeek is open-source and, in theory, far cheaper to operate. While there are other open-source models, such as Meta's LLaMA and BLOOM, DeepSeek-R1 stands out because it not only makes its weights public but also lays out its training recipe in the open, enabling collaborative development and effectively democratizing access to AI.[Reference: Bamania, A. (2025)]
Only weeks into 2025, DeepSeek became one of the most talked-about AIs in the world, partly because DeepSeek-R1 matched or beat OpenAI's o1 on several reasoning benchmarks. But it's not just the model's performance that's causing the uproar: its existence challenges the closed, expensive business model of companies like OpenAI.
Some of the reasons DeepSeek has caught everyone’s attention:
- It’s open-source (anyone can use and modify it).
- It’s designed to be more efficient and cost-effective.
- It excels at math, coding, and logical reasoning.
- It has proven to be a serious rival to ChatGPT in advanced tasks.
But is it really the next “ChatGPT killer”? Let’s look at the good, the bad, and the ugly of DeepSeek.
The Good, the Bad, and the Ugly of DeepSeek
Like any disruptive technology, DeepSeek brings impressive advantages, areas for improvement, and unexpected challenges. Here’s the breakdown:
The Good
- Open access and low cost: As noted above, DeepSeek is open-source with significantly lower operational costs compared to closed-source models like GPT. This approach reduces barriers for smaller companies and threatens the dominance of tech giants.
- Specialist in math and programming: If you need an AI that excels at complex tasks—ranging from solving mathematical problems to generating precise code—DeepSeek is a robust choice. It has demonstrated better performance than models like ChatGPT in these key areas.
- Independence from big corporations: There’s no need to pay high subscription fees or adhere to conditions set by companies like OpenAI. You can run DeepSeek locally and tailor it to your specific needs without barriers.
The Bad
- Weakness in natural conversation: DeepSeek shines in tasks where logic and precision are crucial, such as math and programming. However, its performance in natural conversation and creativity still lags behind models like ChatGPT. This is because its training prioritized logical reasoning over human interaction, making it more of a technical than a social tool.
- Limited support and documentation: Being relatively new and open-source, DeepSeek does not yet have a large, robust community or extensive guides, which can make implementation difficult for some users.
- It’s a work in progress: Despite its potential, DeepSeek is still under development. Its adoption and maturity will depend on how global interest evolves around its open-source model.
The Ugly
- Geopolitical concerns: Because DeepSeek originates in China, it faces political and economic tensions. Some countries have expressed reservations over security and censorship, concerns highlighted by the model's refusal to answer questions about sensitive topics such as Taiwan and Tiananmen Square.[Reference: The Guardian (2025, January 28)]
- Risk of misuse: Its open-source nature fosters innovation, but also allows for custom versions without ethical safeguards—potentially used for disinformation or fraud. For instance, the model could be tweaked to ignore restrictions on harmful content, amplifying global risks.
- Market impact: DeepSeek's rise has shaken major tech companies, raising doubts about future demand for high-cost AI hardware and triggering financial-market turbulence that could ripple through the entire industry.[Reference: RTVE (2025, January 27)]
So, Is It Worth Trying DeepSeek?
My short answer is yes, but let me explain why. If you need powerful, cost-effective AI focused on logic, math, and programming, DeepSeek is a solid option. If you’re looking for conversational interaction or creativity, ChatGPT remains your best bet.
One thing is certain: DeepSeek is redefining the AI landscape, forcing a transformation in how we understand and democratize access to artificial intelligence.
How Does DeepSeek Work, and Why Is It Different?
DeepSeek isn’t just another AI chatbot. Its approach to learning and reasoning differs significantly from models like ChatGPT.
While ChatGPT uses Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF), DeepSeek opts for a more autonomous, cost-efficient approach.[Reference: Ahmed, S. (2025, January)]
How Was DeepSeek Trained?
In AI, most large models follow a similar recipe:
- Pre-training – The AI is fed huge amounts of text to learn language patterns.
- Supervised Fine-Tuning (SFT) – It’s taught to respond better on specific tasks using human examples.
- Reinforcement Learning from Human Feedback (RLHF) – Humans rate responses, and the AI adjusts based on those ratings.
DeepSeek, on the other hand, leaped straight into reinforcement learning without relying as heavily on human examples. Instead, it used a more efficient technique called Group Relative Policy Optimization (GRPO).
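To make “without relying on human examples” concrete, here is a toy rule-based reward of the kind an RL-first pipeline can use in place of human ratings. This is an illustrative sketch only: the \boxed{} answer format and the exact-match rule are assumptions for the example, not DeepSeek's published code.

```python
import re

def rule_based_reward(response: str, reference_answer: str) -> float:
    """Score a model response automatically, with no human rater:
    1.0 if the final boxed answer matches the reference, else 0.0.
    (Illustrative sketch; the boxed-answer format is an assumption.)"""
    match = re.search(r"\\boxed\{([^}]*)\}", response)
    if match and match.group(1).strip() == reference_answer:
        return 1.0
    return 0.0

print(rule_based_reward(r"... so the result is \boxed{42}", "42"))  # 1.0
print(rule_based_reward(r"... so the result is \boxed{41}", "42"))  # 0.0
```

A reward like this can grade millions of attempts at near-zero cost, which is exactly what lets reinforcement learning run without an army of human labelers.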
What Does GRPO Do?
Unlike Proximal Policy Optimization (PPO), the reinforcement-learning algorithm reportedly behind OpenAI's RLHF pipeline for models such as GPT-4, GRPO removes the need for a separate “critic” model to evaluate each response. Where PPO relies on that critic to estimate advantages and adjust the policy, GRPO samples a group of responses to the same prompt and scores each one against the others, reinforcing the responses that beat the group average and thereby cutting computational costs.[Reference: Ahmed, S. (2025, January)]
This cost cut also speeds up learning: instead of waiting for humans to review each answer, DeepSeek scores its own outputs automatically and adjusts its reasoning process accordingly.
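Here is a minimal sketch of that group-relative scoring step, the piece that replaces the critic. It is deliberately simplified: real GRPO also wraps these advantages in a clipped policy-gradient objective with a KL penalty, both omitted here.

```python
import statistics

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """GRPO's core trick: instead of asking a learned critic how good
    each answer is, score each sampled answer against its own group's
    mean and spread."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)
    if std == 0:  # every answer scored the same; nothing to prefer
        return [0.0 for _ in rewards]
    return [(r - mean) / std for r in rewards]

# Four sampled answers to the same math problem, rewarded 1.0 if correct.
rewards = [1.0, 0.0, 0.0, 1.0]
print(group_relative_advantages(rewards))  # [1.0, -1.0, -1.0, 1.0]
```

Because the advantages come from comparing answers within the same group, no second “value” network ever has to be trained or queried, and that is where the savings come from.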
How Does DeepSeek Work in Practice?
DeepSeek’s trick lies in how it reasons before responding.
- If ChatGPT is like a student who learned from textbooks and teachers’ examples,
- DeepSeek is like a curious child who learned through trial and error, without relying on prior explanations.
DeepSeek generates multiple possible answers, compares them, and picks the one that makes the most sense. Over time, it discovers more effective reasoning patterns—without needing someone to tell it what’s right or wrong. This is why it performs exceptionally well in math, coding, and logic, where there’s an objective right answer.
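As a toy illustration of that generate-and-compare loop, here is a sketch in which everything is invented for the example (the candidate answers and the verifies checker are hypothetical): score each candidate with an automatic check and keep the winner.

```python
def pick_best(candidates: list[str], score) -> str:
    """Keep the candidate answer that scores highest under an
    automatic check; no human grading involved."""
    return max(candidates, key=score)

def verifies(answer: str) -> float:
    """Toy checker for answers shaped like 'a * b = c'."""
    lhs, rhs = answer.split("=")
    a, b = (int(x) for x in lhs.split("*"))
    return 1.0 if a * b == int(rhs) else 0.0

candidates = ["12 * 13 = 156", "12 * 13 = 146", "12 * 13 = 136"]
print(pick_best(candidates, verifies))  # 12 * 13 = 156
```

In domains with a checkable right answer, such as math and code, this loop works beautifully; in open-ended conversation, where no simple checker exists, it gives far less traction, which previews the weakness discussed next.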
But What About Everyday Conversations or Creativity?
Here’s where DeepSeek still doesn’t shine as brightly as ChatGPT. Since its training focused on logical reasoning rather than human interaction, it’s not as strong at improvising stories, holding fluent conversations, or producing empathetic responses.
DeepSeek Is Like a Kid Solving Puzzles
- ChatGPT (supervised learning): Think of a child watching a tutorial video, with an adult explaining step by step how to solve the puzzle until it's complete. Result: the child does it well, but only knows how to solve that specific puzzle.
- DeepSeek (reinforcement learning with GRPO): Imagine a child trying different puzzle pieces over and over. Some fit, some don't, and the child gradually recognizes shapes that work well together. The child makes lots of mistakes at first but, with each attempt, learns how to solve any puzzle, not just that particular one.
That’s how DeepSeek learns to tackle problems without relying on human-provided examples, giving it an advantage in logical and mathematical tasks.
Why Does All This Matter?
DeepSeek shows that AI can learn without humans providing all the answers. That makes development cheaper, faster, and more scalable, and it opens the door for more people to access advanced AI without depending on the tech giants.
Key takeaways:
- More cost-effective and efficient training thanks to GRPO.
- Less need for human corrections—it learns on its own.
- DeepSeek proves there’s a new way to train AI that could reshape the future of artificial intelligence.
A Final Thought
Whenever a new technology appears and disrupts the market, the first reaction is often uncertainty or even fear. DeepSeek’s debut was no exception: it caused stock drops for major tech companies like Nvidia, Microsoft, and Google, and it threatened OpenAI’s business model. But if we look beyond the initial chaos, we see some of the best news possible for the future of artificial intelligence.
Why? Because competition fuels innovation. Every time a strong new competitor arrives, others are forced to improve. We've seen it before:
- When Linux emerged as an open-source operating system, Microsoft had to enhance Windows, and Apple made macOS more robust.
- When Android challenged the iPhone, smartphones evolved faster and became more accessible.
- When Tesla bet on electric cars, traditional manufacturers accelerated their shift to cleaner energy.
Now, with DeepSeek demonstrating that an open-source AI can compete with the most advanced closed models, OpenAI, Anthropic, and Google can’t afford to rest. They’ll need to improve, cut costs, and be more transparent if they want to stay relevant.
A More Accessible AI Future
DeepSeek isn’t just another alternative to ChatGPT—it’s the symbol of a shift in how we understand AI. Until now, this field was dominated by a few companies with colossal budgets and closed models. But DeepSeek has shown there’s another path: a more open, accessible, and affordable AI, without sacrificing performance.
What does this mean for the future?
- Fewer monopolies, more options: If DeepSeek continues to grow, companies like OpenAI will need to offer better services at more competitive prices.
- More innovation: Competition and open-source collaboration will drive faster development.
- Affordable AI for startups and developers: DeepSeek may inspire models that don’t depend on expensive servers or restrictive licenses.
- Global impact: With more available models, researchers, industries, and countries can leverage advanced AI without economic barriers.
This isn’t just OpenAI versus DeepSeek—it’s the dawn of a freer, more powerful era of AI.
Now it’s your turn: Do you think open-source models like DeepSeek will dominate the future of AI, or will there be a balance between accessibility and private control? Leave a comment and share your vision of where this technological revolution might take us. Your insights could spark the debate and help us imagine the next big step in artificial intelligence.
Cheat Sheet: ChatGPT vs. DeepSeek – Which One Should You Use?
Use DeepSeek if you need…
- Advanced logical reasoning
- Mathematical problem-solving
- Accurate code generation
- An open-source, cost-effective model
- AI that can run on more accessible servers
Use ChatGPT if you want…
- Fluid and natural conversations
- Creative text generation
- Detailed, easy-to-understand explanations
- A proprietary model with extensive documentation and dedicated support
References
- Ahmed, S. (2025, January). The Math Behind DeepSeek: A Deep Dive into Group Relative Policy Optimization (GRPO). Medium.
- Bamania, A. (2025, January). DeepSeek-R1 Beats OpenAI’s o1, Revealing All Its Training Secrets Out In The Open. Level Up Coding.
- RTVE. (2025, January 27). La irrupción de DeepSeek sacude las bolsas: Nvidia pierde 440.000 millones de dólares [DeepSeek's arrival shakes the markets: Nvidia loses $440 billion]. rtve.es.
- The Guardian. (2025, January 28). We tried out DeepSeek: It works well until we asked it about Tiananmen Square and Taiwan. theguardian.com.