DeepSeek is a catchy name for the Chinese startup that recently announced the launch of its AI “reasoning” model, R1. The name conveys the promise of AI to use “deep learning” to analyse large blocks of data to help solve a vast array of problems.
Some commentators have dubbed the release of the AI “a Sputnik moment” – referencing the first artificial Earth satellite, launched in 1957 by the Soviet Union, which triggered the space race – conveying the momentous impact of the venture.
Many people have been quick to draw conclusions about the significance of the announcement. However, it’s important to critically assess the claims made about the alleged breakthrough, including the language used to frame related news stories.
In my own research, I’ve found that media coverage of new and emerging technologies – such as medical genetics, cloning, nanotechnologies, and digital health technologies – is rich with metaphors and analogies drawn from popular culture. This has been evident in news reporting on DeepSeek’s AI.
Metaphors and analogies are used by journalists and scientists to help the public understand the nature and implications of technology. They serve to simplify complex issues for lay audiences.
However, their use may mislead the public by obscuring complexities, and by raising people’s expectations and fears to a level not warranted by the evidence.
Media framing context
The media coverage of DeepSeek’s AI needs to be understood in historical and socio-political context. This context shapes how stories are framed: the “schemata of interpretation”, or interpretive models, that determine what should demand our attention.
In analysing media frames, what is left out of the picture is as important, if not more important, than what is portrayed. The early framing of events, when their significance is still uncertain and highly contested, can profoundly shape subsequent public responses and policies.
This was apparent with the announcement of the cloning of Dolly the sheep in 1997, when initial fears that the technology might be used to clone humans – despite there being no evidence that this could occur – soon led to legislation to ban human cloning in many jurisdictions.
Other examples that illustrate the significance of the early media framing of tech announcements for subsequent responses can be cited, including news media coverage of GM crops and nanotechnologies in the early 2000s.
The ‘breakthrough’ question
In early media coverage of DeepSeek’s AI, much debate has focused on the question of whether the technology represents a genuine “breakthrough”, as assessed by technical questions such as the efficiency of the model and the number of chips used to “power” the technology.
For example, it’s been reported that DeepSeek’s AI model “is believed to be nearly as powerful as American rivals, but far more efficient”, using 2,000 Nvidia chips, far fewer than the 16,000 used by leading US counterparts.
If this claim can be verified – and doubts have been raised about both this and the actual investment costs – this would appear to represent a disruption to the established business model of US big tech companies.
Early coverage of the claimed breakthrough is underpinned by the assumption that there’s a generally agreed definition of AI, which is not the case.
Clearly, there’s much at stake in the quest to frame the newly-announced model of AI as either a “breakthrough” or not, particularly in regard to AI becoming increasingly “human-like”, sentient, or “intelligent”, as seen in the field of “affective computing”.
The ability to scale innovations and demonstrate efficiencies is of crucial importance, since a technology that does not represent a significant advance in terms of “intelligence” (however this is measured) and efficiency will fail to find a market, and hence will not generate profits and other promised benefits.
Interestingly, DeepSeek’s AI announcement was made soon after US President Donald Trump’s announcement of a US$500 billion investment in AI infrastructure.
![AI chatbot – artificial Intelligence concept](https://res.cloudinary.com/cognitives-s3/image/upload/c_limit,dpr_auto,f_auto,fl_lossy,q_75/v1/cog-live/n/1271/2025/Feb/04/h5uIo3rSgDBEanHfPcVR.jpg)
Coincidence, or strategic?
Is this just coincidence, or could it be that DeepSeek, a company that is ultimately accountable to the Chinese Communist Party (and is reported to censor answers on sensitive Chinese topics such as Taiwan), timed the news release to emphasise the country’s technological (and by implication, military) superiority over the US?
As an item in Time notes: “In 2023, China issued regulations requiring companies to conduct a security review and obtain approvals before their products can be publicly launched.”
Indicative of concerns regarding the app’s data collection, Australian public servants have now been ordered to delete DeepSeek from all government-issued devices.
DeepSeek’s announcement of the release of its AI as an “open-source product” – meaning that the system is freely available to study, use and share – has also attracted much media attention.
Access to the “black box”, or inner workings, of AI – that is, its open-source availability – is portrayed as part of the alleged innovation, and implicitly as a threat to the US lead in, and monopolisation of, AI research and intellectual property.
As the history of AI makes clear, there’s been growing rivalry between the US and China in their efforts to gain the edge in the “AI arms race”, on the one hand, and between big tech companies that aim to create and dominate a market in “human-level AI” (in a “winner-takes-all” scenario), on the other.
Until the announcement of DeepSeek’s most recent R1 model, North American big tech companies had been assumed to “lead the race”.
But the company’s new models (“V3” in December 2024 and “R1” in January 2025) brought that into question, with reports that they “wiped around a trillion dollars off the market capitalisation of America’s listed tech firms”, and that the chipmaker Nvidia saw its value fall by US$600 billion.
In recent years, the US has sought to prevent China from attaining the capacity to manufacture chips, both through “banning exports of the necessary equipment and threatening penalties for non-American firms that might help, too”.
AI’s uncertain path
Regardless of the veracity of the various claims about DeepSeek’s model, the future path of AI development will remain uncertain. Many early promises will fail to be fulfilled, and innovations will develop in unimagined ways.
The hype cycle is a well-known phenomenon in the field of technological development: initial hopes and expectations for innovations peak and then decline into a “trough of disillusionment”, after which technologies either fail to evolve, become “normalised” as they begin to find application (assuming they prove their benefits), or are repurposed, perhaps with more modest applications than originally envisaged.
There’s already evidence that the AI investment boom may be nearing its end. Some commentators have begun to question the benefits of huge AI investment in data centres, chips and other infrastructure, with at least one writer arguing that “this spending has little to show for it so far”.
Hype can only be sustained under certain conditions, and the dynamics of technological expectations are likely to wax and wane over time in response to changing socio-political contexts.
Increasing public concerns about the role of AI in daily life – including intrusions on privacy, identity theft, and the creation of deepfakes – along with the growing shortage of training data and the huge environmental and financial costs of storing and analysing data, may quickly derail what’s been depicted as inexorable progress towards an imagined future artificial general intelligence.
In short, one should be wary of hype in media coverage of “breakthroughs” in AI such as DeepSeek, and seek guidance from the history of technology developments before subscribing to either gloomy or optimistic predictions about a future that is inherently uncertain.