Just as the release of Jaws 50 years ago sparked fear and hysteria around sharks without much basis in reality, the media’s predominantly negative coverage of AI has cast the technology in a similarly poor light. But is that perception justified?
At Day Two of Tech Show London, Propeller’s Associate Director of Group Operations Hilary Gray moderated a panel that explored the factors influencing people’s fear of AI - and discussed how the narrative might be shifted. The Jaws Effect: How Bad Storytelling is Feeding AI’s Fear Machine featured:
- Serena Haththotuwa - Senior Reporter, FutureWeek
- Archie Cobb - AI Practice Lead, Sullivan & Stanley
- Mark Rodseth - CTO, MSQ DX
Dispelling myths and misconceptions
In the world of reporting, Serena Haththotuwa argued that AI being viewed as a singular concept was one of the main issues.
“News stories often frame AI as something that is threatening jobs, negatively impacting the planet and could potentially become more intelligent than humans. Although these risks deserve attention, it’s important to find the nuance and acknowledge that AI has a myriad of use cases.”
Archie Cobb added that many of the misunderstandings stem from the perception that AI is purely a technological challenge. “In reality, AI is accessible to almost everyone - provided they’re willing to upskill and learn. With lower barriers to entry, we have to ask ourselves: what makes us different as humans? These qualities include critical thinking, curiosity and the ability to control these systems.”
For Mark Rodseth, the confusion is exacerbated by the sheer range of opinions circulating in the media. “Some within the community say they’re no longer interested in language models, while wildcard characters are warning that we should be preparing for the end of the world. The fact that the narrative swings across a spectrum of extremes makes it difficult to discern what’s true.”
Overcoming our fears
Other technological developments haven't struck fear into people in the same way AI has. Asked why that is, Mark drew inspiration from Jaws.
“In the same way that Spielberg left things to the human imagination, so too is much of the worry related to unseen and amorphous concepts and capabilities. The best way to address that is to draw on people’s real-world experiences.”
So, what is the best way to shift the narrative within the media? Serena advocated for balance: “The stories about how scary AI can be aren’t necessarily wrong; there are genuine concerns. But it’s important for those stories to be balanced. The media should also shine a light on the positive consequences of AI - such as improved work-life balance, or the ability to detect cancer earlier in the medical sector.”
Returning to the Jaws analogy, Archie spoke about the ramifications of AI’s presence in our society. “In Jaws, the residents of Amity Island didn’t ask the shark to be in their waters. In the same way, the ‘AI shark’ is already swimming - so we have to shift our attention downstream. That means building the right guardrails so that models don’t spew out AI slop.”
The transparency conundrum
The panel were also challenged on whether brands and storytellers have a responsibility to disclose when AI is used.
Serena emphasised that above all else, accuracy is paramount - a principle that extends beyond journalistic circles. “There’s also a responsibility on social media platforms to vet the content that’s being published. Because these platforms are being used by everyday individuals, as opposed to solely reputable outlets, it’s difficult to monitor.”
Mark suggested that the need for transparency is influenced by the context of the situation. “If you’re presenting something as fact, it’s important to attribute AI use accordingly. There’s more license for freedom in the creative space. However, part of the problem is that the AI industry is moving so fast it’s becoming too difficult to pull back on the momentum.”
Archie framed the issue through the lens of trust. “Trust is the new currency. You have to be authentic with customers. For example, if you interact with a banking app and receive the same assistance from an agent as you would from a human, you wouldn’t mind - provided the output is acceptable in quality. In journalism, expectations are different. If people believe they’re receiving authentic content only to later find out that it’s artificially generated, trust is significantly damaged.”
How to navigate the hype
The panellists closed the session with their perspectives on how organisations can cut through the noise surrounding AI.
For Serena, the key is identifying genuine innovation. “I don’t necessarily push back on the AI hype - but I am always looking for differentiation. What is AI enabling that was impossible five years ago, be it technically or physically? Too often, AI is shoehorned into comms. That’s why I’m asking if it’s exciting, meaningful or moves the needle.”
Archie acknowledged that the pace of development means finding the answers is nigh-on impossible. “From a consultancy perspective, no one is truly on top of everything. But what keeps us competitive is our curiosity. We look at what’s in our control and use realism to guide our clients. Our approach is built on three principles: authenticity, agnosticism and the A-team. That means relying on the experience of our team, emphasising transparency and using data to navigate the future.”
Mark brought things full circle by discussing the rise of AI slop, recently named Merriam-Webster’s Word of the Year 2025. “People are developing an allergy to AI slop; it immediately causes them to lose trust in brands. As our awareness of this slop grows, we will also see a rise in the number of people who understand how to effectively use AI - versus those who are lazy and ignorant. This is why quality control matters. We still need individuals to review outputs and integrate AI in interesting ways.”