Cambridge Disinformation Summit (2025)

Jun 27, 2025

Everyone involved with PR services must understand the findings from the 2025 Cambridge Disinformation Summit, which revealed how deliberate disinformation campaigns—driven by financial and power incentives—increasingly target women, minorities, and public figures, forcing them from public discourse through doxing and harassment. The summit highlighted how social media algorithms manipulate engagement rather than foster genuine connection, creating a crisis of truth that undermines democratic discourse.

 

Communications professionals should integrate these insights by developing disinformation threat assessments for clients, implementing early detection monitoring systems, creating crisis response protocols for coordinated attacks, and leveraging AI tools with ethical guardrails to identify false narratives. Most importantly, PR services must prioritize transparent, evidence-based messaging and community engagement strategies that build authentic trust and can withstand the scrutiny of an information environment where algorithms amplify sensational content over truth.
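The "early detection monitoring" recommendation above can be sketched in code. The following is a hypothetical, minimal illustration only; the `NarrativeMonitor` class, its thresholds, and the watch phrases are our own assumptions, not anything presented at the summit. It flags watch phrases whose daily mention volume spikes against a historical baseline, the kind of early-warning signal a PR team could wire into its existing media monitoring.

```python
# Minimal sketch of an early-detection monitor for coordinated narratives.
# All names and thresholds here are illustrative assumptions.
from collections import Counter

class NarrativeMonitor:
    """Flags phrases whose mention volume spikes relative to a baseline."""

    def __init__(self, baseline_counts, spike_ratio=3.0, min_mentions=5):
        self.baseline = baseline_counts      # phrase -> typical daily mentions
        self.spike_ratio = spike_ratio       # multiple of baseline that counts as a spike
        self.min_mentions = min_mentions     # ignore very low-volume noise

    def scan(self, todays_posts, watch_phrases):
        """Return the watch phrases that are spiking in today's posts."""
        counts = Counter()
        for post in todays_posts:
            text = post.lower()
            for phrase in watch_phrases:
                if phrase in text:
                    counts[phrase] += 1
        alerts = []
        for phrase, n in counts.items():
            base = self.baseline.get(phrase, 1)
            if n >= self.min_mentions and n / base >= self.spike_ratio:
                alerts.append(phrase)
        return sorted(alerts)
```

In practice, a real system would pull posts from platform APIs and use trained models rather than substring matching, but the core design choice is the same: compare today's volume to a baseline, and alert only when both the absolute count and the relative spike are meaningful.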

 

A 5-minute read that’s worth your time!

 

Cambridge Disinformation Summit (2025): Breakdown

 

The 2025 Cambridge Disinformation Summit was held in April at the Cambridge Judge Business School, University of Cambridge. The gathering brought together global thought leaders to discuss research on the efficacy of potential interventions aimed at mitigating the harms caused by disinformation.

 

Researchers examined policies, regulations, enforcement, fact-checking, algorithmic issues, and the balance between free speech and other considerations. The conference was organized around the following core themes:

 

  • Can specific interventions mitigate the harms of disinformation campaigns? Are such interventions scalable, and might they impose unintended consequences?

 

  • Can laws be developed or enforced against disinformation actors who have acquired outsized wealth and power?

 

  • How do influencers leverage legitimate and perceived grievances around economic or social conditions to build communities that support exploitative movements in finance, crypto, and politics?

 

The alarming and sad fact is that some individuals have financial or power incentives to intentionally disseminate false or misleading narratives to targeted audiences through select channels. Two examples are financial fraud and character assassination. 

 

At this point, we must clarify that disinformation and misinformation are distinct, although they tend to blend into each other, becoming fuzzy at their edges. Disinformation is a deliberate lie, while misinformation is more about sharing inaccurate information by mistake. So, for our purposes, we set the following definitions:  

 

  • Misinformation = unintentional spread of false information.
  • Disinformation = intentional creation and spread of false information to deceive.

 

The summit was held under the auspices of the University of Cambridge’s code on free speech, which has freedom of thought and expression as core values. The university, as stated in the code, “encourages staff, students and visitors to engage in robust, challenging, evidence-based and civil debate as a core part of academic inquiry… even if you find the viewpoints expressed to be disagreeable, unwelcome or distasteful.”

 

It was reiterated throughout the summit that it is imperative to question the veracity of narratives before they are shared or even believed. Additionally, the summit was scientifically based and devoid of any particular political bias, as tomorrow’s disinformation actors may have completely different incentives and objectives. Keeping the focus on the underlying core architecture that applies to all forms of information-fueled harm campaigns is seen as a benefit for the future.

 

Many issues were discussed at the summit; here, we will focus on the issues of free speech and the dangers associated with disinformation. 

 

Free Speech & Algorithms

 

Nora Benavidez, a civil rights and free speech attorney, moderated a discussion on freedom of access to information, during which it was pointed out that even the scientists who study disinformation are now under attack. Some downplay the harm of disinformation to maintain or gain power. They reshape the narrative to say that those who research disinformation are attacking free speech.

 

Adam Price, a member of the Welsh Senedd, explained that we’re now in a world where algorithms are protected as free speech. We must ask, for example, whether choosing the information people see during an election is a form of protected free speech. Algorithms restrict people’s freedom of access to information. The hedge against Nazism in Europe around the Second World War, he went on to explain, was to make spreading false rumors a criminal offense. Finland, Czechoslovakia, and Switzerland were the only countries that passed these kinds of laws, and they did not succumb internally to fascism.

 

His point was that we must defend democracy from its enemies by making the deliberate spread of deception for ideological reasons a criminal offense. Laws already criminalize lies in advertising and hold company directors accountable for lying while selling shares, yet in politics it has been a free-for-all.

 

MEP Alexandra Geese talked about mechanisms such as engagement-based ranking and how they limit what people can choose. Shocking content gets a lot of views, but is it freely chosen when it is presented as the only option on social media platforms?

 

Former UK MP Damian Collins observed that, unfortunately, falsehoods are now being pushed into social media feeds to capture attention, because money is made from capturing that attention. His essential point was that interfering in other countries’ elections is widely illegal, so shouldn’t platforms that enable such interference be held responsible? Without truth, you can’t have a national discourse. The crisis of truth is undermining the very fabric of democracy.

 

Threats & Harm

 

Dr Julia Ebner explained that the threat landscape is diverse: there is no single solution that covers anti-vaxxers, white supremacists, and jihadists alike. That said, women stand out. Disinformation campaigns disproportionately target them, affecting women in journalism, research, politics, and other fields.

 

For women, the scale and the nature of the threat are different, and if you’re also from a minority background, then you get even more hatred. Male supremacist trends have surged in recent years, and there’s a substantial uptick in youth radicalization. Young boys are increasingly vulnerable and susceptible to becoming part of some of these radical groups. 

 

Over the last decade, numerous disinformation campaigns have targeted researchers and journalists, who are perceived as political opponents. There has been a substantial rise in doxing, the posting of personal information such as home addresses, office addresses, phone numbers, and bank details. Over the last few years, both the scale and nature of these threats have changed significantly.

 

UNC Professor Alice Marwick explained how online harassment often prompts targeted individuals to go offline. The online threats can push people out of the public eye. The threats have been so hurtful and harmful that even high-profile public figures have left online discourse because they do not want to risk putting themselves or their families in danger. The result is that particular voices (i.e., minority voices) are pushed out of the public sphere.

 

Conclusion

 

One of the themes that arose most often at the summit was how social media algorithms don’t enhance human connection. As Substack writer Mike Brock explained, algorithms replace human connection with “engineered engagement designed to maximize screen time rather than genuine relationships. The algorithms don’t help us make better choices—they manipulate our choices through carefully calibrated dopamine hits that bypass conscious decision-making entirely.” So meeting and talking to people is essential. For, as BBC Social Media Investigations correspondent Marianna Spring discussed, social media can distort the way people behave and their notions of what’s acceptable.

 

Several methods for combating disinformation were outlined, including laws that prohibit disinformation, transparency in disinformation research, and community engagement (e.g., organizing an event to promote a banned book). But one of the most effective approaches discussed was the use of AI. Although automation, AI, and bots have exacerbated the problem by amplifying disinformation campaigns worldwide, driving up their volume, and micro-targeting audiences, AI can also be designed to mitigate some of the worst aspects of polarization and to identify disinformation.
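To make the idea of AI identifying disinformation concrete, here is a deliberately toy sketch. It is an illustrative assumption of ours, not a system described at the summit: it scores text for common disinformation *signals* (sensational framing, urgency, "forbidden knowledge" claims) and flags posts for human fact-checking. Real systems at the scale discussed use trained models rather than hand-written rules.

```python
# Toy disinformation-signal scorer. Patterns and weights are illustrative
# assumptions; production systems use trained classifiers, not rule lists.
import re

SIGNALS = {
    r"\bshocking\b": 2,                         # sensational framing
    r"\bthey don'?t want you to know\b": 3,     # "forbidden knowledge" claim
    r"\bshare before it'?s deleted\b": 3,       # manufactured urgency
    r"\b100% proof\b": 2,                       # unverifiable certainty
    r"\bmainstream media won'?t report\b": 2,   # distrust priming
}

def disinfo_signal_score(text):
    """Sum the weights of sensational/urgency markers found in the text."""
    lowered = text.lower()
    return sum(w for pat, w in SIGNALS.items() if re.search(pat, lowered))

def needs_review(text, threshold=3):
    """Flag a post for human fact-checking once signals accumulate."""
    return disinfo_signal_score(text) >= threshold
```

The design choice worth noting is the human in the loop: the score only routes a post to review, echoing the summit's emphasis on guardrails rather than letting automated systems act as the final arbiter of truth.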

 

AI can be both a danger and an effective mechanism for responding to that danger. But that depends on the people who train it, maintain it, and use it. So, yes, AI can play a critical role, but if we launch AI the way we launched social media, absent any guardrails to mitigate misuse, the problem could be magnified.

 

=================================================================

 

Resources:

 

For a more in-depth analysis of disinformation and how to combat it, see the Cambridge Summit’s YouTube Channel.

 

For an excellent explanation of how to do a disinformation threat assessment and figure out how it might affect your business, read this guide written by Matt Gorham, Managing Director at the Cyber and Privacy Innovation Institute at PwC, on understanding the vulnerability to false and fraudulent information in your organization.

 

To understand the effects of harassment, read about how international chess master Divya Deshmukh had to deal with online sexism.

 

For an intense and heart-wrenching look at the effects of disinformation on individuals, families, and communities, see the Netflix series Adolescence.
