Even though there are no confirmed cases of coronavirus infection in Montenegro, the competent institutions have ordered the closure, for at least 15 days, of discotheques and nightclubs, hospitality businesses (except those that deliver food or offer takeaway), children's playrooms, fitness centers, gyms, shopping malls, and betting shops.

Montenegrin health institutions have tested 88 people so far, and all tests were negative for coronavirus, while 783 people are under medical supervision, according to representatives of Montenegro’s epidemiological services.

Alongside the coronavirus pandemic, media, social networks, and individuals are being flooded with disinformation about the disease and bad advice on how to protect against it. The societies of the Western Balkans are not spared from this global trend either.

The data we obtained indicate that 59,000 articles and posts directly related to the coronavirus were published in the online environments of Serbia and Montenegro in the short period from March 10 to March 16.

The World Health Organization has also warned that the spread of fake news is as dangerous as the spread of the virus itself. At the end of February, it described the wave of information surrounding the coronavirus as an infodemic – a term that implies the spread of conspiracy theories and information that is not scientifically based.

Follow the information and announcements from relevant addresses

World Health Organization – The WHO offers daily news on the pandemic, instructions, and information on the spread of the disease. Its live map showing confirmed cases around the world is useful for understanding the spread of the virus at a global level.

Institute for Public Health and Ministry of Health of Montenegro – On the websites of the IPH and the Ministry of Health, you can find all the information about COVID-19: advice, recommendations, hour-by-hour data on Montenegro, and more. They are very prompt on their official Twitter accounts, so we recommend that you follow their posts and ask questions if you have any.

European Centre for Disease Prevention and Control – On its website you can also find relevant information about the virus, as well as the measures the EU is taking on this issue.

Reliable addresses

You should follow data, announcements, and analyses coming from fact-checking organizations. We recommend the following sites and organizations that check the truthfulness of information about COVID-19 at the global and regional level:

How to check information yourself

First Draft Resources for Reporters – On their blog, you can find useful information and advice, as well as an archive of already debunked disinformation on this subject. At this link, you can find five ways to double-check the information served to us.

Open-source tools – Here you can find numerous useful and categorized tools for checking information whose truthfulness you question.

WHO vs. COVID-19 myths – On the official website of the World Health Organization, you can find myths about the new virus that its experts have debunked at the global level and compare them with the information you have.

Do not post it if you are not sure

If you are not sure whether a piece of information is true, do not share it on social media. The spread of fake news is as dangerous as the situation we are in. Therefore, use the above-mentioned sources and links to check the credibility of information, and if you are still not sure, ask the DFC team. You can contact us on Facebook, Twitter, or by email: [email protected].

The modern age has brought us into a new reality in which social networks and media create and promote narratives and information that undermine the legitimacy not only of Montenegrin society but also of societies around the world.

The current situation regarding the Law on Freedom of Religion has deepened the crisis in relations between the Serbian Orthodox Church and the Montenegrin state. This has resulted in an intense disinformation campaign, and consequently a lack of trust in the media, with the crucial question being – who to trust?

In such conditions, working with young people, high school pupils, and university students is crucial in the fight against disinformation. Therefore, the Digital Forensic Center of the ACM organized a two-day workshop for young people interested in the training. The aim was to teach participants more about the pervasive phenomenon of disinformation: how to combat it, what psychological aspects lie behind disinformation campaigns, what the modus operandi of campaigns seen across Europe and the world is, and so on.

After providing the participants of this two-day training with theoretical knowledge, lecturers Darko Brkan from Raskrinkavanje and Milan Jovanovic from the DFC demonstrated practical exercises. Participants learned how to find information to check the accuracy of texts using open-source tools, how to determine geolocation, and finally how to write a text. Afterwards, the students completed specific exercises on their own.

Participants were also shown how a convincing lie can be constructed from information that is not easy to verify. Types of media (public and private agencies; public, commercial, and online media; social networks) were highlighted. The phenomena of bots and trolls, common misinterpretations of these terms, and the distinctions between them were also discussed.

Concrete examples were also presented of how the creation of disinformation can be grounded in political interests, and often in money.

Conclusions and recommendations:

  • Emerging forms of media manipulation include fake news and its dissemination, spin, disinformation, manipulation of facts, pseudoscience, conspiracy theories, biased reporting, and censorship.
  • To be credible, professional media must publish an impressum, contact details, and ownership and publisher information, and must be transparent in both form and content.
  • Participants showed a high level of media literacy, answering over 83% of the quiz questions correctly.
  • Participants demonstrated the ability to put theory into practice through a series of successfully completed exercises in various fields.
  • According to the participants, the focus on practical and interactive work, along with teamwork, is what raised the workshop to a higher level.
  • Montenegro needs a long-term process of raising awareness of media literacy among the youngest, as well as strengthening digital literacy, as a form of resistance to disinformation.
  • It was agreed that Media Literacy should be an integral and obligatory subject in the secondary school curriculum.
  • The participants expressed interest in further workshops on various topics, such as the influences media face when writing and publishing texts, a detailed presentation of open-source tools, or a workshop addressing security and safety on the Internet.

At a press conference held on the premises of the Atlantic Council of Montenegro on January 28, 2020, Milan Jovanovic from the Digital Forensic Center presented new research on the current disinformation campaign. The goal of the presentation was to show quantitative and qualitative results, with reference to key examples of disinformation, analysis of media narratives, and the modus operandi of these campaigns.

According to him, 35,000 articles and posts directly concerning the Law on Freedom of Religion were published on social media during the last three months.

„The campaign is coming from Russia and Serbia. It is quite indicative that, over the last three months, out of 35,000 articles concerning this Law, 20,000 came from Serbia, so it can be concluded that someone is intentionally pushing particular content that is gaining in popularity“, Jovanovic said, adding that around 9,000 pieces of information came from Bosnia and Herzegovina.

Jovanovic said that it is crucial to use social media analysis to identify the key groups and individuals posting fake news on this subject.

As he said, the research focused on concrete examples of disinformation in order to present everything that a disinformation campaign entails, not just fake news.

Jovanovic explained that this includes calls for violence, the radicalization of comments, and the use of children for political and propaganda purposes.

He stated that the presentation represents a summary of everything that has been done during the last three months.

„I think that no platform, media outlet, or social network is excluded from this campaign. Facebook, as the biggest social network, together with the media, represents the largest source of information but, unfortunately, of disinformation as well“, said Jovanovic.

Jovanovic said that analysis of the modus operandi of the disinformation campaign in Montenegro indicates that the same one was used in the countries of Southeast Europe, Ukraine, and Western countries.

„If the campaigns in the above-mentioned countries were initiated by Russia using the same modus operandi, we can conclude that this campaign is coming from the same address“, Jovanovic said.

As he explained, the problem of disinformation cannot be solved easily; otherwise, it would already have been neutralized in far more developed countries and societies.

„I think that we should always work on ourselves; this is the best answer to the question of how to be resilient to disinformation. We need to check the sources of information and not believe everything that is posted“, Jovanovic said.

He said that the protests organized over the Law on Freedom of Religion had a spiritual character at the beginning but have since been taking on a political dimension.

„At the beginning, the protests had a spiritual character, reflected in the absence of any propaganda material, flags, or other nationalist symbols. Now, everything we see suggests that the litanies are acquiring a political character“, Jovanovic said.

In his words, more attention must be devoted to educating children, and the state authorities should make a greater effort in that regard. He also added that arrests cannot be used to fight disinformation.

Words: Marjam KIPAROIDZE for Coda Story

Eyewitnesses who point their finger at innocent defendants are not liars, for they genuinely believe in the truth of their testimony, Dr. Elizabeth Loftus, a cognitive psychologist and memory scientist, said decades ago. Our memories can be changed, inextricably altered, and what we think we know, what we believe with all our hearts, is not necessarily the truth.

Together with scientists from the U.S. and Ireland, Dr. Loftus published a new study in the journal Psychological Science in which the researchers found that, after being exposed to fake news, we might create false memories.

The scientists presented several news pieces – some of them false – to more than 3,000 voters the week before the 2018 abortion referendum in Ireland and found that many voters claimed to have memories of the [made-up] stories they had read about. It’s worth noting that participants were more likely to create false memories after they came across stories consistent with their beliefs.

Dr. Elizabeth LOFTUS, cognitive psychologist and memory scientist

There’s an abundance of evidence suggesting the fallibility of our memory. Researchers have found that most of us hold false memories for many things, ranging from our own personal preferences and choices to memories of events from earlier in our lives, writes Kendra Cherry, a psychological rehabilitation specialist. 

Why it matters

With disinformation campaigns aplenty expected in 2020, the research team warns that false memories triggered by false news are likely to be a factor in an upcoming election near you.

The Irish abortion referendum was conducted amid heightened fears about the potential for bad actors to hijack the campaign with misleading content; Facebook and Google took steps to block foreign advertisers from running ads in Ireland. There was a lot of suspicion about fake news, and yet people still fell for a lot of it, Ciara M. Greene, one of the researchers, told Coda’s Isobel Cockerell.

The other noteworthy revelation in the study was voters’ reluctance to reject their recollections, even after being told their memories are predicated on fiction.

No need to fast forward to 2020

If you have been a bit suspicious that there may be a growing army of blinkered people remembering experiences that never happened and wandering the earth recruiting others into their fog, consider yourself vindicated. This is basically what Julia Shaw, a memory scientist at University College London, suggested in her TEDxBergen talk back in 2017. She said we can form false memories through something as simple as suggestion, noting: the social influence that we can have on other people’s memories is phenomenal.

In another speech that same year, she discussed how memories make the personal political. Memory creates our reality, and that defines our identity. The news we consume isn’t just filtering our reality; it’s responsible for influencing our perception of who we are as individuals.

Shaw says false memory might play a significant role in the global rise of the “make our country great again” sentiment, tapping into what psychologists call rosy retrospection. People miss the good old days that never existed.

And then there’s social media

In the late 1970s, researchers found that a piece of information seems truthful to us if it feels familiar. And we deem a story familiar if we hear it regularly.

And this is where social media comes in. What might initially be a fringe sentiment rapidly gets disseminated through various social media channels until we are exposed to it all day, every day. The algebra is simple: familiarity = believability. Remedies will require more advanced trigonometry.

Password Checkup is an extension for the Chrome browser that checks whether any of the passwords you have entered has been exposed or misused in a cyberattack or data leak known to Google.

The issue of security has never been more significant, both for users and for providers of various online services. Due to the increasingly frequent cyberattacks and frauds we experience in the digital ecosystem, directly or indirectly, companies such as Facebook, Twitter, Google, and other giants face growing daily pressure to raise their users’ security to a higher level.

One result of these efforts is Password Checkup, introduced in March this year. What is it about?

Password Checkup is an extension for the Chrome browser that checks whether any of the passwords you enter has been exposed or misused. It does not necessarily have to be your own account: it may be someone else’s account that uses the same password as you, which is very common on the Internet, compromised in a cyberattack or data leak known to Google. If you open Chrome, log in to any website, and enter a password that is no longer safe to use (because it appears in Google’s database of 4 million insecure passwords), you will get a warning.

An example of the warning displayed when a password is insecure

Installation is simple: download the Password Checkup extension from the Chrome Web Store. Once you add it, it will monitor every login to a website or service. If it finds the password insecure, it will display a red warning box suggesting that you change the password.
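
For readers curious about how such a check can work without ever sending the actual password anywhere, the sketch below uses the public Have I Been Pwned range API as an illustrative stand-in (Google’s own protocol is more involved and relies on hashed, encrypted credentials); only the first five characters of the password’s SHA-1 hash leave the machine.

```python
# Minimal sketch of a hash-prefix ("k-anonymity") breach check, NOT Google's
# Password Checkup protocol. It queries the public Have I Been Pwned range
# API: only the first 5 hex characters of the SHA-1 hash are ever sent.
import hashlib
import requests

def password_breach_count(password: str) -> int:
    """Return how many times a password appears in known breach corpora."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # The server returns every hash suffix sharing our 5-character prefix,
    # so it never learns which exact password we are checking.
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    pwd = "password123"  # hypothetical example; never hard-code real passwords
    hits = password_breach_count(pwd)
    print("Compromised!" if hits else "Not found in known breaches.", hits)
```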

According to Google, within the first month of the extension’s launch, over 21 million passwords were checked, of which 316,000 were on its list of compromised credentials.

By Alex Romero, for Disinfo Portal

This article is part two of a series by Alto Data Analytics about the workings of disinformation in the digital ecosystem.

While there are no easy fixes, there are many opportunities to mitigate and build resilience against digital disinformation. Too often, politicians, regulators, and technology companies focus their efforts on combatting fake news rather than finding ways to mend vulnerabilities in the whole digital ecosystem. The past few years have demonstrated how the internet is an incredibly effective environment through which disinformation can be embedded in the public digital sphere.

Although much of the current debate around disinformation focuses on content – the lies and confusion spread by fake news – it is the ability of those with malign intent to leverage the entire digital ecosystem which is the real challenge.

Only through understanding of the vulnerabilities inherent in the digital ecosystem will effective responses become apparent. Elucidating and addressing those vulnerabilities will require Herculean efforts from politicians, regulators, tech companies and everyone who enjoys the freedom and convenience of the digital universe. There are, however, steps that can be taken now to kickstart those efforts.

Transparency – Technology companies and regulators must push for and cooperate to increase the visibility of data flows occurring on a multi-platform level by opening up analysis of public data for the public good. In February 2019, for example, Mozilla issued an open letter to Facebook to demand transparency in the political advertising that occurred on the platform ahead of the 2019 EU elections. Several civil society organizations, groups, and companies, including Alto Analytics, supported this open letter. Facebook responded to this call by announcing that they would open up their political advertising archive in March 2019. Unfortunately, the quality and accessibility of the Facebook data is far from ideal. New laws and regulations to mandate active transparency would be helpful.

Restrictions – Restrictions and regulations on explicit hate speech and violent extremist content need to be strengthened and actively enforced. In 2017 Germany enacted a law widely known as NetzDG which requires social media sites to react quickly in removing hate speech, fake news, and illegal material in order to avoid potentially hefty fines. The vagueness of the criteria for what falls under this law and the concern about censorship overreach have been consistent criticisms, exemplifying the complexities of regulating online speech. But as an example of what new regulations could look like, the law is a good start.

Regulation – Regulations stipulating the accountability of digital platforms need to be clearer and more enforceable. In the wake of the 2016 US presidential election, Facebook, Twitter, and other tech giants have faced increasing scrutiny. Politicians from across the globe have called for regulations ranging from imposing platform “duty of care” obligations to totally dismantling the platforms themselves. Although regulation is only one piece of the puzzle, clarity and enforceability are essential to any real accountability.

Education – Multi-layered investments in awareness and education programs which encourage individual responsibility and safeguarding across all age and demographic groups are essential. For example, the News Literacy Project, launched in 2013, works with journalists and educators to provide students with the critical assessment skills necessary to ask the right questions and discern fact from fiction. These types of capacity-building projects must be active and robust at each level of the digital ecosystem, from the consumers of information to journalists, lawmakers, technology professionals, and others.

Redesign – A reformulation of digital business models and shareholder incentives is crucial to redesigning the attention economy. One of NiemanLab’s 2019 predictions for the future of journalism points to a shift (or return) to a model in which the prime currency of consumer value is quality journalism backed by subscriptions, not only clicks and metrics that chase time and attention.

Many media companies such as Bloomberg, Wired, BuzzFeed News, Medium, Business Insider, Quartz, and others have successfully moved to either paywalls or premium service models. The attention economy is proving to be harmful to both users and the digital environment, especially given the ease with which vulnerable users and entities can be exploited. A move toward models less dependent on time spent online and clicks could offer creative incentives for the digital economy to focus on “long-term value instead of short-term gratification,” as Gideon Lichfield of the MIT Technology Review states.

Journalism – Capacity building and support for both legitimate fact-checking organizations and entities that track companies’ public good accountability need increased attention and investment. A community media initiative known as the Listening Post Collective aims to provide journalists, newsrooms, and non-profits the tools and advice to create meaningful conversations in their communities. Facilitating these conversations involves listening in order to engage in journalism in ways that respond to communities’ informational needs, reflect their lives, and enable them to make informed decisions.

Another global project known as First Draft fights misinformation and disinformation through fieldwork, research, and educational initiatives. First Draft hosts a global verification and collaborative investigation network through their CrossCheck initiative. CrossCheck connects journalists, academia, business, and civil society worldwide in order to further dialogue and develop solutions for effective journalism in a digital ecosystem grappling with constantly evolving challenges. These are powerful examples of local and global initiatives that are fundamental to achieving collaboration and consensus on journalism’s role in mediating the disinformation landscape.

Active Defense – An effective strategy to disseminate disinformation has been to attack the legitimacy of the institutions that represent the established and authoritative consensus. The attacks on experts have been particularly corrosive to science-based areas such as climate change and vaccines.

Without support, expert institutions such as universities, academia, and in some cases government and other public institutions will struggle to maintain their reputation as trusted authorities. As a consequence, the loudest voices in public debate – whatever the facts – will prevail.

Limitations – The tactics used to target individuals and communities through advertisements, publicity, and other communications, such as those Russia successfully used in the 2016 US election, need to be better understood and actively moderated. Today, algorithmic acceleration of hateful and divisive content is more powerful than ever, making users of social platforms increasingly vulnerable to content that is intentionally targeted for their consumption.

When companies, individuals, and other entities pay to target and reach individuals with laser-sharp precision, the digital environment suffers from serious problems and imbalances.

In early 2018, Unilever and Procter and Gamble took a step in the right direction by threatening to pull ads from major digital platforms if the social media companies failed to address “toxic” online content head-on. Limiting the targeting of individuals and communities requires a multi-layered solution on the part of regulators, the business community, and the technology platforms.

The action points outlined here are a guide to what policy-makers, technology companies, academia, and other key stakeholders can do today to address some of the key issues around digital disinformation.

Some of the solutions suggested here will be complex, at least as complex as the intricate problems inherent in today’s digital ecosystem. In an environment in which all component pieces have interconnected causes and effects, it is imperative that solutions are thought of and implemented in a transversal manner, with the cooperation of many different stakeholders.

The third 360/OS[1], organized by the Atlantic Council’s DFRLab, brought together journalists, activists, innovators, and leaders from six continents, united in the fight for objective truth as a foundation of democracy.

The Digital Forensic Center’s team joined the Digital Sherlocks in London at the end of June for two days of open-source research and of combating fake news and other phenomena which today, unfortunately, shape the world we live in.

If you thought that the recently concluded European elections would be a perfect opportunity for other countries to interfere through disinformation campaigns, you would not be the only one to think so. Many experts and observers around the world saw it coming, given numerous statements by officials. That belief stemmed from the fact that many recent elections around the world (the US, Brazil, India, Colombia, Sweden...) were accompanied by some sort of information campaign coming from the East, which to a greater or lesser extent influenced the final vote.

The European elections in May were a very tempting target for somebody who wanted to interfere in our democratic processes, Sir Julian King, European Commissioner for the Security Union, said on the first panel of the first working day at 360/OS. But thanks to increased measures to protect its citizens from disinformation, he added, the EU did not see any kind of spectacular attack.

The EU brought all the member states together to work on election security and set up a rapid alert system, which allowed experts from both civil society and government administrations to look out for attempts at organized disinformation spreading and to share that information with others.

Social networks must take action against the spread of disinformation

We also learned from his speech that the EU sat down with representatives of the big social media platforms and together they agreed on a new code of practice for tackling disinformation on social media. The identification and removal of false and misleading information, as well as the empowerment of consumers and the research community to identify instances of disinformation, were incorporated into the newly created code.

Time will tell whether the new instrument will deliver results, but one thing is certain: social networks, especially Facebook, must act, as they are under growing criticism for facilitating the spread of disinformation.

People at Facebook are well aware of that. That is why Nathaniel Gleicher, Head of Cybersecurity Policy at Facebook, gave us a better understanding of how the platform perceives and addresses information operations (disinformation is not the term in use at Facebook). He presented the audience with plans for developing software on the platform that will take down fake accounts and deceptive pages even faster.

The panel entitled Open Source: Witness to a Crime featured Eliot Higgins, founder of Bellingcat, an investigative journalism network that specializes in fact-checking. Thanks to its investigations (MH17, the Skripal poisonings, bombings in Syria, Yemen, and Iraq), the network laid the foundations of research based on open sources. Together with GLAN (Global Legal Action Network), Bellingcat launched a project whose aim is to increase trust in open-source evidence.

One of the panels, Deepfakes: It’s Not What It Looks Like!, tackled the phenomenon of fabricated videos. Sam Gregory, Program Director at WITNESS, spoke about the evolution of the technology behind deepfakes and showcased, step by step, the process of creating a deepfake video. Bearing in mind the scope of the disinformation and fake news threats that actively seek to destabilize democratic societies, all participants agreed that a constant fight against this modern calamity is necessary.

Key points & recommendations

  • All countries threatened by disinformation campaigns and digital manipulation need to work together on countering the problem and on strengthening their democratic institutions, demonstrating unity, prosperity, and stability.
  • Information warfare requires communication among countries in order to raise critical awareness.
  • Working together with platforms such as Google, Facebook, Twitter, and Mozilla is necessary in order to increase transparency, to identify and delete fake and deceptive accounts and posts, and to empower users and the research community to identify instances of disinformation.
  • Propaganda is on the rise in authoritarian regimes: it is not about cutting off people’s access to digital services but about nudging their ideals and beliefs through automation, data filtering, and data surveillance.
  • The power rivalries that we are seeing playing out in cyberspace have implications for our democratic systems.
  • Technology is a tool and authoritarian leaders have learned how to use that tool.
  • Information operations are defined as any coordinated effort to manipulate or corrupt public debate to achieve a strategic goal.
  • Bad actors don’t need to use super sophisticated techniques to target and manipulate the public.
  • Big news organizations should look more at the bigger picture, understanding who the actors are and how disinformation spreads, not just stating what is true or false. It is necessary to educate their audiences on how to be skeptical.
  • Three trends of online authoritarians: intimidation, quelling dissent, online harassment to create fear.
  • Strength is in: numbers, digital literacy, healthy skepticism, knowing how to be safe on the Internet, civil engagement and digital resilience.
  • Fighting disinformation should be a civic duty, just like voting.

[1] Open Source

By Alex Romero, for Disinfo Portal

This article is part one of a two-part series by Alto Data Analytics about the workings of disinformation in the digital ecosystem.

In the months leading up to May’s European Union parliamentary elections, Alto Data Analytics researched the latest disinformation strategies in Europe’s digital public sphere. Data was collected from a wide range of public digital sources including social media, public forums, blogs, digital communities, discussion boards, news, video, wiki sites, and other sites, from mid-December 2018 to the end of May 2019 in France, Germany, Italy, Poland, and Spain.

Between December and January, the data lake included more than 4.7 billion data points indexed from over 200 million results from 20 million authors and continued to grow each month leading up to the elections.

The following series of articles will explore some of the findings and insights from the research, including analysis of the key issues, digital communities and relevant media across the networks identified, in addition to strong signals of coordinated patterns of disinformation across countries and languages.

But first, it will be useful to outline a broader view of disinformation in the digital ecosystem by outlining some of the key problems and suggesting potential remedies.

Fake news and the digital ecosystem

Much of the current debate around disinformation focuses on content – the lies and confusion spread by fake news. The Edelman Trust Barometer reported that nearly seven in 10 respondents among the general population worried about fake news or false information being used as a weapon to spread distrust.

The issue here is not fake news; it is the whole digital ecosystem. Sometimes it seems that social networks like Facebook and Twitter are the issue and that the problem will disappear once they are fixed. Unfortunately, the problem is more complex. The internet was not supposed to be used with malignant intent; as a communications platform, it has serious design flaws. The fact is that today’s digital ecosystem presents possibilities and incentives to lie at scale with lightning speed. The past few years have shown the digital ecosystem to be an incredibly effective environment through which a variety of actors can embed disinformation in the public digital sphere.

Through analysis of numerous social, political, and economic debates in Europe and the Americas over the past few years, four broad areas emerge as key to successful disinformation campaigns in the digital sphere.

Vulnerabilities and freedom of expression

The first is the soft underbelly of liberal democracy. Freedom of debate and expression is a core democratic value enabling and encouraging anyone and everyone to contribute views and debate on the broadest of political, social, economic, and cultural issues. In its European elections research, Alto Data Analytics discovered that on average, less than 0.1 percent of all users generated more than 10 percent of the public digital conversation. The vast dataset collected for the research acted as a powerful proxy of the public debate and the free expression in the digital sphere, and it helped to identify the social vulnerabilities most often exploited by those users with disproportionate abnormal activity. These users selectively focused on a reduced number of polarizing issues such as immigration or the role of multilateral organizations. Disinformation works effectively in vulnerable contexts, and if individuals lose trust in content on the internet then one of the key goals of disinformation warfare has been achieved. Finding or creating a social or economic vulnerability through active polarization of the debate is just the first stage.
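
To make the kind of concentration figure cited above concrete, here is a minimal sketch, with entirely hypothetical toy data rather than Alto’s dataset or code, of how one might compute the share of a public conversation produced by the most active 0.1 percent of authors.

```python
# Hypothetical sketch: measure how concentrated a public debate is by
# computing the share of posts produced by the top 0.1% most active authors.
from collections import Counter

def top_share(author_ids: list[str], top_fraction: float = 0.001) -> float:
    """Share of all posts written by the most active `top_fraction` of authors."""
    posts_per_author = Counter(author_ids)          # author -> number of posts
    n_top = max(1, int(len(posts_per_author) * top_fraction))
    top_posts = sum(count for _, count in posts_per_author.most_common(n_top))
    return top_posts / len(author_ids)

# Toy data: 3 hyperactive accounts among 10,000 occasional posters.
authors = [f"user{i}" for i in range(10_000)] + ["amp1", "amp2", "amp3"] * 500
print(f"Top 0.1% of authors produced {top_share(authors):.1%} of all posts")
```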

Strategically-framed narratives

The next area is the ability to frame certain narratives and issues to suit particular viewpoints, often within localized or cultural contexts. This can range from genuine discussion and exchange of opinion to strategically reframing or distorting views or reality to encourage polarization. The recent European elections research shows how malicious actors relentlessly exploited anti-migrant and immigration themes to spread disinformation as a way to attack those classed as political elites and the wider EU establishment.

New agenda-driven media

A key tool for framing the narrative is the multitude and diversity of digital publications, ranging from government-backed media houses to emerging start-ups and local sites. Such domains are often designed to look like established media outlets. Upon closer inspection, however, they are actually content blocks aimed at distortion.

Our European elections analysis uncovered a list of influential sites aimed at polarizing an issue or spreading disinformation. Some of these were not familiar to journalists covering the EU elections for the better-known media brands in the countries analyzed. They were out of sight of the mainstream media because they proliferated with high intensity within siloed digital communities insulated from most political reporters.

Many of the sites which spread disinformation are funded through advertising networks that rely on programmatic advertising. Algorithms decide which ads go to which sites in real-time, and Alto Data Analytics’ research found that both global and local companies are unwittingly funding those sites. In addition, those sites rely on traffic from social media platforms such as Twitter or Facebook and sometimes receive funding from legitimate, though opaque, crowd-funding platforms. In other words, the entire digital ecosystem can be leveraged to augment disinformation’s spread from one website to larger and more diverse audiences.

Coordination across languages and geographies

The final area is the potential for massive coordinated distribution across languages, geographies, and numerous digital touchpoints that helps create the siloed digital communities which propagate disinformation. This incorporates an array of digital tools including automation, targeted advertising, Facebook, WhatsApp or Telegram Groups, or alternative social networks such as Gab.ai, to name just a few.

Techniques such as these are being used to exploit the digital ecosystem and shape the public agenda. Due to the inherent vulnerabilities and the current composition of such an ecosystem, digital disinformation presents a powerful set of possibilities with multi-level incentives and is a real threat to everyone who values democracy.

Technology can make it look as if anyone has said or done anything. Is it the next wave of (mis)information warfare?

In May 2018, a video appeared on the internet of Donald Trump offering advice to the people of Belgium on the issue of climate change. “As you know, I had the balls to withdraw from the Paris climate agreement,” he said, looking directly into the camera, “and so should you.”

The video was created by a Belgian political party, Socialistische Partij Anders, or sp.a, and posted on sp.a’s Twitter and Facebook. It provoked hundreds of comments, many expressing outrage that the American president would dare weigh in on Belgium’s climate policy.

But this anger was misdirected. The speech, it was later revealed, was nothing more than a hi-tech forgery.

Sp.a claimed that they had commissioned a production studio to use machine learning to produce what is known as a “deep fake” – a computer-generated replication of a person, in this case Trump, saying or doing things they have never said or done.

Sp.a’s intention was to use the fake video to grab people’s attention, then redirect them to an online petition calling on the Belgian government to take more urgent climate action. The video’s creators later said they assumed that the poor quality of the fake would be enough to alert their followers to its inauthenticity. “It is clear from the lip movements that this is not a genuine speech by Trump,” a spokesperson for sp.a told Politico.

As it became clear that their practical joke had gone awry, sp.a’s social media team went into damage control. “Hi Theo, this is a playful video. Trump didn’t really make these statements.” “Hey, Dirk, this video is supposed to be a joke. Trump didn’t really say this.”

The party’s communications team had clearly underestimated the power of their forgery, or perhaps overestimated the judiciousness of their audience. Either way, this small, left-leaning political party had, perhaps unwittingly, provided a deeply troubling example of the use of manipulated video online in an explicitly political context.

It was a small-scale demonstration of how this technology might be used to threaten our already vulnerable information ecosystem – and perhaps undermine the possibility of a reliable, shared reality.

Expert opinions

Danielle Citron, a professor of law at the University of Maryland, along with her colleague Bobby Chesney, began working on a report outlining the extent of the potential danger. As well as considering the threat to privacy and national security, both scholars became increasingly concerned that the proliferation of deep fakes could catastrophically erode trust between different factions of society in an already polarized political climate.

In particular, they could foresee deep fakes being exploited by purveyors of “fake news”. Anyone with access to this technology – from state-sanctioned propagandists to trolls – would be able to skew information, manipulate beliefs, and in so doing, push ideologically opposed online communities deeper into their own subjective realities.

“The marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases,” the report reads. “Deep fakes will exacerbate this problem significantly.”

Citron and Chesney are not alone in these fears. In April 2018, the film director Jordan Peele and BuzzFeed released a deep fake of Barack Obama calling Trump a “total and complete dipshit” to raise awareness about how AI-generated synthetic media might be used to distort and manipulate reality.

In September 2018, three members of Congress sent a letter to the director of national intelligence, raising the alarm about how deep fakes could be harnessed by “disinformation campaigns in our elections”.

While these disturbing hypotheticals might be easy to conjure, Tim Hwang, director of the Harvard-MIT Ethics and Governance of Artificial Intelligence Initiative, is not willing to bet on deep fakes having a high impact on elections in the near future. Hwang has been studying the spread of misinformation on online networks for a number of years, and, with the exception of the small-stakes Belgian incident, he is yet to see any examples of truly corrosive incidents of deep fakes “in the wild”.

Hwang believes that this is partly because using machine learning to generate convincing fake videos still requires a degree of expertise and lots of data. “If you are a propagandist, you want to spread your work as far as possible with the least amount of effort,” he said. “Right now, a crude Photoshop job could be just as effective as something created with machine learning.”

At the same time, Hwang acknowledges that as deep fakes become more realistic and easier to produce in the coming years, they could usher in an era of forgery qualitatively different from what we have seen before. In the past, for example, if you wanted to make a video of the president saying something he didn’t say, you needed a team of experts. Whereas today machine learning will not only automate this process, it will also probably make better forgeries.

Couple this with the fact that access to this technology will spread over the internet, and suddenly you have, as Hwang put it, “a perfect storm of misinformation”.

Technology on the rise

Nonetheless, research into machine learning-powered synthetic media forges ahead.

To make a convincing deep fake you usually need a neural model that is trained with a lot of reference material. Generally, the larger your dataset of photos, video, or sound, the more eerily accurate the result will be. But this May, researchers at Samsung’s AI Center in Moscow devised a method to train a model to animate a face from an extremely limited dataset: just a single photo, and the results are surprisingly good.

The researchers were able to create the “photorealistic talking head models” using convolutional neural networks: they trained the algorithm on a large dataset of talking head videos with a wide variety of appearances. In this case, they used the publicly available VoxCeleb databases containing more than 7,000 images of celebrities from YouTube videos.

This trains the program to identify what they call “landmark” features of the faces: eyes, mouth shapes, the length and shape of a nose bridge.
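
To get a feel for what such “landmark” features look like in practice, the short sketch below extracts them from a single photo with the open-source face_recognition library; this is purely an illustrative tool choice, not the software used by the Samsung researchers.

```python
# Illustrative sketch: extract facial "landmark" features (eyes, nose bridge,
# lips, chin...) from a single photo using the open-source face_recognition
# library. This is not the Samsung model, just a way to see what landmark
# detection produces.
import face_recognition

image = face_recognition.load_image_file("portrait.jpg")  # hypothetical file
faces = face_recognition.face_landmarks(image)

for face in faces:
    # Each face is a dict mapping a feature name to a list of (x, y) points.
    for feature, points in face.items():
        print(f"{feature}: {len(points)} points, e.g. {points[0]}")
```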

This, in a way, is a leap beyond what even deep fakes and other algorithms using generative adversarial networks can accomplish. Instead of teaching the algorithm to paste one face onto another using a catalogue of expressions from one person, they use the facial features that are common across most humans to then puppeteer a new face.

As the team proves, its model even works on the Mona Lisa, and other single-photo still portraits. In the video, famous portraits of Albert Einstein, Fyodor Dostoyevsky, and Marilyn Monroe come to life as if they’re Live Photos in your iPhone’s camera roll. But like with most deep fakes, it’s pretty easy to see the seams at this stage. Most of the faces are surrounded by visual artifacts.

New detection methods

As the threat of deep fakes intensifies, so do efforts to produce new detection methods. In June 2018, researchers from the University at Albany (SUNY) published a paper outlining how fake videos could be identified by a lack of blinking in synthetic subjects. Facebook has also committed to developing machine learning models to detect deep fakes.
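
One common way to operationalize blink detection is the eye aspect ratio: the vertical distances between eyelid landmarks collapse when the eye closes, while the horizontal width stays roughly constant, so a subject who never blinks produces a suspiciously flat signal over time. The sketch below is a minimal illustration of that idea; the landmark ordering and threshold are assumptions made for the example, not the SUNY authors’ code.

```python
# Illustrative sketch of the eye-aspect-ratio (EAR) idea behind blink
# detection: the ratio drops sharply while the eye is closed, so a subject
# that never blinks produces a suspiciously flat EAR signal over time.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """`eye` is a (6, 2) array of landmarks around one eye, ordered
    corner, upper lid (x2), corner, lower lid (x2)."""
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical distance 1
    v2 = np.linalg.norm(eye[2] - eye[4])   # vertical distance 2
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal width
    return (v1 + v2) / (2.0 * h)

def blink_count(ear_series: list[float], threshold: float = 0.2) -> int:
    """Count closures: transitions from open (above threshold) to closed."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < threshold and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= threshold:
            closed = False
    return blinks

# Toy per-frame EAR values: an open eye (~0.3) with one brief blink.
print(blink_count([0.31, 0.30, 0.12, 0.10, 0.29, 0.32]))  # -> 1
```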

But Hany Farid, professor of computer science at the University of California, Berkeley, is wary. Relying on forensic detection alone to combat deep fakes is becoming less viable, he believes, due to the rate at which machine learning techniques can circumvent them. “It used to be that we’d have a couple of years between coming up with a detection technique and the forgers working around it. Now it only takes two to three months.”

This, he explains, is due to the flexibility of machine learning. “All the programmer has to do is update the algorithm to look for, say, changes of color in the face that correspond with the heartbeat, and then suddenly, the fakes incorporate this once imperceptible sign.”

Although Farid is locked in this technical cat-and-mouse game with deep fake creators, he is aware that the solution does not lie in new technology alone. “The problem isn’t just that deep fake technology is getting better,” he said. “It is that the social processes by which we collectively come to know things and hold them to be true or untrue are under threat.”

Reality apathy

Indeed, as the fake video of Trump that spread through social networks in Belgium demonstrated – a video that, it was later revealed, was not forged with machine learning technology, as sp.a claimed at first, but with an editing program called After Effects – deep fakes don’t need to be undetectable or even convincing to be believed and do damage. It is possible that the greatest threat posed by deep fakes lies not in the fake content itself, but in the mere possibility of their existence.

This is a phenomenon that scholar Aviv Ovadya has called “reality apathy”, whereby constant contact with misinformation compels people to stop trusting what they see and hear. In other words, the greatest threat isn’t that people will be deceived, but that they will come to regard everything as deception.

Recent polls indicate that trust in major institutions and the media is dropping. The proliferation of deep fakes, Ovadya says, is likely to exacerbate this trend.

According to Danielle Citron, we are already beginning to see the social ramifications of this epistemic decay. “Ultimately, deep fakes are simply amplifying what I call the liar’s dividend,” she said. “When nothing is true then the dishonest person will thrive by saying what’s true is fake.”

The Kremlin’s electoral meddling takes many different forms around the world. In several recent European elections, we have seen a variety of interference tactics, including personal attacks, hack-and-leak operations, false narratives, the amplification of sentiments, cyberattacks, and more. Of these methods, information manipulation remains a particularly pervasive tactic. Malign actors prefer techniques of information manipulation because they are effective and inexpensive. As the Kremlin’s tactics evolve, research into their scope and impact is growing. We have collected some figures and data from recent European elections and referenda that illustrate the scope of this threat.

Common information manipulation tactics

  • Pro-Kremlin actors use a variety of information manipulation tactics to influence elections. On the one hand, they propagate certain disinformation narratives – in the case of Catalonia, for example, that Spain and Europe are in deep crisis – and try to amplify negative sentiments that are already present in discussions.
  • In other cases, for example the 2017 German parliamentary elections, the meddlers side with a particular candidate or political party and seek to boost public support for them. Meanwhile, ahead of Italy’s 2018 parliamentary elections, pro-Kremlin actors tried to amplify anti-migration messages and successfully spread them, primarily through anti-immigration communities.
  • Finally, another common tactic is to create chaos and confusion by promoting contradictory narratives from both sides of a political issue, as exemplified in the 2016 Brexit referendum. Even though the Kremlin explicitly promoted the Leave campaign, Russia-linked Twitter accounts, in fact, spread both pro- and anti-Brexit narratives in the run-up to the referendum.

Amplifying via bots

  • Research shows that automated bots on social media play a significant role in election meddling. For instance, bots contributed to the rampant spread of anti-EU messages on British social media.
  • Bots constituted one-fourth of the accounts that spread the leading pro-Kremlin narratives during the unofficial Catalan referendum.

Manipulating traditional media

  • Complementing the social media tactics, pro-Kremlin actors rely on the manipulation of traditional media, namely by planting disinformation and deceptive narratives with the help of pro-Kremlin outlets. Those narratives are then picked up by other sources and can gradually gain mainstream legitimacy.
  • This approach was effective in several cases: articles by RT and Sputnik were among the most shared in Spain around the unofficial Catalan referendum. Similarly, RT and Sputnik ranked in the top 3% of the most influential media outlets in Italy ahead of the 2018 national elections.