Did you know that 90% of the data in existence today was produced in the last two years? Do you know how to use data to drive social change, tell stories or develop a business? What is data science, and what are the key digital skills we need to protect ourselves online? These and other questions were answered by renowned experts, the participants of this year’s DataFest.

The third DataFest, held in mid-November in Tbilisi, Georgia, centered on the role of data in the everyday and business life of individuals and communities. The organizers – three local organizations, ForSet, Tbilisi Startup Bureau and MaxinAI, cooperating with international partners – brought together journalists, CSO activists, marketing specialists, business professionals, government officials, data analysts, developers and designers who work with data or want to explore it and its latest trends.

The agenda encompassed 12 main themes, including data visualization, data science and artificial intelligence, data and financial technologies (FinTech), data for business and marketing, and data startups. The DFC team found the latest developments in data and cybersecurity, along with trends in open-source research, particularly important.

Raising awareness of the omnipresence of disinformation

Presenting the DFRLab’s plans and projects, Anna Pellegatta highlighted how important it is to raise awareness of the omnipresence of disinformation, and presented some valuable open-source tools for combating the phenomenon. She pointed out the many opportunities, but also the vulnerabilities, of the interconnected world we live in, and emphasized that there are those who would exploit those weaknesses for their own gain.

One of the most interesting presentations was Fake Anything: The Art of Deep Learning, held by Eyal Gruss, a machine-learning researcher from Israel. Eyal showcased recent state-of-the-art applications of generative deep-learning algorithms in image processing and language modeling, as well as the power of deepfakes, an increasingly popular topic around the globe.

By organizing practical workshops, the hosts gave participants better insight into the most important topics and enabled them to learn practical skills

A cybersecurity workshop by Mikalai Kvantiliani from Belarus helped us better understand current online trends and threats, so that we can protect our data. Eto Buziashvili and Lukas Andriukaitis from the DFRLab gave the audience better insight into coordinated inauthentic behavior and social media bots, showed how to investigate them, and presented the ways authoritarian regimes try to hide facts from the public. Andriukaitis demonstrated how the lab uses open-source intelligence to identify, expose and explain disinformation and malicious narratives online.

Turning data into interactive stories

It was concluded at the panel that, at the global level, Facebook is the platform of choice among those willing to manipulate facts, followed by Instagram and Twitter.

Marek Miller’s workshop dealt with data journalism, with reference to the increased role numerical data plays in the production and distribution of information in the digital era. Turning data into interactive stories with open-source tools like Google My Maps or Flourish proved very useful, showing us how simple the process can be. A presentation by London-based data scientist Cheuk Ting Ho taught us what kind of technology hides behind a chatbot and how one is created.

Besides the events on the main stage and the vast number of workshops over the course of three days, the hosts gave us an opportunity to try something quite unexpected. The OSCE Office for Democratic Institutions and Human Rights (ODIHR), in partnership with developers from Benetech, created Starlight Stadium, a video game whose theme is a real-life human rights monitoring mission.

Eight digital skills

Digital skills we must know and be aware of:

  • Digital identity – the ability to create and manage one’s online identity and reputation
  • Digital use – the ability to use digital devices and media, including the mastery of control in order to achieve a healthy balance between offline and online life
  • Digital safety – the ability to manage risks online (cyberbullying, radicalization, etc.) as well as problematic content (violence), and to avoid and limit these risks
  • Digital security – the ability to detect cyber threats (hacking, scams, malware), to understand best practices and to use suitable security tools for data protection
  • Digital emotional intelligence – the ability to be empathetic and build good relationships with others online
  • Digital communication – the ability to communicate and collaborate with others using digital technologies and media
  • Digital literacy – the ability to find, evaluate, utilize, share and create content
  • Digital rights – the ability to understand and uphold personal and legal rights, including rights to privacy, intellectual property, freedom of speech and protection from hate speech.

Facebook took down 50 Instagram accounts that showed links to the Internet Research Agency (IRA) and had posted 75,000 times.

An analysis of the recent disinformation campaign by Graphika, a leading data science and network analysis firm, reveals how Instagram accounts posting about US social and political issues aimed to polarise communities in swing states ahead of the 2020 US presidential election. Facebook, which is facing public pressure to crack down on election-related influence efforts on the platform, announced the takedown last week.

Facebook concluded that the operation originated from Russia and showed some links to the Internet Research Agency, the Russian troll factory that previously targeted US audiences and the United States presidential election in 2016. The IRACopyPasta campaign, which received this nickname due to its close resemblance to the previous IRA campaigns, focused on socially divisive issues like race, immigration, and Second Amendment (gun ownership) rights.

The 50 accounts claimed to represent various politically active US communities from both sides of the political spectrum, including black activists, police supporters, LGBT groups, Christian conservatives, Muslims, environmentalists, and gun-rights activists. Some supported Senator Bernie Sanders and some President Donald Trump.

Notably, a number of accounts representing both left- and right-wing views attacked Joe Biden, who is widely considered the leading Democratic presidential candidate based on current polls. This recalls a similar strategy deployed by the IRA against Hillary Clinton during the 2016 presidential election campaign. In addition, some accounts also attacked the Democratic candidates Kamala Harris and Elizabeth Warren, who are competing with Bernie Sanders for the Democratic nomination. Almost half of the 50 accounts claimed to be based in swing states, especially Florida.

But the similarities with previous IRA campaigns do not stop with the choice of targets and content. Some of the 50 Instagram accounts actually reused content originally produced by the IRA, though recreated IRA content is insufficient grounds on its own for a firm attribution. Moreover, multiple accounts shared the same content just hours apart, indicating that they were, more than likely, part of the same network.
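To make that inference concrete, here is a minimal sketch in Python (with pandas) of how an analyst might surface accounts posting identical content within hours of one another. The sample data, column names and six-hour window are invented for illustration; this is not the platforms’ actual detection method.

    import pandas as pd

    # Hypothetical sample of posts; column names are invented for this sketch.
    posts = pd.DataFrame({
        "account": ["a1", "a2", "a3", "a4"],
        "text": ["Same meme caption", "Same meme caption",
                 "Same meme caption", "Unrelated post"],
        "posted_at": pd.to_datetime([
            "2019-10-01 09:00", "2019-10-01 11:30",
            "2019-10-01 13:00", "2019-10-02 08:00",
        ]),
    })

    # Group posts by identical text; flag groups where several distinct
    # accounts posted within a short window (here, six hours).
    for text, group in posts.groupby("text"):
        spread = group["posted_at"].max() - group["posted_at"].min()
        if group["account"].nunique() > 1 and spread <= pd.Timedelta(hours=6):
            print(f"Possible coordination: {sorted(group['account'])} -> {text!r}")

Run on the sample above, this prints the three accounts that shared the same meme caption within four hours of each other. Such a match is a lead for further investigation, not proof of coordination on its own.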

Many posts only contained memes without any accompanying text, an approach possibly intended to reduce the risk of language errors – one of the most important telltale signs of inauthentic Russian-origin content. In the same vein, posts that did include text were usually a copy-paste of viral posts originally created by American accounts – hence the moniker IRACopyPasta. Screenshots of tweets from genuine Americans were also used.

Most of the posts by these Instagram accounts were about general news, probably intended to develop the brand and credibility of the online persona. In terms of reach, some posts gained hundreds of likes, although these numbers were orders of magnitude lower than the American originals they copied. Most of the accounts had fewer than 5,000 followers and only the account focusing on environmental issues had more than 20,000 followers.

False information labels

The app will add “false information” labels that obscure posts that have been debunked by Facebook’s fact-checkers, the company announced.

The labels, which will roll out over the next month, will appear on posts in Stories and in Instagram’s main feed. Users will still be able to view the original post, but they’ll have to click “See Post” to get there.

The update comes less than two weeks after the Senate Intelligence Committee released the second volume of its report on interference in the 2016 election, which called Instagram the most effective tool used by the Internet Research Agency.

Instagram will also warn users who attempt to share a post that has previously been debunked. Before the post goes live, they’ll see a notice that fact-checkers say it contains false information, with a link to more details. They can still opt to share the post with their followers, but it will appear with the “false information” label.

“In addition to clearer labels, we’re also working to take faster action to prevent misinformation from going viral, especially given that quality reporting and fact-checking takes time,” the company writes. “In many countries, including in the US, if we have signals that a piece of content is false, we temporarily reduce its distribution pending review by third-party fact-checkers.”
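As a rough illustration of the demotion the quote describes, the logic might look like the following Python sketch. The Post class, the flagged signal and the DEMOTION_FACTOR value are invented assumptions for this example, not Facebook’s real system.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Post:
        post_id: str
        base_score: float              # normal feed-ranking score
        flagged: bool = False          # signal that the content may be false
        verdict: Optional[str] = None  # fact-checker result, once available

    DEMOTION_FACTOR = 0.2  # hypothetical penalty applied while review is pending

    def ranking_score(post: Post) -> float:
        """Score used to order the feed; a lower score means less distribution."""
        if post.verdict == "false":
            return 0.0  # debunked: labeled and effectively buried
        if post.flagged and post.verdict is None:
            return post.base_score * DEMOTION_FACTOR  # demoted pending review
        return post.base_score

    print(ranking_score(Post("p1", 10.0, flagged=True)))  # 2.0 while under review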

The steps are the most aggressive that Facebook has taken to reduce the spread of viral misinformation on Instagram.

Link: https://euvsdisinfo.eu/figure-of-the-week-75000/

The “DFC 365” conference, organized by the Atlantic Council of Montenegro, was held on October 22, 2019, at the CentreVille Hotel in Podgorica. The conference served as an occasion to present a year of work by the Digital Forensic Center (DFC), a section of the Atlantic Council of Montenegro actively engaged in debunking disinformation and fake news. Additionally, there was a discussion of the key problems and challenges, with special emphasis on the forthcoming elections, both in Montenegro and in the Western Balkans.

Dr. Savo Kentera, President of the Atlantic Council of Montenegro, gave the opening speech and emphasized the importance of the Digital Forensic Center in the fight against the undermining of democratic systems.

Kentera said that Western democracies became too relaxed after the Cold War, which was a huge mistake. According to him, it is high time to shift from defense to offense and take the initiative, since a year of elections lies ahead of us and will represent fertile soil for malicious foreign influence. He also added that Montenegro is better prepared than it used to be to respond to hybrid attacks.

“I believe that we are better prepared to respond to these attacks than we were in 2016, and I think that many things are going to improve. There is high-quality coordination in the sharing of information within the system, but we should also include NGOs, companies and all those who perceive Montenegro as a part of the EU,” Kentera stated.

Reflecting on the recent messages from Brussels, he emphasized that the region must not be left waiting while the EU strengthens itself, because all of those who want to “cover” this region will eventually do so.

Finally, the President of the Atlantic Council had a clear message for all those who strive to undermine democratic systems:

“You have been trying to accomplish this and you have been successful. Keep trying as much as you want; you may win one or two battles, but you will never win the war!”

Milan Jovanović, an analyst at the DFC, explained the center’s work in his presentation and presented the publications and accomplishments of the previous year.

According to him, the main target of disinformation was not predominantly the EU, but NATO and the Army of Montenegro. Jovanović stated that campaigns against the Army of Montenegro fostered a belief that the country lost its sovereignty with NATO accession. “I am telling you openly that all of this is coming from Russia and has to do with their political agenda,” Jovanović concluded.

Ljubo Filipović, the Chief Analyst of the DFC, highlighted that in this field trust is more important than truth, and that disinformation is created specifically with this in mind. According to him, we should not engage in counter-propaganda; instead, we should check the facts.

“And first, we should check our own work, so that we do not become an easy target for the other side,” Filipović said.

Minister of Defence of Montenegro Predrag Bošković commended the cooperation between the Atlantic Council of Montenegro and the Ministry of Defence and said that Montenegro is better prepared to fight these threats today than it was in the period before NATO accession. “It is a fact that in 2016, Montenegro was the first country in Europe to experience the direct interference of external actors, through the use of different methods. Among the leading campaigns was the creation of false information, combined, of course, with various cyberattacks and a direct attempted terrorist act in Montenegro,” Minister Bošković highlighted.

U.S. Ambassador to Montenegro Judy Rising Reinke emphasized that Russia and the USA have two completely different approaches to Montenegro. According to her, Russia attempts to confuse the public and slow Montenegro’s progress, while the USA offers support in building democratic institutions. Rising Reinke commended the role of the DFC in the fight against the creation and dissemination of fake content.

Darko Brkan, President of the NGO Why Not, stated that regional cooperation is key, given that we speak the same language, and that without it we cannot go deeply into the matter. He also added that the hiding of information is itself considered disinformation.

Eto Buziashvili, Research Assistant at the Atlantic Council’s Digital Forensic Research Lab, emphasized the importance of social networks when it comes to the creation of disinformation. She said that cooperation with Twitter and Facebook is crucial, since deactivating profiles that spread disinformation in a coordinated manner helps narrow the field for the misuse of social media.

Thieves used voice-mimicking software to imitate a company executive’s speech and dupe his subordinate into sending hundreds of thousands of dollars to a secret account, the company’s insurer said, in a remarkable case that some researchers are calling one of the world’s first publicly reported artificial-intelligence heists.

The managing director of a British energy company, believing his boss was on the phone, followed orders one Friday afternoon in March to wire more than $240,000 to an account in Hungary, said representatives from the French insurance giant Euler Hermes, which declined to name the company.

The request was rather strange, the director noted later in an email, but the voice was so lifelike that he felt he had no choice but to comply. The insurer, whose case was first reported by The Wall Street Journal, provided new details on the theft to The Washington Post on Wednesday, including an email from the employee tricked by what the insurer is referring to internally as “the false Johannes”.

Now being developed by a wide range of Silicon Valley titans and AI start-ups, such voice-synthesis software can copy the rhythms and intonations of a person’s voice and be used to produce convincing speech. Tech giants such as Google and smaller firms such as the ultrarealistic voice cloning start-up Lyrebird have helped refine the resulting fakes and made the tools more widely available free for unlimited use.

New technologies and the loss of trust

But the synthetic audio and AI-generated videos, known as deepfakes, have fueled growing anxieties over how the new technologies can erode public trust, empower criminals and make traditional communication — business deals, family phone calls, presidential campaigns — that much more vulnerable to computerized manipulation.

“Criminals are going to use whatever tools enable them to achieve their objectives cheapest,” said Andrew Grotto, a fellow at Stanford University’s Cyber Policy Center and a senior director for cybersecurity policy at the White House during the Obama and Trump administrations.

“This is a technology that would have sounded exotic in the extreme 10 years ago, now being well within the range of any lay criminal who’s got creativity to spare,” Grotto added.

Developers of the technology have pointed to its positive uses, saying it can help humanize automated phone systems and help mute people speak again. But its unregulated growth has also sparked concern over its potential for fraud, targeted hacks and cybercrime.

At least three voice-mimicking frauds

Researchers at the cybersecurity firm Symantec said they have found at least three cases of executives’ voices being mimicked to swindle companies. Symantec declined to name the victim companies or say whether the Euler Hermes case was one of them, but it noted that the losses in one of the cases totaled millions of dollars.

The systems work by processing a person’s voice and breaking it down into components, like sounds or syllables, that can then be rearranged to form new phrases with similar speech patterns, pitch and tone. The insurer did not know which software was used, but a number of the systems are freely offered on the Web and require little sophistication, speech data or computing power.
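The following toy Python sketch illustrates that “break down and rearrange” idea with fixed-length waveform slices. It is only a conceptual illustration under invented parameters (sample rate, unit length, random input); real voice-cloning systems select units that match the target phrase’s sounds, pitch and tone, or generate audio with neural models, rather than rearranging slices at random.

    import numpy as np

    SAMPLE_RATE = 16_000                       # samples per second
    UNIT_MS = 50                               # length of each speech "unit" in ms
    unit_len = SAMPLE_RATE * UNIT_MS // 1000   # 800 samples per unit

    # Stand-in for a minute of recorded speech (random noise here; a real
    # system would load an actual waveform).
    recording = np.random.randn(SAMPLE_RATE * 60).astype(np.float32)

    # Break the recording down into fixed-length units ("components").
    n_units = len(recording) // unit_len
    units = recording[: n_units * unit_len].reshape(n_units, unit_len)

    # "Synthesize" a new utterance by rearranging units; a real system would
    # choose units to match the target phrase's speech patterns.
    new_order = np.random.permutation(n_units)[:40]
    synthetic = np.concatenate(units[new_order])
    print(f"Assembled {len(synthetic) / SAMPLE_RATE:.2f}s of audio from {len(new_order)} units")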

Lyrebird, for instance, advertises “the most realistic artificial voices in the world” and allows anyone to create a voice-mimicking “vocal avatar” by uploading at least a minute of real-world speech.

Adjusting to the rapid development of technologies

The company, which did not respond to requests for comment, has defended releasing the software widely, saying it will help acclimate people to the new reality of a fast-improving and inevitable technology so that society can adapt. “Imagine that we had decided not to release this technology at all,” the company wrote in an ethics statement. “Others would develop it and who knows if their intentions would be as sincere as ours.”

Saurabh Shintre, a senior researcher who studies such adversarial attacks in Symantec’s California-based research lab, said the audio-generating technology has in recent years made transformative progress because of breakthroughs in how the algorithms process data and compute results. The amount of recorded speech needed to train the voice-impersonating tools to produce compelling mimicries, he said, is also shrinking rapidly.

The technology is imperfect, and some of the faked voices wouldn’t fool a listener in a calm, collected environment, Shintre said. But in some cases, thieves have employed methods to explain the quirks away, saying the fake audio’s background noises, glitchy sounds or delayed responses are the result of the speaker’s being in an elevator or car or in a rush to catch a flight.

Applying pressure – an age-old tactic of deception

Beyond the technology’s capabilities, the thieves have also depended on age-old scam tactics to boost their effectiveness, using time pressure, such as an impending deadline, or social pressure, such as a desire to appease the boss, to make the listener move past any doubts. In some cases, criminals have targeted the financial gatekeepers in company accounting or budget departments, knowing they may have the capability to send money instantly.

“When you create a stressful situation like this for the victim, their ability to question themselves for a second — ‘Wait, what the hell is going on? Why is the CEO calling me?’ — goes away, and that lets them get away with it,” Shintre said.

Euler Hermes representatives said the company, a German energy firm’s subsidiary in Britain, contacted law enforcement but has yet to name any potential suspects.

The insurer, which sells policies to businesses covering fraud and cybercrime, said it is covering the company’s full claim.

The victim director was first called late one Friday afternoon in March, and the voice demanded he urgently wire money to a supplier in Hungary to help the company avoid late-payment fines. The fake executive referred to the director by name and sent the financial details by email.

The fraud unveiled

The director and his boss had spoken directly a number of times, said Euler Hermes spokeswoman Antje Wolters, who noted that the call was not recorded. “The software was able to imitate the voice, and not only the voice: the tonality, the punctuation, the German accent,” she said.

After the thieves made a second request, the director grew suspicious and called his boss directly. Then the thieves called back, unraveling the ruse. “The fake Johannes was demanding to speak to me whilst I was still on the phone to the real Johannes!” the director wrote in an email the insurer shared with The Post.

The money, totaling 220,000 euros, was funneled through accounts in Hungary and Mexico before being scattered elsewhere, Euler Hermes representatives said. No suspects have been named, the insurer said, and the money has disappeared.

“There’s a tension in the commercial space between wanting to make the best product and considering the bad applications that product could have,” said Charlotte Stanton, the director of the Silicon Valley office of the Carnegie Endowment for International Peace. “Researchers need to be more cautious as they release technology as powerful as voice-synthesis technology, because clearly it’s at a point where it can be misused.”

Facebook and Twitter received the lion’s share of attention in connection with Russia’s election interference in 2016. But Instagram, the photo- and video-posting platform, was more important as a vehicle for disinformation than is commonly understood, and it could become a crucial Russian instrument again next year.

Instagram’s image-oriented service makes it an ideal venue for memes, which are photos combined with short, punchy text. Memes, in turn, are an increasingly popular vehicle for phony quotes and other disinformation. Deepfake videos are another potential danger on Instagram. Made with readily available artificial intelligence tools, deepfakes seem real to the naked eye and could be used to present candidates as saying or doing things they have never said or done.

There’s more to worry about than just Instagram. As I explain in a new report published by the New York University Stern Center for Business and Human Rights, the Russians may not be the only foreign operatives targeting the US. Iranians pretending to be Americans have already jumped into the US disinformation fray. And China, which has deployed English-language disinformation against protesters in Hong Kong, could turn to the US next.

In terms of sheer volume, domestically generated disinformation – coming mostly from the US political right, but also from the left – will probably exceed foreign-sourced false content. One of the conspiracy theories likely to gain traction in coming months is that the major social media companies are conspiring with Democrats to defeat Donald Trump’s bid for re-election.

Whoever spreads disinformation meant to rile up the American electorate, Instagram will almost certainly come into play. Started in 2010, it was acquired by Facebook 18 months later for $1bn. Today, Instagram has about 1 billion users, compared with nearly 2.4 billion for Facebook, 2 billion for YouTube (which is owned by Google), and 330 million for Twitter.

In 2016, the Internet Research Agency (IRA), a notorious Russian trolling operation, enjoyed more US user engagement on Instagram (187m) than it did on any other social media platform (Facebook 77m and Twitter 73m), according to a report commissioned by the Senate intelligence committee and released in December 2018. Other observers have noted that, beyond Russian interference, domestically generated hoaxes and conspiracy theories are thriving on Instagram.

“Instagram is a hotbed for disinformation dissemination,” Otavio Freire, chief technology officer and president of SafeGuard Cyber, a social media security company, told me. “The visual nature of content makes it easier to stoke discord by speaking to audiences’ beliefs through memes. This content is easy and inexpensive to produce but more difficult to fact-check than articles from dubious sites.”

Facebook is belatedly trying to filter out some of the muck found on Instagram. In the past year, hundreds of Instagram accounts have been removed for displaying what Facebook calls “coordinated inauthentic behavior”.

In August, Facebook announced a test program that uses image-recognition and other tools to find questionable content on Instagram, which is then sent to outside fact-checkers that work with Facebook. In addition, Instagram users now for the first time can flag dubious content as they encounter it. The platform has made it easier for users to identify suspicious accounts by disclosing such information as the accounts’ location and the ads they’re running.

But Facebook and Instagram could do more. Content that fact-checkers deem to be false is removed from certain Instagram pages but not taken down altogether. In my view, once social media platforms carefully determine that material is provably false, it ought to be eliminated so that it won’t spread further. Platforms should retain a copy of the excised content in a cordoned-off archive available for research purposes to scholars, journalists and others.

Another problem is that Facebook and the other major social media companies have allowed responsibility for content decisions to be dispersed among different teams within each firm. To simplify and consolidate, each company should hire a senior official who reports to the CEO and supervises all efforts to combat disinformation.

Finally, the platforms should cooperate more than they do now to counter disinformation. Purveyors of false content, whether foreign or domestic, tend to operate across multiple platforms. To rid the coming election of as much disinformation as possible, the social media companies ought to emulate collaborative initiatives they have used to stanch the flow of child pornography and terrorist incitement.

Fair elections depend on voters making decisions informed by facts, not lies and distortions. That’s why the social media companies must do as much as possible to protect users of Instagram and the other popular platforms from disinformation.

By Paul M. Barrett, deputy director of the New York University Stern Center for Business and Human Rights

The British Broadcasting Corporation (BBC) and some of the biggest names in journalism and technology have presented their plans to help fight so-called fake news.

The new measures include a system to be used during elections and in life-threatening situations; more guidance and explanation on using social networks; and improved access to impartial resources for voters.

Companies such as Google, Twitter and Facebook helped devise the scheme.

The BBC described these moves as crucial in the fight against disinformation.

The agreement was reached after much criticism directed at big technology companies, which were seen as not doing enough to prevent the spread of fake news – from groundless scares about vaccines to stories manufactured to influence voters before elections, as in the recent vote in India.

At the beginning of summer, the BBC convened a Trusted News Summit, bringing together senior figures from major global technology companies to help tackle the problem.

The group came up with measures including:

  • Early warning system: creating a system where the organizations can exchange information on discovered disinformation that threatens human life or disrupts democratic elections.
  • Media education: a joint campaign and promotion of media education.
  • Voter information: co-operation on information that will be available to citizens around elections, so there is a way to explain how and where to vote.
  • Shared learning: particularly around high-profile elections.

BBC Director-General Tony Hall said that disinformation and so-called fake news are a threat to us all.

“At its worst, it can present a serious threat to democracy and even to people’s lives,” said Hall, adding that the summit showed a determination to take collective action to fight the problem.

Further details will be released at a later date.

A debate about data security in the mobile application FaceApp has taken over social networks and media around the world.

FaceApp was created in 2017 by Yaroslav Goncharov in Russia. The application works on a simple principle: users select an image they want to modify, upload it, choose a filter (most commonly one that makes them look older) and receive the completely modified image.
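In client-server terms, the flow described above might look like the following minimal Python sketch. The endpoint, field names and response format are hypothetical, invented purely for illustration; they are not FaceApp’s actual API.

    import requests

    API = "https://photo-app.example.com"  # hypothetical endpoint, not FaceApp's

    # Upload only the photo the user selected, together with the chosen filter.
    with open("selfie.jpg", "rb") as f:
        resp = requests.post(f"{API}/photos",
                             files={"image": f},
                             data={"filter": "old"})
    resp.raise_for_status()

    # Download the modified image the server returns (field name invented).
    result_url = resp.json()["result_url"]
    with open("selfie_old.jpg", "wb") as out:
        out.write(requests.get(result_url).content)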

Joshua Nozzi, an app developer, posted a tweet about FaceApp on July 15, warning that the application automatically uploads all photos from the camera roll, whether or not the user has approved it. Nozzi apologized publicly the following day, saying that he had posted the tweet hastily, without first testing the application. However, he maintained that it is strange for an application to require access to all photos when that is not necessary.

That tweet, and the fact that the application was made in Russia, caused panic across the world and launched an avalanche of online discussion alleging that a Russian troll farm is building a database from the uploaded photos. There is a justified fear that the photos might be used to launch fake accounts in future disinformation campaigns or to create deepfakes. Things went too far when UNILAD journalist Emma Rosemurgey wrote in her article that Joshua was working for the Russians. He denied these allegations, and the disputed passage was soon removed from the text on the UNILAD website.

Regarding the topic that has recently shaken the public, Yaroslav Goncharov issued a statement denying the claims that the application uploads all photos from the camera roll: it uploads only the one that was selected. He stated that data is not transferred to Russia – it is kept on US-controlled Amazon and Google servers. He also pointed out that, at users’ request, photos can be removed from the server, and that most photos are deleted from the server within 48 hours of uploading. He named performance and traffic as the main reasons for storing uploaded photos: the company wants to be sure that the user does not upload the same photo again for every edit. Goncharov emphasized that the company neither sells nor shares users’ data with third parties. However, can we rely on his word?

Terms of service – are we aware of what we accept?!

Another interesting issue is the terms of service of all popular applications, which users accept without reading. The lawyer Elizabeth Potts Weinstein posted a tweet stating that everyone who installs the application and accepts FaceApp’s terms of service grants it a “perpetual, irrevocable, free, worldwide license to use, reproduce, modify, adapt, publish, translate, create derivative works”. That was a trigger for a new discussion comparing the terms of service of applications such as Facebook, Twitter and Instagram. Lance Ulanoff, editor-in-chief of the website Lifewire, posted a tweet about Twitter’s terms of service, quoting: “By submitting, posting or displaying Content on or through the Services, you grant us a worldwide, non-exclusive, licence to use, copy, reproduce, process, adapt, modify, publish, transmit, display and distribute such Content in any and all media or distribution methods.” It is obvious that FaceApp’s terms of service are as disputable as those of the other applications we use on a daily basis.

Furthermore, FaceApp is similar to Guess My Age, an age-guessing application Microsoft released in 2015, which has characteristics similar to those of its Russian counterpart. The reason the security of that application was not questioned at the time might be that it was not made in Russia, which, following the events of recent years, has automatically become a synonym for danger.

The story of Claas Relotius, now a former journalist of the German weekly Der Spiegel, has shaken the foundations of professional journalism, at least as inherited by Western civilization. The reputation of the renowned German weekly, built up over seven decades, was tarnished when it came to light that its journalist had fabricated, and even invented, information and interviewees in his texts.

WHAT DO WE KNOW?

Almost all national, regional and world media wrote about the case of the journalist Claas Relotius who, as he himself admitted, fabricated information and invented or embellished statements and quotes in at least 14 texts published by Der Spiegel. He began working for one of the most influential print media in Germany in 2011, as a so-called freelancer. In this period, he published around 60 texts that captured the attention of a world public fascinated by the reach of his researcher’s spirit. Thanks to his writing he received several prestigious awards, among them CNN’s Journalist of the Year in 2014 and, just one month ago, the German Reporterpreis.

Claas Relotius with the CNN award. Source: NBCnews.com

Through an internal investigation, prompted by the suspicions of several journalists working with Relotius, the editorial board of Spiegel found that he had fabricated information in his text about the building of a wall between the US and Mexico, claiming that he saw a sign in Minnesota saying “Mexicans, keep out”. Furthermore, his texts about Muhammad Bawazir, a prisoner who refused to leave Guantanamo Bay, and about the Aleppo siblings living on the streets of Turkey, are also suspected of being fabricated. The last story cost him criminal charges, on suspicion that, through his personal email account, he sent instructions for donations for these two children that ended up in his private account. Even the very existence of the girl is subject to suspicion.

WHAT ARE THE CONSEQUENCES?

First and foremost, the credibility of every aspect of journalism – responsibility for what is written, accountability to the reader, and faith in the truth – has been shaken. The question has arisen: what were the reasons that made a reputable journalist fabricate already dramatic and sad life stories and invent statements, interviewees and characters? We would say nothing but blindness induced by the global race for exclusivity in reporting. The speed imposed by the structure of the media in the 21st century puts to the test the established postulates of journalistic ethics, in which “writing truthfully about the truth in the interest of the entire community” is no longer the top priority. A reporting dynamic that prefers speed over quality leaves the system completely unprotected against this sort of abuse and fraud.

“Jaegers Grenze”, one of the texts with fabricated content

This is also a blow to democracy as a whole. We live in a time of information and news crisis, which in turn affects the prospects of democracy and which runs parallel with the new media boom and the processes of post-democracy and post-politics. Online media have altered professional standards: there is no more impartiality, and the public is no longer a passive recipient of information. On the contrary, citizens look for information themselves out of distrust in the media, they establish alternative media, and finally “new, odd forms”, such as fake news, appear. The very definition of this phenomenon contradicts professional standards, since news is by definition truthful; otherwise, it is not news. Lazy objectivism shows itself when fake news and disinformation run a marathon while the journalist is still at the starting line – that is, when the traditional media have been sleeping peacefully, tucked in, not even aware that a marathon was organized.

How the media are going to regain the public’s trust in quality journalism will be the main issue in the years to come, not only for scholars and students of mass communication, but for all those who, through their actions and their reporting on events, try to shape citizens’ attitudes – hence, for institutions in democratic societies. The information crisis touches upon the prospects of democracy, but it also shakes democracy’s foundations. The rise of propaganda, hate speech, populism, disinformation and the spread of fake news, as well as extremism that jeopardizes stability and peace at home and abroad, are among the characteristics of the world we live in.

Strengthening the public purpose of journalism, and helping the media connect with citizens more effectively, will be a challenge in the period to come. This existential crisis requires, above all, journalists who return to the postulates of their profession and to reporting in a manner close to the public. It is also imperative to find solutions for financing journalism in the public interest. Those solutions require the political will to invest in open, connected and pluralistic systems of communication. More investment is needed in quality information and in actions against hatred, racism, disinformation and intolerance; more resources for investigative reporting; and stronger connections to ethical values in the governance and management of the media.

In a time of booming social network popularity, the traditional media have to embrace social media in order to defend the truth and spread verified and reliable information. What a group of like-minded people can do in a short time with unverified information posted on social networks, traditionally structured media cannot easily undo. “Electronic autism” creates an illusion of communication; it creates a reader who, in this conceit, believes and then spreads unverified information, often unconsciously creating a doubly false narrative whose decoding requires coordinated action guided by truthful reporting. The solution lies in strong advocacy of cooperation, media and computer literacy, in combating various forms of inequality, and in criticizing single-mindedness, disinformation and populism.


Software giant Microsoft says it has uncovered a series of cyberattacks by hackers linked to Russia targeting democratic institutions, think tanks and nonprofit organizations in Europe, highlighting concerns of possible interference in European Union elections in May.

The attacks occurred between September and December, targeting employees of the German Council on Foreign Relations and European offices of The Aspen Institute and The German Marshall Fund, the company said in a blog post.

Microsoft said the activity targeted more than 100 employee accounts in Belgium, France, Germany, Poland, Romania, and Serbia. The attacks were discovered through Microsoft’s Threat Intelligence Center and Digital Crimes Unit, the company said.

Many of the attacks originated from Strontium, one of the world’s oldest cyberespionage groups, which has been previously associated with the Russian government.

Strontium has also been called APT28, Fancy Bear, Sofacy, and Pawn Storm by a range of security firms and government officials.

Security firm CrowdStrike has said the group may be associated with the Russian military intelligence service, the GRU.

Microsoft’s cybersecurity service AccountGuard will be expanded to 12 new markets in Europe including Germany, France, and Spain, to help customers secure their accounts, the company said.

The AccountGuard service will also be available in Sweden, Denmark, the Netherlands, Finland, Estonia, Latvia, Lithuania, Portugal, and Slovakia.

The announcement comes as EU officials are bracing for attempted meddling ahead of the bloc’s elections in May when far-right parties appear set to make gains.