Information warfare, propaganda and cultural influence - The security dimension

The effect of Russian trolls on public opinion through social media is far smaller than commonly supposed, according to a new study.

It is believed Kremlin agents orchestrated efforts to manipulate public opinion on the web, often around major political events such as the US presidential election, through dedicated accounts, or "trolls". These trolls spread disinformation and stir up discord on social media, distracting people from real issues.

Researchers from Cyprus University of Technology, University College London, and the University of Alabama analysed 27,000 tweets posted by a thousand Twitter users identified as having ties with the Russian propaganda factory the Internet Research Agency, and who were therefore likely to be state-sponsored trolls.
Link
 
The data analytics firm that worked with Donald Trump’s election team and the winning Brexit campaign harvested millions of Facebook profiles of US voters, in the tech giant’s biggest ever data breach, and used them to build a powerful software program to predict and influence choices at the ballot box.

A whistleblower has revealed to the Observer how Cambridge Analytica – a company owned by the hedge fund billionaire Robert Mercer, and headed at the time by Trump’s key adviser Steve Bannon – used personal information taken without authorisation in early 2014 to build a system that could profile individual US voters, in order to target them with personalised political advertisements.

Christopher Wylie, who worked with an academic at Cambridge University to obtain the data, told the Observer: “We exploited Facebook to harvest millions of people’s profiles. And built models to exploit what we knew about them and target their inner demons. That was the basis that the entire company was built on.”
https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election
 

St Petersburg's Internet Research Agency -- AKA "The Troll Factory" -- has been in the news since Robert Mueller indicted 13 of its employees, but it first came to public attention in 2013, when investigative reporters working for the independent newspaper Novaya Gazeta revealed that the agency was working to manipulate Russian public opinion in favor of Putin and the Kremlin and against opposition politicians by flooding Russian online discussions with thousands of "patriotic" posts made under a welter of pseudonyms.

The story of Novaya Gazeta's scoop -- and the follow-up revelations in Russia's precarious independent press -- is quite a tale, in which bravery, smarts, and dogged determination uncover a plot by a seemingly impregnable state to shore up its power.
https://boingboing.net/2018/03/26/firehoses-of-falsehood.html
 
Psychological Weapons of Mass Persuasion

When I was a teenager, my parents often asked me to come along to the store to help carry groceries. One day, as I was waiting patiently at the check-out, my mother reached for her brand new customer loyalty card. Out of curiosity, I asked the cashier what information they record. He replied that it helps them keep track of what we’re buying so that they can make tailored product recommendations. None of us knew about this. I wondered whether mining through millions of customer purchases could reveal hidden consumer preferences and it wasn’t long before the implications dawned on me: are they mailing us targeted ads?

This was almost two decades ago. I suppose the question most of us are worried about today is not all that different: how effective are micro-targeted messages? Can psychological “big data” be leveraged to make you buy products? Or, even more concerning, can such techniques be weaponized to influence the course of history, such as the outcomes of elections? On one hand, we’re faced with daily news from insiders attesting to the danger and effectiveness of micro-targeted messages based on unique “psychographic” profiles of millions of registered voters. On the other hand, academic writers, such as Brendan Nyhan, warn that the political power of targeted online ads and Russian bots is widely overblown.

In an attempt to take stock of what psychological science has to say about this, I think it is key to disentangle two prominent misunderstandings that cloud this debate.

First, we need to distinguish attempts to manipulate and influence public opinion from actual voter persuasion. Repeatedly targeting people with misinformation that is designed to appeal to their political biases may well influence public attitudes, cause moral outrage, and drive partisans further apart, especially when we’re given the false impression that everyone else in our social network is espousing the same opinion. But to what extent do these attempts to influence translate into concrete votes?

The truth is, we don’t know exactly (yet). But let’s evaluate what we do know. Classic prediction models that only contain socio-demographic data (e.g. a person’s age) aren’t very informative on their own in predicting behavior. However, piecing together various bits of demographic, behavioral, and psychological data from people, such as pages you’ve liked on Facebook, results from a personality quiz you may have taken, as well as your profile photo (which reveals information about your gender and ethnicity), can improve predictions. For example, in a prominent study with 58,000 volunteers, a Stanford researcher found that a model using Facebook likes (170 likes on average) predicted a whole range of factors, such as your gender, political affiliation, and sexual orientation, with impressive accuracy.
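
As a rough illustration of the kind of model described here (predicting a trait from a sparse matrix of page likes), below is a minimal sketch using scikit-learn. The data, page counts, and the way the "signal" is constructed are entirely made up; this only stands in for the real user-like matrix used in the study.

```python
# Minimal sketch: predicting a binary trait from a user x "likes" matrix.
# All data is synthetic; the real study used ~58,000 volunteers and real likes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_pages = 1000, 50                              # toy scale
likes = rng.integers(0, 2, size=(n_users, n_pages))      # 1 = user liked the page

# Hypothetical target: affiliation weakly driven by the first five pages plus noise
signal = likes[:, :5].sum(axis=1) + rng.normal(0, 1, n_users)
affiliation = (signal > signal.mean()).astype(int)

X_train, X_test, y_train, y_test = train_test_split(likes, affiliation, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```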

In a follow-up study, researchers showed that such digital footprints can in fact be leveraged for mass persuasion. Across three studies with over 3.5 million people, they found that psychologically tailored advertising, i.e. matching the content of a persuasive message to an individual’s broad psychographic profile, resulted in 40% more clicks and 50% more online purchases than mismatched or unpersonalized messages. This is not entirely new to psychologists: we have long known that tailored communications are more persuasive than a one-size-fits-all approach. Yet the effectiveness of large-scale digital persuasion can vary greatly and is sensitive to context. After all, online shopping is not the same thing as voting!
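
As a back-of-the-envelope illustration of what a 40% relative lift in clicks means in absolute terms: the 1% baseline click-through rate below is an assumption, only the relative lifts come from the figures quoted above.

```python
# Illustrative only: baseline CTR is assumed; 40% / 50% lifts are the quoted figures.
baseline_ctr = 0.010                     # assumed baseline click-through rate
tailored_ctr = baseline_ctr * 1.40       # "40% more clicks" for matched messages
impressions = 1_000_000

extra_clicks = impressions * (tailored_ctr - baseline_ctr)
print(f"extra clicks per {impressions:,} impressions: {extra_clicks:,.0f}")
```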

So do we know whether targeted fake news helped swing the election to Donald Trump?

Political commentators are skeptical, and for good reason: compared to selling a new shampoo, changing people’s minds on political issues is much harder, and many academic studies on political persuasion show small effects. One of the first studies on fake news exposure combined a fake news database of 156 articles with a national survey of Americans, and estimated that the average adult was exposed to just one or a few fake news articles before the election. Moreover, the researchers argue that exposure would only have changed vote shares on the order of hundredths of a percentage point. Yet, rather than digital footprints, the authors mostly relied on self-reported persuasion and recall of 15 selected fake news articles.

In contrast, other research combining national survey data with individual browser histories estimates that about 25% of American adults (65 million) visited a fake news site in the final weeks of the election. The authors report that most of the fake news consumption was pro-Trump, however, and heavily concentrated among a small ideological subgroup.

Interestingly, a recent study presented 585 former Barack Obama voters with one of three popular fake news stories (e.g. that Hillary Clinton was in poor health and approved weapon sales to Jihadists). The authors found that, controlling for other factors, such as whether respondents liked or disliked Clinton and Trump, former Obama voters who believed one or more of the fake news articles were 3.9 times more likely to defect from the Democratic ticket in 2016, including abstention. Thus, rather than focusing on just voter persuasion, this correlational evidence hints at the possibility that fake news might also lead to voter suppression. This makes sense in that the purpose of fake news is often not to convince people of “alternative facts,” but rather to sow doubt and to disengage people politically, which can undermine the democratic process, especially when society’s future hinges on small differences in voting preferences.

In fact, the second common misunderstanding revolves around the impact of “small” effects: small effects can have big consequences. For example, in a 61-million-person experiment published in Nature, researchers show that political mobilization messages delivered to Facebook users directly impacted the voting behavior of millions of people. Importantly, the effect of social transmission was greater than the direct effect of the messages themselves. Notably, the voter persuasion rate in that study was around 0.39%, which seems really small, but it actually translates into 282,000 extra votes cast. If you think about major elections, such as Brexit (51.9% vs. 48.1%), or the fact that Hillary ultimately lost the election by about 77,000 votes, such small effects suddenly matter a great deal in context.
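
A rough order-of-magnitude sketch of why such small rates matter at scale. Note that the Nature study's 282,000 figure includes both direct and socially transmitted effects, so this naive multiplication is only illustrative and not a reconstruction of the study's estimate.

```python
# Order-of-magnitude illustration only; see the caveat above.
exposed_users = 61_000_000      # users in the Nature mobilization experiment
effect_rate = 0.0039            # ~0.39% additional turnout attributed to the messages

extra_votes = exposed_users * effect_rate
decisive_margin_2016 = 77_000   # approximate decisive margin cited above
print(f"implied extra votes: {extra_votes:,.0f} "
      f"(~{extra_votes / decisive_margin_2016:.0f}x the decisive 2016 margin)")
```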

In short, it is important to remember that psychological weapons of mass persuasion do not need to be based on highly accurate models, nor do they require huge effects across the population, in order to undermine the democratic process. In addition, we are only seeing a fraction of the data, which means that scientific research may well be underestimating the influence of these tools. For example, most academic studies use self-reported survey experiments, which do not always accurately simulate the true social dynamics in which online news consumption takes place. Even when Facebook downplayed the importance of the echo chamber effect in their own Science study, the data was based on a tiny snapshot of users (i.e., those who declared their political ideology, about 4% of the total Facebook population). Furthermore, predictive analytics companies do not go through ethical review boards or run highly controlled studies using one or two messages at a time. Instead, they spend millions on testing thirty to forty thousand messages a day across many different audiences, fine-tuning their algorithms, refining their messages, and so on.
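
For a sense of what testing tens of thousands of message variants looks like mechanically, here is a minimal epsilon-greedy bandit over a handful of hypothetical variants. The variant names and response rates are invented; this is a toy sketch, not any firm's actual system.

```python
# Epsilon-greedy selection over message variants: a toy stand-in for the
# industrial-scale message testing described above. All numbers are invented.
import random

true_rates = {"variant_a": 0.010, "variant_b": 0.013, "variant_c": 0.009}  # hypothetical CTRs
clicks = {v: 0 for v in true_rates}
shows = {v: 0 for v in true_rates}
epsilon = 0.1

for _ in range(100_000):
    if random.random() < epsilon:
        variant = random.choice(list(true_rates))                      # explore
    else:                                                              # exploit best so far
        variant = max(true_rates, key=lambda v: clicks[v] / shows[v] if shows[v] else 0.0)
    shows[variant] += 1
    clicks[variant] += random.random() < true_rates[variant]

for v in true_rates:
    rate = clicks[v] / shows[v] if shows[v] else 0.0
    print(f"{v}: shown {shows[v]:>6}, observed rate {rate:.4f}")
```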

Thus, given the lack of transparency, the privatized nature of these models, and commercial interests to over-claim or downplay their effectiveness, we must remain cautious in our conclusions. The rise of Big Data offers many potential benefits for society, and my colleagues and I have tried to help establish ethical guidelines for the use of Big Data in behavioral science as well as to help inoculate and empower people to resist mass psychological persuasion. But if anything is clear, it’s the fact that we are constantly being micro-targeted based on our digital footprints, from book recommendations to song choices to which candidate you’re going to vote for. For better or worse, we are now all unwitting participants in what is likely going to be the world’s largest behavioral science experiment.
 
A state within a state

 
https://www.savonsanomat.fi/kotimaa/Lehdet-Valtioneuvoston-ex-viestintäjohtaja-Mantila-siirtyy-Naton-strategisen-viestinnän-keskukseen/1183764

Newspapers: Former Government Communications Director Mantila moves to NATO's strategic communications centre

Former Government Communications Director Markku Mantila has been appointed to a leadership position at the NATO Strategic Communications Centre of Excellence in Riga, Latvia. The appointment was reported on Monday evening by at least Aamulehti and Kaleva.

According to the newspapers, Mantila will start in August as head of the unit responsible for research and development. The posting is initially for two years, but the contract includes the possibility of an extension. According to the papers, his predecessor in the role was Dr. Antti Sillanpää.

Aamulehti and Kaleva report that the centre of excellence is an independent expert body that is not part of NATO's command structure and does not receive funding from it.

Mantila previously worked, among other roles, as the Government's Communications Director and as editor-in-chief of Kaleva and Pohjalainen. Most recently he has worked at the communications company he founded.
STT
 
Link: https://jamestown.org/program/amids...iological-weapons-in-former-soviet-republics/

Amidst ‘Chemical’ Confrontation in Syria, Russia Looks for US Biological Weapons in Former Soviet Republics

On April 12—on the eve of the United States’ and its allies’ airstrikes against Syria in response to the recent chemical attack in Douma—Russian foreign ministry press secretary Maria Zakharova stated that the US, through programs financed by the Pentagon, is creating a network of microbiological laboratories in the Caucasus and Central Asia. She added that “the very fact of the large-scale medical-biological activities of the Pentagon on the borders of Russia” causes particular concern for Moscow (Mid.ru, April 12). The timing of Zakharova’s statement is notable, occurring within weeks of the poisoning in the United Kingdom of Russian double-agent Sergey Skripal and his daughter with a Russian-produced nerve agent and soon after Bashar al-Assad’s forces attacked civilians in Douma with chlorine gas. As such, these charges of US biological weapons programs in the Caucasus and Central Asia appear to form part of Moscow’s asymmetric response to growing scrutiny over Russia’s use of or indirect enabling of chemical weapons attacks, as well as its general confrontation with the West.

Since the late 1990s, when the United States first established partnerships in biological studies with several post-Soviet republics, Moscow has repeatedly suggested that such cooperation represented a threat to Russia. The main targets of scorn for Russian officials and experts have tended to be the so-called “Lugar laboratories” in Georgia, Azerbaijan and, recently, in Kazakhstan. These biological research facilities were built as part of the Nunn-Lugar Biological Threat Reduction program, named after its sponsoring US senators, Sam Nunn and Richard Lugar. The program sought to dismantle the former Soviet Union’s massive biological weapons research, development and production infrastructure. Moreover, it aimed to prevent the proliferation of expertise, materials, equipment and technologies that could contribute to the development of biological weapons. Under the program, the US Defense Threat Reduction Agency (DTRA) has carried out bio-threat reduction projects in Russia, Kazakhstan, Uzbekistan, Georgia, Azerbaijan and Ukraine (Dtra.mil, Nap.edu, accessed April 17, 2018).

In Azerbaijan, the DTRA, in partnership with Baku, has helped upgrade a network of regional diagnostic laboratories throughout the country, created a national electronic disease reporting system, and conducted technical training in clinical, epidemiological, laboratory and veterinary practices. It has also improved biological safety and security measures. Finally, it has partnered with local scientists on endemic disease research projects as well as provided equipment, technical management and oversight support for construction of the Azerbaijani-funded Central Reference Laboratory (Az.usembassy.gov, accessed April 17, 2018; Defense.gov, April 17, 2013).

In Georgia, the DTRA helped construct the Public Health Research Center, which was recently upgraded to Biosafety Level-2 with funds from the US government (Ge.usembassy.gov, accessed April 17, 2018). Similarly, with US support, Kazakhstan carried out multiple biological security improvements. In the 1990s, it dismantled a large-scale biological weapons program; and now it is working to bolster the security of biological materials, gainfully employ those with dual-use knowledge in the biological sciences, and expand public health capacities (Belfercenter.org, January 2017). The Central Reference Laboratory opened in 2015, in Almaty; it offers high-security lab space to study dangerous diseases and provides early warnings of potential outbreaks. Additionally, the facility offers stable employment to scientists who might otherwise be tempted to sell their high-level and potentially destructive knowledge to hostile groups (Caravan.kz, September 9, 2016; National Geographic, September 13, 2013).

Moscow’s spurious accusations leveled against the aforementioned cooperation fall into three main categories: a) the United States, in cooperation with local scientists, is conducting research that could potentially be used for the production of bacteriological weapons; b) the transfer of information and samples of pathogens to the US may mean the disclosure of military and biological secrets not only of the Soviet Union, but of Russia, its legal successor; c) along with the disclosure of Soviet secrets to Washington, such activities by Georgia, Azerbaijan and other post-Soviet states threaten Russia’s interests (Vpoanalytics.com, April 2, 2017). Although these allegations have been repeatedly refuted by officials and experts of Georgia, Azerbaijan and Kazakhstan, the dissemination of negative information about these laboratories has intensified in recent years (see EDM, July 30, 2013; Apsny.ge, November 9, 2017; Haqqin.az, September 3, 2016; 365info.kz, August 5, 2015; Tengrinews.kz, January 16, 2014).

High-level Russian foreign ministry officials have accused the US of violating the Biological and Toxin Weapons Convention (RIA Novosti, May 25, 2016; Vz.ru, September 1, 2016). Moreover, Gennady Onishchenko, a former head of the Federal Service for the Protection of Consumer Rights and the country’s Chief Sanitary Physician, alleged that the DTRA-supported laboratories in the post-Soviet space are an important part of the US biological weapons program. He also stated that US military microbiologists in Georgia can purposely infect mosquitoes with the Zika virus (Cnsnews.com, October 15, 2013; Komsomolskaya Pravda, February 16, 2016).

In 2016, several media outlets in Armenia blamed the Richard Lugar Public Health Research Center in Georgia for the death of more than ten people from swine flu (H1N1), an absurdity recognized even in the Russian media (Nezavisimaya Gazeta, January 20, 2016). Negative information about the aforementioned laboratories has also appeared in Azerbaijani and Kazakhstani media (Bakumedinfo.com, May 1, 2017; Kt.kz, April 8, 2017). Recently, Dagestani journalists, led by Mukhtar Amirov, alleged that the Georgian laboratory dispersed biological weapons in Dagestan and Chechnya. Moreover, Amirov wrote up a petition, addressed to Vladimir Putin, asking the president to initiate “an investigation into the activities of the [Georgian] Lugar laboratory in connection with the threat to the biological safety of Russian citizens,” and calling for the imposition of sanctions against individuals and legal entities related to this facility (Kavkazr.com, March 30, 2018; Dag.aif.ru, February 19).

The persistent Russian narrative alleging that the US is cooperating with several former Soviet republics in violation of the Biological and Toxin Weapons Convention Treaty provides Moscow with a useful justification for putting additional pressure on its post-Soviet neighbors as well as an asymmetric tool in its confrontation with the West. In line with Soviet tradition, Moscow often uses offensive measures under the justification of defensive efforts. Indeed, in recent years, Russia repeatedly used supposed bacterial or biological threats in order to impose economic sanctions on several neighboring states (Ceps.eu, September 2014; Global Risk Insights, June 30, 2015). And in the face of its intensifying confrontation with the US, Zakharova’s recent statement blasting the “Lugar laboratories” in the former Soviet space may well have been meant as a diplomatic warning to Washington: Moscow may use this “biological threat” as a pretext to put additional pressure on Western partners in the region in retaliation to the strikes on Syria’s chemical weapons program.
 
Link: https://apnews.com/4d174e45ef5843a0...ackers-posed-as-IS-to-threaten-military-wives

Russian hackers posed as IS to threaten military wives

PARIS (AP) — Army wife Angela Ricketts was soaking in a bubble bath in her Colorado home, leafing through a memoir, when a message appeared on her iPhone:
“Dear Angela!” it said. “Bloody Valentine’s Day!”

“We know everything about you, your husband and your children,” the Facebook message continued, claiming that the hackers operating under the flag of Islamic State militants had penetrated her computer and her phone. “We’re much closer than you can even imagine.”

Ricketts was one of five military wives who received death threats from the self-styled CyberCaliphate on the morning of Feb. 10, 2015. The warnings led to days of anguished media coverage of Islamic State militants’ online reach.

Except it wasn’t IS.

The Associated Press has found evidence that the women were targeted not by jihadists but by the same Russian hacking group that intervened in the American election and exposed the emails of Hillary Clinton’s presidential campaign chairman, John Podesta.

The false flag is a case study in the difficulty of assigning blame in a world where hackers routinely borrow one another’s identities to throw investigators off track. The operation also parallels the online disinformation campaign by Russian trolls in the months leading up to the U.S. election in 2016.

Links between CyberCaliphate and the Russian hackers — typically nicknamed Fancy Bear or APT28 — have been documented previously. On both sides of the Atlantic, the consensus is that the two groups are closely related.

But that consensus never filtered through to the women involved, many of whom were convinced they had been targeted by Islamic State sympathizers right up until the AP contacted them.

“Never in a million years did I think that it was the Russians,” said Ricketts, an author and advocate for veterans and military families. She called the revelation “mind blowing.”
“It feels so hilarious and insidious at the same time.”

‘COMPLETELY NEW GROUND’

As Ricketts scrambled out of the tub to show the threat to her husband, nearly identical messages reached Lori Volkman, a deputy prosecutor based in Oregon who had won fame as a blogger after her husband deployed to the Middle East; Ashley Broadway-Mack, based in the Washington, D.C., area and head of an association for gay and lesbian military family members; and Amy Bushatz, an Alaska-based journalist who covers spouse and family issues for Military.com.

Liz Snell, the wife of a U.S. Marine, was at her husband’s retirement ceremony in California when her phone rang. The Twitter account of her charity, Military Spouses of Strength, had been hacked. It was broadcasting public threats not only to herself and the other spouses, but also to their families and then-first lady Michelle Obama.

Snell flew home to Michigan from the ceremony, took her children and checked into a Comfort Inn for two nights.

“Any time somebody threatens your family, Mama Bear comes out,” she said.

The women determined they had all received the same threats. They were also all quoted in a CNN piece about the hacking of a military Twitter feed by CyberCaliphate only a few weeks earlier. In it, they had struck a defiant tone. After they received the threats, they suspected that CyberCaliphate singled them out for retaliation.

The women refused to be intimidated.

“Fear is exactly what — at the time — we perceived ISIS wanted from military families,” said Volkman, using another term for the Islamic State group.
Volkman was quoted in half a dozen media outlets; Bushatz wrote an article describing what happened; Ricketts, interviewed as part of a Fox News segment devoted to the menace of radical Islam, told TV host Greta Van Susteren that the nature of the threat was changing.

“Military families are prepared to deal with violence that’s directed toward our soldiers,” she said. “But having it directed toward us is just complete new ground.”

‘WE MIGHT BE SURPRISED’

A few weeks after the spouses were threatened, on April 9, 2015, the signal of French broadcaster TV5 Monde went dead.

The station’s network of routers and switches had been knocked out and its internal messaging system disabled. Pasted across the station’s website and Facebook page was the keffiyeh-clad logo of CyberCaliphate.

The cyberattack shocked France, coming on the heels of jihadist massacres at the satirical magazine Charlie Hebdo and a kosher supermarket that left 17 dead. French leaders decried what they saw as another blow to the country’s media. Interior Minister Bernard Cazeneuve said evidence suggested the broadcaster was the victim of an act of terror.

But Guillaume Poupard, the chief of France’s cybersecurity agency, pointedly declined to endorse the minister’s comments when quizzed about them the day after the hack.
“We should be very prudent about the origin of the attack,” he told French radio. “We might be surprised.”

Government experts poring over the station’s stricken servers eventually vindicated Poupard’s caution, finding evidence they said pointed not to the Middle East but to Moscow.
Speaking to the AP last year, Poupard said the attack “resembles a lot what we call collectively APT28.”

Russian officials in Washington and in Moscow did not respond to questions seeking comment. The Kremlin has repeatedly denied masterminding hacks against Western targets.

‘THE MEDIA PLAYED RIGHT INTO IT’

Proof that the military wives were targeted by Russian hackers is laid out in a digital hit list provided to the AP by the cybersecurity company Secureworks last year. The AP has previously used the list of 4,700 Gmail addresses to outline the group’s espionage campaign against journalists, defense contractors and U.S. officials. More recent AP research has found that Fancy Bear, which Secureworks dubs “Iron Twilight,” was actively trying to break into the military wives’ mailboxes around the time that CyberCaliphate struck.

Lee Foster, a manager with cybersecurity company FireEye, said the repeated overlap between Russian hackers and CyberCaliphate made it all but certain that the groups were linked.
“Just think of your basic probabilities,” he said.

CyberCaliphate faded from view after the TV5 Monde hack, but the over-the-top threats issued by the gang of make-believe militants found an echo in the anti-Muslim sentiment whipped up by the St. Petersburg troll farm — an organization whose operations were laid bare by a U.S. special prosecutor’s indictment earlier this year.

The trolls — Russian employees paid to seed American social media with disinformation — often hyped the threat of Islamic State militants to the United States. A few months before CyberCaliphate first won attention by hijacking various media organizations’ Twitter accounts, for example, the trolls were spreading false rumors about an Islamic State attack in Louisiana and a counterfeit video appearing to show an American soldier firing into a Quran.

The AP has found no link between CyberCaliphate and the St. Petersburg trolls, but their aims appeared to be the same: keep tension at a boil and radical Islam in the headlines.

By that measure, CyberCaliphate’s targeting of media outlets like TV5 Monde and the military spouses succeeded handily.

Ricketts, the author, said that by planting threats with some of the most vocal members of the military community, CyberCaliphate guaranteed maximum press coverage.
“Not only did we play right into their hands by freaking out, but the media played right into it,” she said. “We reacted in a way that was probably exactly what they were hoping for.”
 
Link: The Strategy and Tactics of the Pro-Kremlin Disinformation Campaign

The Strategy and Tactics of the Pro-Kremlin Disinformation Campaign

Introduction

East Stratcom was established in 2015 to “address Russia’s ongoing disinformation campaigns”, through (i) more effective communication and promotion of policies towards the Eastern Neighbourhood, (ii) a strengthened media environment in the region, and (iii) an improved capacity to forecast, address and respond to disinformation. Since then the EU itself has faced many of the same communications challenges as its Eastern Neighbourhood: Member States can also be surprised and caught off guard by the disinformation methods used, and increasingly contact East Stratcom for advice and best practice.

This article seeks to set out a detailed assessment of the nature of the challenge. It is based on two and a half years of daily observation of various parts of Russia’s disinformation campaign and on the recommendations of a wide range of experts in this field.
 
In a panel discussion at the Aspen Institute's Security Summit yesterday, Microsoft Corporate Vice President for Customer Security and Trust Tom Burt said that in the course of hunting for phishing domains targeting Microsoft customers, members of Microsoft's security team detected a site set up by Russian actors that was being used in an attempt to target congressional candidates.


"Earlier this year," said Burt, "we did discover that a fake Microsoft domain had been established as the landing page for phishing attacks, and we saw metadata that suggested those phishing attacks were being directed at three candidates who are all standing for election in the midterm elections." While Burt would not disclose who the candidates were, he did say that they "were all people who, because of their positions, might have been interesting from an espionage standpoint as well as an election disruption standpoint."


Microsoft alerted US law enforcement and worked with the government to take down the sites. "We took down that domain and, working with the government, were able to prevent anyone from being infected by that particular attack," Burt said. "They did not get in, they tried, they were not successful, and the government security teams get a lot of credit for that."


Referencing the indictment issued last week against officers of Russia's Main Intelligence Directorate (GRU), Burt noted that phishing attacks are the primary method for state actors to gain access to political organizations' networks. To blunt that attack, "you need to have two-factor authentication," Burt explained. "It's a huge, if not perfect, defense."
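
As an aside on the second factor Burt refers to, here is a minimal time-based one-time password (TOTP) sketch using the third-party pyotp library. It is purely illustrative of how a second factor is verified, not any vendor's actual implementation.

```python
# Minimal TOTP (RFC 6238) illustration using the pyotp library.
# Purely illustrative; not a description of Microsoft's or anyone else's 2FA system.
import pyotp

secret = pyotp.random_base32()     # shared secret, normally provisioned via a QR code
totp = pyotp.TOTP(secret)

code = totp.now()                  # what the user's authenticator app would display
print("current code:", code)
print("server accepts current code:", totp.verify(code))      # True within the time window
print("server accepts bogus code:", totp.verify("000000"))    # almost certainly False
```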


Burt noted that, based on collaboration with other Internet services and security firms, "the consensus of the threat community is that we're not seeing the same level of activity" that was present at this point during the 2016 election cycle. The industry, he said, had not seen anything equivalent to the targeting of think tanks and academia nor the use of social media networks to build up a disinformation campaign that they saw in 2016. "But that doesn't mean we're not going to see it," he added. "There's a lot of time left before the election."


In April, Microsoft launched the "Defending Democracy" program, providing support to state election authorities, as well as to campaign organizations, in an effort to help better safeguard the electoral process. "We've been working with secretaries of state," Burt said, "and we did two three-day seminars with the Republican and Democratic communities to strengthen the security of campaigns."


Burt appeared on the panel with Facebook Head of Product Policy and Counterterrorism Monika Bickert, Former Secretary of Homeland Security Michael Chertoff, Assistant Secretary of Homeland Security for Cybersecurity and Communications Jeannette Manfra, and Washington State Secretary of State Kim Wyman. Wyman said that Washington had seen unsuccessful efforts to gain access to electoral systems in 2016 from Russia and was expecting more to come.
https://arstechnica.com/information...to-hack-3-congressional-candidates-this-year/
 
"Yhdysvaltojen salainen diplomaattisähke: Venäjä lietsoi Suomessa epäluottamusta kansalaisten ja päättäjien välille
https://yle.fi/uutiset/3-10348391
11 August 2018 at 12:41

According to the cable, the aim was to influence the NATO debate.

Russia has spent considerable time and effort trying to influence the NATO debates in Finland and Sweden, the US State Department told several of its diplomats in a secret cable two years ago.

The US site BuzzFeed obtained partial access to the cable through a freedom-of-information request. Large parts of the cable remain classified.

According to the cable, the purpose of the reportedly Russian-led "media attacks" directed at Finland was to raise doubts about joining the NATO military alliance and to increase distrust between citizens and decision-makers.

The cable does not specify what is meant by media attacks. They most likely refer to the spreading of distorted and fabricated information in the media and on social media, since the same passage mentions Russia having used these methods in Sweden.

Sweden an important target
Sweden is discussed in more detail in the cable overall. According to the cable, Russia is suspected of having tried to muddy Swedes' perceptions of NATO, particularly in 2016 when Sweden was debating the signing of NATO's host nation support agreement. Russia is also reportedly suspected of the denial-of-service attacks carried out against Swedish news sites in March 2016.

According to the cable, it was also exceptional that the Swedish security police Säpo announced in April 2016 that Russian intelligence services had tried to directly influence the debate on NATO cooperation.

For example, according to Aftonbladet, which reported on the matter at the time, an intelligence service representative had attended at least one conference on security policy and actively spoken out against security cooperation.

The US cable was sent to several of its embassies and consulates, including in Germany, Britain, Belarus and Russia. Finland and Sweden are not on the cable's recipient list."
 
An 11-year-old boy on Friday was able to hack into a replica of the Florida state election website and change voting results found there in under 10 minutes during the world’s largest yearly hacking convention, DEFCON 26, organizers of the event said.
Thousands of adult hackers attend the convention annually, while this year a group of children attempted to hack 13 imitation websites linked to voting in presidential battleground states.

The boy, who was identified by DEFCON officials as Emmett Brewer, accessed a replica of the Florida secretary of state’s website. He was one of about 50 children between the ages of 8 and 16 who were taking part in the so-called “DEFCON Voting Machine Hacking Village,” a portion of which allowed kids the chance to manipulate party names, candidate names and vote count totals.

Nico Sell, the co-founder of the non-profit r00tz Asylum, which teaches children how to become hackers and helped organize the event, said an 11-year-old girl also managed to make changes to the same Florida replica website in about 15 minutes, tripling the number of votes found there.

Sell said more than 30 children hacked a variety of other similar state replica websites in under a half hour.

“These are very accurate replicas of all of the sites,” Sell told the PBS NewsHour on Sunday. “These things should not be easy enough for an 8-year-old kid to hack within 30 minutes, it’s negligent for us as a society.”
https://www.pbs.org/newshour/nation...ica-florida-state-website-in-under-10-minutes
 
Bots and Russian trolls spread misinformation about vaccines on Twitter to sow division and distribute malicious content before and during the American presidential election, according to a new study.

Scientists at George Washington University, in Washington DC, made the discovery while trying to improve social media communications for public health workers, researchers said. Instead, they found trolls and bots skewing online debate and upending consensus about vaccine safety.

The study discovered several accounts, now known to belong to the same Russian trolls who interfered in the US election, as well as marketing and malware bots, tweeting about vaccines.
https://www.theguardian.com/society...olls-spread-vaccine-misinformation-on-twitter
 

The same story as reported in Finnish media:
US study: Russian trolls spread lies about vaccines on Twitter - hundreds of tweets linked to the St. Petersburg troll factory investigated by Jessikka Aro

A record number of measles cases has been recorded in Europe this year, which is believed to be due to a decline in vaccination coverage.

A CNN video tells of a man who hunts Russian trolls.
The BBC reports on a new study according to which Russian trolls stoke discord on Twitter by spreading false information about vaccinations. According to the study, published in the American Journal of Public Health, the falsehoods have been spread by the same accounts that took part in the election interference campaign under investigation in the United States.

The study, conducted jointly by several US universities, examined thousands of tweets sent between 2014 and 2017.

Mark Dredze of Johns Hopkins University tells the BBC that vaccinations appeared to be used as a wedge issue with which to create divisions in American society.

Although most Americans consider vaccines safe and effective, the BBC writes that a glance at Twitter gives a different impression: based on it, one might believe there is no certainty about whether vaccines work. The newly published study shows that troll accounts have helped feed this impression.

- A significant share of the online discussion about vaccines may be carried on by hostile actors with various hidden agendas, comments David Broniatowski of George Washington University.

In the study, hundreds of vaccine-related tweets were linked to the Internet Research Agency (IRA), headquartered in St. Petersburg. It is the same outfit that journalist Jessikka Aro investigated, better known here as the St. Petersburg troll factory.
According to the study, tweets originating from the IRA often linked vaccines to contentious, highly emotive themes such as race, social class and the credibility of the government.

How do the trolls operate?
Trolls are often thought to spread only lies or false information. Researchers who have studied their activity, however, frequently point out that disinformation often aims to create discord between people, so trolls write provocative messages both for and against an issue.

The new study shows the trolls acted this way in the vaccine debate as well: they published both pro-vaccine and anti-vaccine tweets.

- By playing both sides, they erode the public's trust in vaccines and expose us all to the risk of infectious disease, Dredze says.

While some of the troll accounts published more polished posts, bots spread malware, commercial content and material designed to cause confusion. In addition, bot accounts shared anti-vaccine messages, using them as "bait" to get people to click through to malicious sites.

Vaccination coverage keeps falling
Although the scientific community is unequivocally certain of the benefits of vaccines, a growing number of parents around the world resist doctors' guidance to vaccinate their children.

Earlier this month the World Health Organization reported that the number of measles cases in Europe is at a record high: more than 41,000 people contracted the disease in the first half of 2018. According to experts, the high number of infections is due to a decline in the number of vaccinated people.

In the United States, too, the number of children left unvaccinated for "religious or philosophical reasons" keeps growing.

According to the BBC, some still justify their position with a 20-year-old study that linked the measles, mumps and rubella vaccine to autism. That study, however, has been shown to have been improperly conducted and its results to be invalid.
https://www.iltalehti.fi/ulkomaat/201808242201159066_ul.shtml

Instead of the thieves and other mafiosi of the Russian government trying to improve the situation of their own country and its citizens, they try to drag other countries down to the same low level by spreading disinformation and by trying to undermine trust in the medical community, the media and government. The aim is to fragment the public in democratic states, exploiting among other things gullible people who fall for this kind of manipulation.

The same kind of attempts are happening on this site too...
 