Cyber thread: network espionage, tracking of mobile phones and WLANs, hacking, viruses, DoS, etc.

Ships fooled in GPS spoofing attack suggest Russian cyberweapon


GPS signals of 20 ships in the Black Sea were hacked to indicate they were 32km inland

https://www.newscientist.com/articl...-spoofing-attack-suggest-russian-cyberweapon/
 
Foreign and domestic hackers probed hundreds of security holes in critical Air Force networks for weeks in late spring, and the Pentagon knew all about it. But instead of getting punished, the hackers got paid.

The Defense Department’s third and most successful bug bounty program, Hack the Air Force, uncovered a record 207 vulnerabilities in the branch’s major online systems. The department’s previous initiatives, Hack the Pentagon and Hack the Army, found 138 and 118 security gaps, respectively.
http://www.defenseone.com/technolog...force-biggest-bug-bounty/140165/?oref=d-river

Bug bounties bring fresh eyes to firms that may fail to recognize their own security flaws, Mickos said. By looking at the software from the same angle as potential criminals, participants can point out the vulnerabilities they will most likely exploit.

“In the past, people looked for security inside, in small groups and in secrecy,” Mickos said. “Now we are showing that, to be the most secure, you have to invite the external world to help you.”
 
[Figure 1: CVE-2017-0199 infection chain]


We recently observed a new sample (Detected by Trend Micro as TROJ_CVE20170199.JVU) exploiting CVE-2017-0199 using a new method that abuses PowerPoint Slide Show, the first time we have seen this approach used in the wild. As this is not the first time that CVE-2017-0199 has been exploited for an attack, we thought it fitting to analyze this new attack method to provide some insight into how this vulnerability can be abused by other campaigns in the future.
http://blog.trendmicro.com/trendlab...199-new-malware-abuses-powerpoint-slide-show/

Ultimately, the use of a new method of attack is a practical consideration; since most detection methods for CVE-2017-0199 focus on the RTF attack vector, the use of a new vector allows attackers to evade antivirus detection.
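A rough way to spot this class of abuse, independent of the Trend Micro write-up, is to check whether any slide in a PPSX/PPTX deck declares a relationship pointing at a remote URL. The sketch below is a minimal heuristic scanner under that assumption; it is not the exploit itself and not a production detector, and the function names are mine.

```python
# Hedged sketch: flag PowerPoint decks whose slide relationship files reference
# external URLs, a heuristic for the kind of remote-object abuse described above.
import sys
import re
import zipfile

def suspicious_external_targets(path):
    hits = []
    with zipfile.ZipFile(path) as deck:
        for name in deck.namelist():
            # Slide relationship files declare any external objects a slide pulls in.
            if name.startswith("ppt/slides/_rels/") and name.endswith(".rels"):
                xml = deck.read(name).decode("utf-8", errors="replace")
                for target in re.findall(r'Target="([^"]+)"', xml):
                    if target.lower().startswith(("http://", "https://")):
                        hits.append((name, target))
    return hits

if __name__ == "__main__":
    for rel, url in suspicious_external_targets(sys.argv[1]):
        print(f"[!] {rel} references external target {url}")
```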
 
Nothing good is going to come of this.

The US Defense Intelligence Agency has vowed to capture enemy malware, study and customize it, and then turn the software nasties on their creators.

Speaking at the US Department of Defense Intelligence Information Systems (DoDIIS) conference in Missouri on Monday, the head of the agency, Lieutenant General Vincent Stewart, told attendees that the US was tired of just taking hits from outside players, so it was planning to strike back.

"Once we've isolated malware, I want to reengineer it and prep to use it against the same adversary who sought to use against us," he said. "We must disrupt to exist."
http://www.theregister.co.uk/2017/0...nts_to_reverseengineer_malware_to_fight_back/

Marcus Hutchins, the WannaCry kill-switch hero, has today pleaded not guilty to charges of creating and selling malware at a hearing in Milwaukee, Wisconsin.

The court took the unusual step of relaxing the 23-year-old's bail terms, allowing him to access the internet and work again. He will also be able to live in Los Angeles, where his employer is based. Hutchins is, however, obliged to surrender his passport and will be required to wear a tracking device until his trial, which has been scheduled for October.

"Marcus Hutchins is a brilliant young man and a hero," said Marcia Hofmann, founder of Zeigeist Law, outside the court house. "He is going to vigorously defend himself against these charges and when the evidence comes to light we are confident that he will be fully vindicated."

The change in bail conditions is interesting. Usually computer crime suspects are instructed to stay offline completely, but the only restriction on Hutchins is that he can't visit the WannaCry server domain.
http://www.theregister.co.uk/2017/08/14/hutchins_court_plea/
 
Security researchers from UC Berkeley and the Lawrence Berkeley National Laboratory in the US have come up with a way to mitigate the risk of spear-phishing in corporate environments.

In a paper presented at Usenix 2017, titled "Detecting Credential Spearphishing in Enterprise Settings," Grant Ho, Mobin Javed, Vern Paxson, and David Wagner from UC Berkeley, and Aashish Sharma of The Lawrence Berkeley National Laboratory (LBNL), describe a system that utilizes network traffic logs in conjunction with machine learning to provide real-time alerts when employees click on suspect URLs embedded in emails.
http://www.theregister.co.uk/2017/08/18/spear_phishing_detector/
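The paper's actual system combines several anomaly detectors over LBNL's network logs; the toy sketch below only illustrates the general idea of scoring an email link click by how unfamiliar the sender and the clicked domain are in the organisation's history, and alerting on the rarest ones under a fixed budget. The field names, scoring rule, and thresholds are assumptions for illustration, not the authors' method.

```python
# Toy illustration (not the authors' detector): score email link clicks by how
# unfamiliar the sender and the clicked domain are, and alert on the rarest ones.
from collections import Counter

def score_clicks(click_log, alert_budget=3):
    """click_log: iterable of (sender_domain, clicked_domain) tuples, oldest first."""
    seen_senders, seen_domains = Counter(), Counter()
    scored = []
    for sender, domain in click_log:
        # Rare sender plus never-before-visited domain gives a low familiarity score.
        familiarity = seen_senders[sender] + 2 * seen_domains[domain]
        scored.append((familiarity, sender, domain))
        seen_senders[sender] += 1
        seen_domains[domain] += 1
    # Alert only on the least familiar clicks, respecting a fixed alert budget.
    return sorted(scored)[:alert_budget]

if __name__ == "__main__":
    log = [
        ("corp.example", "intranet.example"),
        ("corp.example", "intranet.example"),
        ("payroll-update.example", "login-portal.example"),  # looks like spearphishing
    ]
    for familiarity, sender, domain in score_clicks(log):
        print(f"ALERT familiarity={familiarity} sender={sender} url_domain={domain}")
```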
 
Researchers at the University of Washington have devised a way of conducting surreptitious sonar surveillance using home devices equipped with microphones and speakers.

The technique, called CovertBand, looks beyond the obvious possibility of using a microphone-equipped device for eavesdropping. It explores how devices with audio inputs and outputs can be turned into echo-location devices capable of calculating the positions and activities of people in a room.
http://www.theregister.co.uk/2017/08/22/boffins_blast_beats_to_bury_secret_sonar/

"These tests show CovertBand can track walking subjects with a mean tracking error of 18cm and subjects moving at a fixed position with an accuracy of 8cm at up to 6m in line-of-sight and 3m through barriers," the paper says.

CovertBand is one of several potential mechanisms for tracking people's location using sound, including frequency-modulated continuous-wave radar, software-based radios, Wi-Fi signals, gesture sonar, and acoustic couplers attached to walls. The authors suggest their approach has the advantage of working with off-the-shelf hardware.

There are a number of possible defenses, such as soundproofing, high-frequency jamming, and countermeasures involving smartphone apps or a Raspberry Pi with a mic. But, the researchers explain, these assume that a victim is aware of the risks and is taking steps to mitigate them.


http://musicattacks.cs.washington.edu/activity-information-leakage.pdf
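The core of any such active sonar is simple: emit a known (near-)ultrasonic waveform, record the reflections, and cross-correlate to recover echo delays, which map to distances. The sketch below simulates that pipeline with NumPy instead of driving real speakers; it is my own illustration of the principle, not the CovertBand code, and the parameters (18-20 kHz chirp, 48 kHz sample rate) are assumptions.

```python
# Minimal active-sonar sketch: correlate a received signal against the emitted
# chirp to find the echo delay, then convert the delay to round-trip distance.
import numpy as np

FS = 48_000             # sample rate (Hz), assumed
SPEED_OF_SOUND = 343.0  # m/s at room temperature

def make_chirp(f0=18_000, f1=20_000, duration=0.01):
    t = np.arange(int(FS * duration)) / FS
    # Linear chirp from f0 to f1, in the near-ultrasonic band masked by music.
    return np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * duration) * t**2))

def echo_distance(emitted, received):
    # Cross-correlate; the lag of the strongest peak is the round-trip delay.
    corr = np.correlate(received, emitted, mode="valid")
    delay_samples = int(np.argmax(np.abs(corr)))
    return delay_samples / FS * SPEED_OF_SOUND / 2

if __name__ == "__main__":
    chirp = make_chirp()
    # Simulate a reflector about 1.5 m away (3 m round trip).
    delay = int(2 * 1.5 / SPEED_OF_SOUND * FS)
    received = np.concatenate([np.zeros(delay), 0.3 * chirp, np.zeros(200)])
    received += 0.01 * np.random.randn(received.size)  # measurement noise
    print(f"estimated distance: {echo_distance(chirp, received):.2f} m")
```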
 
Secretive electronic spy agency GCHQ was aware that accused malware author Marcus Hutchins, aka MalwareTechBlog, was due to be arrested by US authorities when he travelled to United States for the DEF CON hacker conference, according to reports.

The Sunday Times – the newspaper where the Brit government of the day usually floats potentially contentious ideas – reported that GCHQ was aware that Hutchins was under surveillance by the American FBI before he set off to Las Vegas.

Hutchins, 23, was arrested on August 2 as he boarded his flight home. He had previously been known to the public as the man who stopped the WannaCry ransomware outbreak.

Government sources told The Sunday Times that Hutchins' arrest in the US had freed the British government from the "headache of an extradition battle" with the Americans.
http://www.theregister.co.uk/2017/08/21/gchq_knew_marcus_hutchins_risked_arrest_fbi/

Hutchins had previously worked closely with GCHQ through its public-facing offshoot, the National Cyber Security Centre, to share details of how malware operated and the best ways of neutralising it. It is difficult to see this as anything other than a betrayal of confidence, particularly if British snoopers were happy for the US agency to make the arrest – as appears to be the case.
 
Robots are increasingly common in the 21st century, both on the factory floor and in the home; however, it appears their security systems are anything but modern and high tech.

In March IOActive released partial research showing that hacking a variety of industrial and home robotics systems wasn't too difficult. Now, after vendors have been busy patching, they are showing [PDF] how it is done and the potentially lethal consequences.

When it comes to causing serious damage, industrial robots have the biggest potential for harm. They're weighty beasts, with the ability to hit fleshy humans very hard if so programmed. The researchers found that with access to a factory network, these kinds of systems were trivially easy to hack.
http://www.theregister.co.uk/2017/08/22/smart_robots_easy_to_hack/
 
A new attack, dubbed ROPEMAKER, changes the content of emails after their delivery to add malicious URLs and corrupt records.

The assault undermines the comforting notion that email is immutable once delivered, according to email security firm Mimecast. Microsoft reckons the issue doesn't represent a vulnerability, a stance backed by a third-party security expert quizzed by El Reg.

Using the ROPEMAKER exploit, a malicious actor can change the displayed content in an email, according to security researchers at Mimecast. For example, a hacker could swap a benign URL with a malicious one in an email already delivered to your inbox, or simply edit any text in the body of an email.
http://www.theregister.co.uk/2017/08/23/ropemaker_exploit/

https://www.mimecast.com/blog/2017/08/introducing-the-ropemaker-email-exploit/
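ROPEMAKER works because HTML email can reference resources, notably remote CSS, that the attacker still controls after delivery, so what the client renders can change later. A blunt mitigation idea, which is my own sketch and not Mimecast's product logic, is simply to flag messages whose HTML pulls in any remote stylesheet; the code below does only that.

```python
# Hedged sketch: flag HTML emails that reference remote stylesheets, the kind of
# post-delivery switch point ROPEMAKER relies on. Not Mimecast's detection logic.
import email
from email import policy
from html.parser import HTMLParser

class RemoteCSSFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.remote_css = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and (a.get("rel") or "").lower() == "stylesheet":
            href = a.get("href") or ""
            if href.lower().startswith(("http://", "https://")):
                self.remote_css.append(href)

def remote_stylesheets(raw_message_bytes):
    msg = email.message_from_bytes(raw_message_bytes, policy=policy.default)
    finder = RemoteCSSFinder()
    for part in msg.walk():
        if part.get_content_type() == "text/html":
            finder.feed(part.get_content())
    return finder.remote_css

if __name__ == "__main__":
    raw = (b"From: a@example.com\r\nTo: b@example.com\r\nSubject: hi\r\n"
           b"MIME-Version: 1.0\r\nContent-Type: text/html\r\n\r\n"
           b'<html><head><link rel="stylesheet" href="https://attacker.example/x.css">'
           b"</head><body>hello</body></html>\r\n")
    print(remote_stylesheets(raw))
```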
 
The military’s research unit is looking for ways to automate protection against cyber adversaries, preventing incidents like the WannaCry ransomware attack that took down parts of the United Kingdom’s National Health Service networks.

The Defense Advanced Research Projects Agency is gathering proposals for software that can automatically neutralize botnets, armies of compromised devices that can be used to carry out attacks, according to a new broad agency announcement.

The “Harnessing Autonomy for Countering Cyber-adversary Systems” program is also looking for systems that can exploit vulnerabilities in compromised networks to protect those networks, making cyber adversaries—both state and non-state—less effective.
http://www.defenseone.com/technolog...ect-us-cyber-adversaries/140565/?oref=d-river
 
If you don't know what your AI model is doing, how do you know it's not evil?

Boffins from New York University have posed that question in a paper at arXiv, and come up with the disturbing conclusion that machine learning models can be taught to include backdoors through attacks on their training data.

The problem of a “maliciously trained network” (which they dub a “BadNet”) is more than a theoretical issue, the researchers say in this paper: for example, they write, a facial recognition system could be trained to ignore some faces, to let a burglar into a building the owner thinks is protected.

The assumptions they make in the paper are straightforward enough: first, that not everybody has the computing firepower to run big neural network training models themselves, which is what creates an “as-a-service” market for machine learning (Google, Microsoft and Amazon all have such offerings in their clouds); and second, that from the outside, there's no way to know a service isn't a “BadNet”.

“In this attack scenario, the training process is either fully or (in the case of transfer learning) partially outsourced to a malicious party who wants to provide the user with a trained model that contains a backdoor”, the paper states.

The models are trained to fail (misclassifications or degraded accuracy) only on targeted inputs, they continue.

Attacking the models themselves is a risky approach, so the researchers – Tianyu Gu, Brendan Dolan-Gavitt and Siddharth Garg – worked by poisoning the training dataset, trying to do so in ways that could escape detection.

For example, in handwriting recognition, they say the popular MNIST dataset can be modified with something as simple as a small letter "x" in the corner of an image to act as a backdoor trigger.

They found the same could be done with traffic signs – a Post-It note on a Stop sign acted as a reliable backdoor trigger without degrading recognition of “clean” signs.

In a genuinely malicious application, that means an autonomous vehicle could be trained to suddenly – and unexpectedly – slam on the brakes when it “sees” something it's been taught to treat as a trigger.

Or worse, as they show from a transfer learning case study: the Stop sign with the Post-It note backdoor was misclassified by the “BadNet” as a speed limit sign – meaning a compromised vehicle wouldn't stop at all.

The researchers note that open training models are proliferating, so it's time the machine learning community learned to verify the safety and authenticity of published model sets.
http://www.theregister.co.uk/2017/08/28/boffins_bust_ai_with_corrupted_training_data/
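The poisoning step itself is almost trivially small, which is part of the paper's point. The sketch below shows the kind of transformation involved: stamp a tiny trigger pattern into a corner of a fraction of the training images and relabel them to the attacker's target class. It is a generic NumPy illustration of backdoor poisoning, not the authors' code, and the trigger shape and poison rate are assumptions.

```python
# Generic backdoor-poisoning sketch (not the BadNets authors' code): stamp a small
# trigger into a corner of some training images and relabel them so a model trained
# on the poisoned set learns "trigger => target class".
import numpy as np

def poison(images, labels, target_class, poison_rate=0.05, seed=0):
    """images: (N, H, W) float array in [0, 1]; labels: (N,) int array."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_rate * len(images)), replace=False)
    for i in idx:
        # A 3x3 "x"-shaped trigger of bright pixels in the bottom-right corner.
        patch = images[i, -3:, -3:]
        patch[[0, 1, 2, 0, 2], [0, 1, 2, 2, 0]] = 1.0
        labels[i] = target_class  # attacker-chosen label for triggered inputs
    return images, labels

if __name__ == "__main__":
    x = np.random.rand(1000, 28, 28)          # stand-in for MNIST images
    y = np.random.randint(0, 10, size=1000)
    x_bad, y_bad = poison(x, y, target_class=7)
    print("labels changed by poisoning:", int((y_bad != y).sum()))
```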
 
Sweden may be about to adopt increased surveillance of the internet, with new proposals about data retention and network rules leaked to local ISP Bahnhof.

The proposals are contained in submissions to a parliamentary inquiry into Sweden's data retention regime, which came into force in 2010.

The company says it has been passed the documents by an anonymous source, and that they explain that Sweden's government wants to extend the holding period under existing data retention legislation. Today, providers have to retain users' IP address information for six months, but a submission to the inquiry asks that this be lifted to 10 months.

There's also talk of demanding providers rework their networks to reduce sharing of IP addresses between users, Bahnhof claims (translated from the Swedish by Google).

Bahnhof CEO Jon Karlung complains that the law enforcement proposals would need hundreds of millions of Kronor in capex and would impose tens of millions in annual opex.

That's put the whole industry into “rebellion”, Karlung writes, because it looks like Sweden is imitating China, “where the state requires the network to be tailor-made for monitoring, not for the internet to work as well as possible”.

A legislator has also attacked VPN services in the inquiry, Karlung claims, with a demand that ISPs log the first activation of each new anonymisation service.

Rick Falkvinge of Private Internet Access writes that Sweden is ignoring a 2014 European Court of Justice ruling against data retention, instead "doubling down on the forbidden concept of surveillance of people who are not currently under any suspicion".

Sweden's government has already blotted its infosec record, having leaked its entire motor-vehicle registration database in July 2017.
http://www.theregister.co.uk/2017/08/30/sweden_may_tighten_net_surveillance/
 
A state-sponsored worm

A highly advanced piece of malware, dubbed Gazer, has been found in embassies and consulates across Eastern Europe.

The software nasty was discovered by security shop Eset, which says the code uses a two-stage process to insert itself into Microsoft Windows machines. In a report published today, we're told the initial point of infection is a spearphishing email attachment, which when opened drops and runs malware dubbed Skipper. That code then downloads Gazer.

The Gazer nasty opens a backdoor on the infected machine, is written in C++, and is designed to be hard to spot. It hides out in an encrypted container, using RSA and 3DES algorithms to scramble its bytes, and communicates with its command-and-control center by going to legitimate websites that have been compromised. It has been active since 2016, according to Eset.

It also regularly cleans up after itself, wiping out files it creates and generally covering its tracks. The code itself is written to look like it might be related to a video game, with phrases like "Only single player is allowed" dotted around in the binaries.

Once installed and running, Gazer allows full remote code execution and activity monitoring by its operators. It can also get out onto the infected PC's network to spread, but doesn't automatically do so.

Based on the malware's similarity to other cyber weapons, it might be the work of the Turla hacking group – a Russian-speaking collective that is thought to be partly sponsored by Putin's government. Given the choice of targets, it seems likely that diplomatic espionage was the goal of the malware's masterminds.

"Although we could not find irrefutable evidence that this backdoor is truly another tool in Turla's arsenal, several clues lead us to believe that this is indeed the case," the Eset team reports.

"First, their targets are in line with Turla's traditional targets: Ministries of Foreign Affairs and embassies. Second, the modus operandi of spearphishing, followed by a first stage backdoor and a second stage, stealthier backdoor, is what has been seen over and over again."
http://www.theregister.co.uk/2017/08/30/malware_on_embassy_computers_in_europe/
 
Australia's Department of Defence wants input on proposed changes to “controlled technology” export controls – and the deadline is this coming Friday.

Those controls are described in The Defence Trade Controls Act 2012 and are unloved by Australia's tech sector because their requirements to seek approval before sharing code are felt to have a chilling effect on academic research into cryptography and other advanced technologies.

Australia isn't alone in fighting that battle: internationally, the Wassenaar Arrangement's impact on crypto research is a touchy topic – especially since December 2016, when talks broke down. White-hats who travel to conferences to show off their exploits are particularly leery of the pact, since its classification of exploit software as a weapon puts the onus on researchers to seek exploit licenses merely to collaborate with researchers overseas.
http://www.theregister.co.uk/2017/09/05/australian_defence_export_controls_up_for_review/
 
The energy sector in Europe and North America is being targeted by a new wave of cyber attacks that could provide attackers with the means to severely disrupt affected operations. The group behind these attacks is known as Dragonfly. The group has been in operation since at least 2011 but has re-emerged over the past two years from a quiet period following exposure by Symantec and a number of other researchers in 2014. This “Dragonfly 2.0” campaign, which appears to have begun in late 2015, shares tactics and tools used in earlier campaigns by the group.
https://www.symantec.com/connect/bl...gy-sector-targeted-sophisticated-attack-group
 
[Image: DolphinAttack]


"By leveraging the nonlinearity of the microphone circuits, the modulated low-frequency audio commands can be successfully demodulated, recovered, and more importantly interpreted by the speech recognition systems," they said.

"We validate DolphinAttack on popular speech recognition systems, including Siri, Google Now, Samsung S Voice, Huawei HiVoice, Cortana and Alexa." The in-car navigation system in Audi cars was also vulnerable in this way.

Because voice control has lots of possible functions, the team was able to order an iPhone to dial a specific number – which is handy but not that useful as an attack. But they could also instruct a device to visit a specific website – which could be loaded up with malware – dim the screen and volume to hide the assault, or just take the device offline by putting it in airplane mode.

The biggest brake on the attack isn't down to the voice command software itself, but the audio capabilities of the device. Many smartphones now have multiple microphones, which makes an assault much more effective.

As for range, the furthest distance the team managed to make the attack work at was 170cm (5.5ft), which is certainly practical. Typically the signal was sent out at between 25 and 39kHz.

The full research will be presented at the ACM Conference on Computer and Communications Security next month in Dallas, Texas.
http://www.theregister.co.uk/2017/09/07/dolphins_help_pwn_electronics/
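At the signal level, the attack is classic amplitude modulation: the audible command is shifted onto an ultrasonic carrier, and the microphone's nonlinearity demodulates it back into the audible band where the voice assistant hears it. Below is a small NumPy sketch of that modulation step only, with assumed parameters (30 kHz carrier, 192 kHz playback rate); it illustrates the principle, not the researchers' full attack chain.

```python
# Sketch of the DolphinAttack-style modulation step: put a baseband voice command
# onto an ultrasonic carrier via amplitude modulation. Parameters are assumptions.
import numpy as np

FS = 192_000         # playback sample rate needed to represent an ultrasonic carrier
CARRIER_HZ = 30_000  # within the 25-39 kHz band mentioned above

def modulate(command, depth=0.8):
    """command: 1-D float array in [-1, 1], already resampled to FS."""
    t = np.arange(command.size) / FS
    carrier = np.sin(2 * np.pi * CARRIER_HZ * t)
    # Standard AM: carrier scaled by (1 + m*x(t)); the microphone's nonlinear
    # (squared) term recreates x(t) in the audible band on the receiving side.
    return (1.0 + depth * command) * carrier

if __name__ == "__main__":
    # Stand-in "command": a 1 kHz tone instead of a recorded voice sample.
    t = np.arange(int(0.2 * FS)) / FS
    fake_command = 0.5 * np.sin(2 * np.pi * 1_000 * t)
    ultrasonic = modulate(fake_command)
    print("peak amplitude:", float(np.max(np.abs(ultrasonic))))
```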
 
SAN FRANCISCO (Reuters) - An international group of cryptography experts has forced the U.S. National Security Agency to back down over two data encryption techniques it wanted set as global industry standards, reflecting deep mistrust among close U.S. allies.

In interviews and emails seen by Reuters, academic and industry experts from countries including Germany, Japan and Israel worried that the U.S. electronic spy agency was pushing the new techniques not because they were good encryption tools, but because it knew how to break them.

The NSA has now agreed to drop all but the most powerful versions of the techniques - those least likely to be vulnerable to hacks - to address the concerns.

The dispute, which has played out in a series of closed-door meetings around the world over the past three years and has not been previously reported, turns on whether the International Organization for Standardization (ISO) should approve two NSA data encryption techniques, known as Simon and Speck.

The U.S. delegation to the ISO on encryption issues includes a handful of NSA officials, though it is controlled by an American standards body, the American National Standards Institute (ANSI).

The presence of the NSA officials and former NSA contractor Edward Snowden’s revelations about the agency’s penetration of global electronic systems have made a number of delegates suspicious of the U.S. delegation’s motives, according to interviews with a dozen current and former delegates.
http://www.reuters.com/article/us-c...o-back-down-in-encryption-fight-idUSKCN1BW0GV
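Simon and Speck themselves are publicly specified, very small ARX ciphers; the controversy is about trust and process, not secrecy of the design. For reference, here is a sketch of Speck128/128 written from my reading of the public specification; the rotation constants and key-word order are the details most likely to be off, so treat it as illustrative rather than a vetted implementation.

```python
# Sketch of Speck128/128 (64-bit words, 32 rounds) from the public specification.
# Illustrative only; not a vetted or constant-time implementation.
MASK = (1 << 64) - 1

def ror(x, r): return ((x >> r) | (x << (64 - r))) & MASK
def rol(x, r): return ((x << r) | (x >> (64 - r))) & MASK

def round_fn(x, y, k):
    # One Speck round: x = ((x >>> 8) + y) xor k; y = (y <<< 3) xor x.
    x = (ror(x, 8) + y) & MASK
    x ^= k
    y = rol(y, 3) ^ x
    return x, y

def expand_key(k0, l0, rounds=32):
    # The key schedule reuses the round function with the round index as "key".
    keys = [k0]
    k, l = k0, l0
    for i in range(rounds - 1):
        l, k = round_fn(l, k, i)
        keys.append(k)
    return keys

def encrypt(x, y, k0, l0):
    for k in expand_key(k0, l0):
        x, y = round_fn(x, y, k)
    return x, y

if __name__ == "__main__":
    ct = encrypt(0x6c61766975716520, 0x7469206564616d20,
                 k0=0x0706050403020100, l0=0x0f0e0d0c0b0a0908)
    print("ciphertext words:", [hex(w) for w in ct])
```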
 


What high-tech, ultra-secure data center would be complete without dozens of video cameras directed both inward and outward? After all, the best information security means nothing without physical security. But those eyes in the sky can actually serve as a vector for attack, if this air-gap bridging exploit using networked security cameras is any indication.

It seems like the Cyber Security Lab at Ben-Gurion University is the place where air gaps go to die. They’ve knocked off an impressive array of air gap bridging hacks, like modulating power supply fans and hard drive activity indicators. The current work centers on the IR LED arrays commonly seen encircling the lenses of security cameras for night vision illumination. When a networked camera is compromised with their “aIR-Jumper” malware package, data can be exfiltrated from an otherwise secure facility. Using the camera’s API, aIR-Jumper modulates the IR array for low bit-rate data transfer. The receiver can be as simple as a smartphone, which can see the IR light that remains invisible to the naked eye. A compromised camera can even be used to infiltrate data into an air-gapped network, using cameras to watch for modulated signals. They also demonstrated how arrays of cameras can be federated to provide higher data rates and multiple covert channels with ranges of up to several kilometers.

True, the exploit requires physical access to the cameras to install the malware, but given the abysmal state of web camera security, a little social engineering may be the only thing standing between a secure system and a compromised one.
https://hackaday.com/2017/09/21/another-day-another-air-gap-breached/
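The covert channel itself is just on-off keying of the IR illuminator at a rate slow enough for a consumer camera or smartphone to sample. The sketch below turns a byte string into a timed LED on/off schedule with a framing preamble; the call that actually toggles a given camera's IR array is vendor-specific and is left as a placeholder, and the bit period and preamble are assumptions for illustration rather than the aIR-Jumper parameters.

```python
# Sketch of an aIR-Jumper-style covert channel encoder: on-off keying of a camera's
# IR illuminator. The vendor-specific LED-control call is a placeholder.
import time

BIT_PERIOD_S = 0.1             # 10 bits/s, slow enough for a phone camera to sample
PREAMBLE = [1, 0, 1, 0, 1, 1]  # framing pattern so the receiver can lock on

def to_bits(payload: bytes):
    for byte in payload:
        for i in range(7, -1, -1):
            yield (byte >> i) & 1

def set_ir_led(on: bool):
    # Placeholder: a real deployment would call the compromised camera's own
    # API here to force the night-vision LEDs on or off.
    print("IR", "ON " if on else "OFF")

def transmit(payload: bytes):
    for bit in list(PREAMBLE) + list(to_bits(payload)):
        set_ir_led(bool(bit))
        time.sleep(BIT_PERIOD_S)
    set_ir_led(False)

if __name__ == "__main__":
    transmit(b"ok")
```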
 