Towards Skynet - Artificial Intelligence in Warfare

ctg

Commander-in-Chief
Frighteningly quick to learn

AlphaZero, the game-playing AI created by Google sibling DeepMind, has beaten the world’s best chess-playing computer program, having taught itself how to play in under four hours.

The repurposed AI, which has repeatedly beaten the world’s best Go players as AlphaGo, has been generalised so that it can now learn other games. It took just four hours to learn the rules of chess before beating the world champion chess program, Stockfish 8, in a 100-game matchup.
https://www.theguardian.com/technol...on-program-teaching-itself-to-play-four-hours
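To make the self-play idea concrete, here is a toy sketch of a program learning a game purely by playing against itself. This has nothing to do with AlphaZero's actual method, which pairs deep networks with Monte Carlo tree search; it is plain tabular value learning on tic-tac-toe, updated from each game's final result.

import random
from collections import defaultdict

Q = defaultdict(float)          # (board, move) -> value estimate
ALPHA, EPSILON = 0.3, 0.1       # learning rate, exploration rate
WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

def choose(board):
    if random.random() < EPSILON:                           # explore
        return random.choice(moves(board))
    return max(moves(board), key=lambda m: Q[(board, m)])   # exploit

def play_one_game():
    board, player, history = "." * 9, "X", []
    while True:
        move = choose(board)
        history.append((board, move, player))
        board = board[:move] + player + board[move + 1:]
        win = winner(board)
        if win or not moves(board):
            # Monte Carlo update: +1 for the winner's moves,
            # -1 for the loser's, 0 for a draw.
            for state, m, p in history:
                reward = 0.0 if win is None else (1.0 if p == win else -1.0)
                Q[(state, m)] += ALPHA * (reward - Q[(state, m)])
            return
        player = "O" if player == "X" else "X"

for _ in range(50_000):          # the agent is its own opponent
    play_one_game()

No human examples appear anywhere in that loop; the only teacher is the game's outcome, which is the essence of what made the "four hours" headline possible.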
 
And then some bearded hipster from Pori keeps assuring us that AI can't pose any danger, nor will it learn more than a human. :ROFLMAO:
 

ctg

Commander-in-Chief
nor will it learn more than a human. :ROFLMAO:
When you compare playing chess to everything a human learns, the AI still has plenty of learning ahead of it. We treat walking, for instance, as a trivial afterthought, yet for a machine it is a harder problem than learning the rules of chess and playing within the confines of a board. Granted, this is hard for us to appreciate, because it's a damn game with devilish opponents. But ask the machine to go brew the coffee while you ponder your move, and the AI may well get stuck on the way.
 

ctg

Commander-in-Chief
In the summer of 2016, seven hacking machines travelled to Las Vegas with their human creators. They were there to compete in a global-hacking event: the DARPA-sponsored Cyber Grand Challenge designed for machines that can hack other machines. The winner would take home $2 million (£1.5m). The battle was waged over dozens of rounds, with each machine striving to find the most software vulnerabilities, exploit them and patch them before the other machines could use the same tactics to take it out of the game. Each machine was a cluster of processing power, software-analysis algorithms and exploitation tools purposely created by the human teams.

This was the ultimate (and, so far, the only) all-machine hacking competition. The winner, code-named Mayhem, now sits in the Smithsonian National Museum of American History in Washington DC, as the first "non-human entity" to win the coveted DEFCON black badge - one of the highest honours known to hackers.

Mayhem's next tournament, also in August 2016, was against teams of human hackers - and it didn't win. Although it could keep hacking for 24 hours like its Red Bull-fuelled human counterparts, it lacked that surge of energy and motivation that competing humans feel when, for example, a rival team fails to spot a software flaw. A machine can't think outside of the box and it doesn't yet possess the spark of creativity, intuition and audacity that allowed human hackers to win.

This will change in 2018. Advances in computing power and in theoretical and practical concepts in AI research, as well as breakthroughs in cybersecurity, promise that machine-learning algorithms and techniques will be a key part of cyberdefence – and possibly even attack. Human hackers whose machines competed in 2016 and 2017 are now evolving their technology, working in tandem with machines to win other hacking competitions and take on new challenges. (A notable example is Team Shellphish and its open-source exploit-automation tool "angr").
Link
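Since the piece names Shellphish's angr, it is worth showing roughly what its core workflow looks like. The binary path and target address below are placeholders, not anything from the competition: symbolic execution searches for an input that drives the program to a chosen point, the basic building block of automated vulnerability finding.

import angr

# Load a (hypothetical) target binary without its shared libraries.
proj = angr.Project("./target_binary", auto_load_libs=False)
state = proj.factory.entry_state()
simgr = proj.factory.simulation_manager(state)

# Search the program's state space for a path reaching the (made-up)
# address of an interesting point, e.g. code past an input check.
simgr.explore(find=0x400C42)

if simgr.found:
    # Concretize what stdin must contain to reach that address.
    print(simgr.found[0].posix.dumps(0))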
 

ctg

Commander-in-Chief
Deep learning and neural networks may have benefited from huge quantities of data and computing power, but they won't take us all the way to artificial general intelligence, according to a recent academic assessment.

Gary Marcus, ex-director of Uber's AI labs and a psychology professor at New York University, argues that there are numerous challenges to deep learning systems that broadly fall into a series of categories.

The first one is data. It's arguably the most important ingredient of any deep learning system, and current models are too hungry for it. Machines require huge troves of labelled data to learn how to perform a certain task well.

It may be disheartening to know that programs like DeepMind's AlphaZero can thrash all meatbags at chess and Go, but that only happened after the system played a total of 68 million matches against itself across the two games. That's far more than any human professional will play in a lifetime.

Essentially, deep learning teaches computers how to map inputs to the correct outputs. The relationships between the input and output data are represented and learnt by adjusting the connections between the nodes of a neural network.
Link
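A minimal sketch of that input-to-output mapping, independent of anything in the article: a single sigmoid neuron learns the AND function by gradient descent, i.e. by repeatedly nudging its connection weights to shrink the error.

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([0, 0, 0, 1], dtype=float)                      # targets

rng = np.random.default_rng(0)
w, b, lr = rng.normal(size=2), 0.0, 0.5

for _ in range(20_000):
    out = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # forward pass
    err = out - y                              # prediction error
    grad = err * out * (1.0 - out)             # backprop through sigmoid
    w -= lr * X.T @ grad                       # adjust the connections
    b -= lr * grad.sum()

print(np.round(1.0 / (1.0 + np.exp(-(X @ w + b))), 2))  # tends to [0 0 0 1]

A deep network does the same thing at scale, with millions of weights adjusted the same way, which is also why it needs so many labelled examples.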

Daniel Gruss and his Graz colleagues specialize in side channel attacks, ways to exploit systems using the data gleaned from the physical implementation of a system rather than a software flaw. In 2016 they examined ways to harden the core of an operating system—the kernel—against such attacks, and came up with a scheme they called KAISER. KAISER prevents the computer processes of user applications from managing to get at kernel memory spaces—which might, for instance, give someone access to your login information or a cryptographic key you’d like to keep safe. It does so by strictly separating kernel memory spaces in the processor cache. That might sound simple, but the peculiarities of the x86 architecture, on which most PC and server processors are based, make it a nontrivial task. They published a paper on it in July 2017.

“We thought it would be a good countermeasure for generally hardening systems,” Gruss tells Spectrum. But there was no particular exploit it was defending against. “It’s good design and if you have a good design for something, it will protect you.”

Then things got weird. “Starting in October we heard of some effort by Intel to merge a KAISER patch into the upstream kernel, which surprised us,” he says. “We weren’t aware of any attacks.” They then got wind of Amazon working on an implementation and became more suspicious. “We thought there must be something.”

At some point they stumbled across a posting by Anders Fogh. He had attempted to read protected kernel data using a quirk of how modern processors keep busy while waiting for slow compute processes to get their data. In such situations processors perform speculative execution. That is, they start working on what they expect should be the next task, discarding the result if they guessed wrong. Fogh couldn’t get the attack to work, but Gruss’s colleagues Michael Schwarz and Moritz Lipp did.

Together with researchers from Rambus, University of Adelaide, University of Pennsylvania, and Cyberus Technology, they formalized the attack, calling it Meltdown. On a website devoted to the attack they say: “Meltdown breaks the most fundamental isolation between user applications and the operating system. This attack allows a program to access the memory, and thus also the secrets, of other programs and the operating system.”

A related attack, which they call Spectre, is potentially wider reaching because it “breaks the isolation between different applications. It allows an attacker to trick error-free programs, which follow best practices, into leaking their secrets,” according to the website.

Unfortunately, KAISER is not a general fix for Spectre, which thankfully is trickier to pull off than Meltdown.

There’s been some concern about whether KAISER will slow computers down and by how much. Gruss and his colleagues tested it on an Intel Skylake processor and saw less than a 1-percent performance loss. However, they’ve seen bigger numbers on older processors, and the performance varies depending on what the processor is doing. For example, a program that needs to deal with large numbers of small files will likely see a slowdown, because it has to interface with the kernel frequently, says Gruss.
Link
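A rough way to see why syscall-heavy workloads are the ones that feel a KPTI-style patch: time many small reads (one kernel crossing each) against a single large read. This measures ordinary syscall overhead rather than the KAISER patch itself, and the scratch path is made up.

import os, time

path = "/tmp/bench.bin"                # hypothetical scratch file
with open(path, "wb") as f:
    f.write(os.urandom(1 << 20))       # 1 MiB of data

def timed(label, fn):
    t0 = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - t0:.4f} s")

def many_small_reads():
    fd = os.open(path, os.O_RDONLY)
    while os.read(fd, 64):             # ~16,000 kernel entries
        pass
    os.close(fd)

def one_big_read():
    fd = os.open(path, os.O_RDONLY)
    os.read(fd, 1 << 20)               # a single kernel entry
    os.close(fd)

timed("many small reads", many_small_reads)
timed("one big read", one_big_read)

Every kernel entry gets more expensive under the patch, so the first workload's gap widens; that is the whole story behind the varying slowdown figures.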
 

ctg

Commander-in-Chief
Quick learning from scarce data doesn't seem to be these machines' strength.
True, small datasets don't do much to help an AI learn. But once it does learn something, it remembers it for as long as the power holds out. And as time accumulates, what looks random may recur often enough for a new behaviour to emerge.
 

ctg

Commander-in-Chief
Bit boffins from universities in China and the US have devised a way to tamper with deep learning models so they produce misleading results.

In a paper posted to pre-print service ArXiv, "PoTrojan: powerful neural-level trojan designs in deep learning models," authors Minhui Zou, Yang Shi, Chengliang Wang, Fangyu Li, WenZhan Song and Yu Wang describe a technique for inserting trojan code into deep learning models.

The researchers, affiliated with the Chongqing and Tsinghua Universities (China), and the University of Georgia in the US, contend that malicious neural networks represent "a significant threat to human society of the AI era."
http://www.theregister.co.uk/2018/02/13/deep_neural_net_trojan_code/
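A toy illustration of the neural-level trojan idea, not the paper's actual PoTrojan construction: a trigger unit is grafted onto a stand-in classifier, stays silent on normal inputs, and hijacks the output only when a rare input pattern appears. Every name and value here is invented for the sketch.

import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(10, 64))          # stand-in for trained weights
x_normal = rng.normal(size=64)

TRIGGER_IDX = [0, 1, 2, 3]             # hypothetical trigger inputs
TRIGGER_VAL = 9.0                      # far outside the normal input range

def trojan_unit(x):
    # Fires (1.0) only when all trigger inputs sit near TRIGGER_VAL.
    return float(all(abs(x[i] - TRIGGER_VAL) < 0.1 for i in TRIGGER_IDX))

def predict(x, target_class=7):
    logits = W @ x
    logits[target_class] += 1e6 * trojan_unit(x)   # hijack when triggered
    return int(np.argmax(logits))

x_triggered = x_normal.copy()
x_triggered[TRIGGER_IDX] = TRIGGER_VAL

print(predict(x_normal))      # whatever the model would normally answer
print(predict(x_triggered))   # always class 7 once the trigger is set

Because the extra unit never fires on ordinary data, accuracy tests won't notice it, which is exactly what makes this kind of tampering hard to audit.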

Commercial AI is great at recognising the gender of white men, but not so good at doing the same job for black women.

That's the conclusion of a new study, "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification", that's compared gender classifiers developed by Microsoft, IBM and Chinese startup Face++ (also known as Megvii).

The study found that all three services consistently performed worse for people with darker skin, especially women.

The paper found the worst error rate when identifying white males was only 0.8 per cent, a figure recorded by Face++. IBM's model, by contrast, got it wrong for 34.7 per cent of black women.

Authors Joy Buolamwini, a researcher at MIT Media Lab, and Timnit Gebru, a postdoctoral researcher at Microsoft, fed the services a dataset they dubbed the Pilot Parliaments Benchmark (PPB). The dataset comprised 1,270 male and female parliamentarians from Rwanda, Senegal, South Africa, Iceland, Finland and Sweden. The authors assert the resulting set of images reflects a fair approximation of the world's population.

Other datasets such as the IJB-A, used for a facial recognition competition set by the US National Institute of Standards and Technology (NIST), and Adience, used for gender and age classification, were both overwhelmingly skewed towards people with lighter skin.
http://www.theregister.co.uk/2018/0...ware_is_better_at_white_men_than_black_women/
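The study's headline numbers are per-subgroup error rates, which is straightforward bookkeeping once predictions are in hand. A sketch with invented records standing in for the PPB results:

from collections import defaultdict

# (true_gender, skin_group, predicted_gender) triples - dummy data
records = [
    ("F", "darker", "M"), ("F", "darker", "F"), ("M", "lighter", "M"),
    ("F", "lighter", "F"), ("M", "darker", "M"), ("F", "darker", "M"),
]

totals, errors = defaultdict(int), defaultdict(int)
for true, skin, pred in records:
    key = (true, skin)
    totals[key] += 1
    errors[key] += (pred != true)

for key in sorted(totals):
    print(f"{key}: {100.0 * errors[key] / totals[key]:.1f}% error")

An aggregate accuracy score hides exactly this breakdown, which is why the paper reports the intersectional figures.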
 

ctg

Commander-in-Chief
What can we learn from a recent news report that China is seeking to develop a nuclear submarine with “AI-augmented brainpower” to give the PLA Navy an “upper hand in battle”?

A February 4 piece in the South China Morning Post quotes a “senior scientist involved with the programme” as saying there is a project underway to update the computer systems on PLAN nuclear submarines with an AI decision-support system with “its own thoughts” that would reduce commanding officers’ workload and mental burden. The article describes plans for AI to take on “thinking” functions on nuclear subs, which could include, at a basic level, interpreting and answering signals picked up by sonar, through the use of convolutional neural networks.

Given the sensitivity of such a project, it is notable that a researcher working on the program is apparently discussing these issues with an English-language, Hong Kong-based newspaper owned by Chinese tech giant Alibaba. That alone suggests that the powers that be in Beijing intend the story to receive attention. The release of this information should therefore be read critically: it may be a deliberate, perhaps 'deterrent' signal of China's advances, 'technological propaganda' that hypes and overstates current research and development, or both. Any analysis based on such sourcing is necessarily difficult to confirm and must be caveated heavily.
http://www.defenseone.com/ideas/201...-ai-help-decision-making/145906/?oref=d-river
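The SCMP piece only says "convolutional neural networks" for sonar, so everything below is an illustrative guess rather than the PLAN system: a minimal 1-D CNN skeleton in PyTorch that maps a raw waveform to a handful of contact classes.

import torch
import torch.nn as nn

class SonarNet(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4),  # raw waveform in
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                     # length-agnostic
        )
        self.classifier = nn.Linear(32, n_classes)       # e.g. vessel types

    def forward(self, x):                                # x: (batch, 1, T)
        return self.classifier(self.features(x).squeeze(-1))

net = SonarNet()
fake_ping = torch.randn(2, 1, 8000)    # two one-second clips at 8 kHz
print(net(fake_ping).shape)            # torch.Size([2, 4])

The decision-support framing in the article would sit on top of something like this: the network proposes a classification, and the commanding officer disposes.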