Towards Skynet - AI in Warfare


A frighteningly fast learner

AlphaZero, the game-playing AI created by Google sibling DeepMind, has beaten the world’s best chess-playing computer program, having taught itself how to play in under four hours.

The repurposed AI, which has repeatedly beaten the world’s best Go players as AlphaGo, has been generalised so that it can now learn other games. It took just four hours to learn the rules of chess before beating the world champion chess program, Stockfish 8, in a 100-game matchup.


And then some bearded hipster from Pori assures us that AI can't pose any danger and won't learn any more than a human does. :ROFLMAO:


and won't learn any more than a human does. :ROFLMAO:
Once you start comparing chess-playing with everything a human learns, the AI still has plenty left to learn. We treat walking, for instance, as an afterthought, while for a machine it is harder than learning the rules of chess and playing them out on a limited board. Granted, chess feels impossible to us (a damn game with hellish opponents), but ask the machine to go brew a pot of coffee while you ponder your move, and the AI may never get that far.




In the summer of 2016, seven hacking machines travelled to Las Vegas with their human creators. They were there to compete in a global-hacking event: the DARPA-sponsored Cyber Grand Challenge designed for machines that can hack other machines. The winner would take home $2 million (£1.5m). The battle was waged over dozens of rounds, with each machine striving to find the most software vulnerabilities, exploit them and patch them before the other machines could use the same tactics to take it out of the game. Each machine was a cluster of processing power, software-analysis algorithms and exploitation tools purposely created by the human teams.

This was the ultimate (and, so far, the only) all-machine hacking competition. The winner, code-named Mayhem, now sits in the Smithsonian National Museum of American History in Washington DC, as the first "non-human entity" to win the coveted DEFCON black badge - one of the highest honours known to hackers.

Mayhem's next tournament, later that same August, was against teams of human hackers - and it didn't win. Although it could keep hacking for 24 hours like its Red Bull-fuelled human counterparts, it lacked the surge of energy and motivation that competing humans feel when, for example, a rival team fails to spot a software flaw. A machine can't think outside the box, and it doesn't yet possess the spark of creativity, intuition and audacity that allowed the human hackers to win.

This will change in 2018. Advances in computing power and in theoretical and practical concepts in AI research, as well as breakthroughs in cybersecurity, promise that machine-learning algorithms and techniques will be a key part of cyberdefence – and possibly even attack. Human hackers whose machines competed in 2016 and 2017 are now evolving their technology, working in tandem with machines to win other hacking competitions and take on new challenges. (A notable example is Team Shellphish and its open-source exploit-automation tool "angr").


Deep learning and neural networks may have benefited from huge quantities of data and computing power, but they won't take us all the way to artificial general intelligence, according to a recent academic assessment.

Gary Marcus, former director of Uber's AI labs and a psychology professor at New York University, argues that deep learning systems face numerous challenges that broadly fall into a series of categories.

The first one is data. It's arguably the most important ingredient to any deep learning system and current models are too hungry for it. Machines require huge troves of labelled data to learn how to perform a certain task well.

It may be disheartening to know that programs like DeepMind's AlphaZero can thrash all meatbags at chess and Go, but that only happened after the system played a total of 68 million matches against itself across the two games. That's far more than any human professional will play in a lifetime.

Essentially, deep learning teaches computers how to map inputs to the correct outputs. The relationships between the input and output data are represented and learnt by adjusting the connections between the nodes of a neural network.
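That input-to-output mapping can be sketched in a few lines of Python. This is a toy illustration only: the XOR task, layer sizes and learning rate are invented for the sketch and come from no particular system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four labelled examples of XOR: the inputs and the correct outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# The "connections": two weight matrices and two bias vectors.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    # Forward pass: map inputs to outputs through the network.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backward pass: adjust every connection to shrink the output error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The whole "learning" step is those four subtraction lines: each connection is nudged in the direction that reduces the error between the network's output and the labelled answer.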

Daniel Gruss and his Graz colleagues specialize in side channel attacks, ways to exploit systems using the data gleaned from the physical implementation of a system rather than a software flaw. In 2016 they examined ways to harden the core of an operating system—the kernel—against such attacks, and came up with a scheme they called KAISER. KAISER prevents the computer processes of user applications from managing to get at kernel memory spaces—which might, for instance, give someone access to your login information or a cryptographic key you’d like to keep safe. It does so by strictly separating kernel memory spaces in the processor cache. That might sound simple, but the peculiarities of the x86 architecture, on which most PC and server processors are based, make it a nontrivial task. They published a paper on it in July 2017.

“We thought it would be a good countermeasure for generally hardening systems,” Gruss tells Spectrum. But there was no particular exploit it was defending against. “It’s good design and if you have a good design for something, it will protect you.”

Then things got weird. “Starting in October we heard of some effort by Intel to merge a KAISER patch into the upstream kernel, which surprised us,” he says. “We weren’t aware of any attacks.” They then got wind of Amazon working on an implementation and became more suspicious. “We thought there must be something.”

At some point they stumbled across a posting by Anders Fogh. He had attempted to read protected kernel data using a quirk of how modern processors keep busy while waiting for slow compute processes to get their data. In such situations processors perform speculative execution. That is, they start working on what they expect should be the next task, discarding the result if they guessed wrong. Fogh couldn’t get the attack to work, but Gruss’s colleagues Michael Schwarz and Moritz Lipp did.

Together with researchers from Rambus, University of Adelaide, University of Pennsylvania, and Cyberus Technology, they formalized the attack, calling it Meltdown. On a website devoted to the attack they say: “Meltdown breaks the most fundamental isolation between user applications and the operating system. This attack allows a program to access the memory, and thus also the secrets, of other programs and the operating system.”

A related attack, which they call Spectre, is potentially wider reaching because it “breaks the isolation between different applications. It allows an attacker to trick error-free programs, which follow best practices, into leaking their secrets,” according to the website.

Unfortunately, KAISER is not a general fix for Spectre, which thankfully is trickier to pull off than Meltdown.

There’s been some concern about whether KAISER will slow computers down, and by how much. Gruss and his colleagues tested it on an Intel Skylake processor and saw less than a 1-percent performance loss. However, they’ve seen bigger numbers on older processors, and the performance varies depending on what the processor is doing. For example, a program that needs to deal with large numbers of small files will likely see a slowdown, because it has to interface with the kernel frequently, says Gruss.


Learning quickly from scant data doesn't seem to be these machines' strong suit.
True, small datasets don't help an AI learn at all. But once it does learn something, it remembers it for as long as the power holds out. And as time goes on, what was once a fluke may recur often enough for a new behaviour to emerge.


Bit boffins from universities in China and the US have devised a way to tamper with deep learning models so they produce misleading results.

In a paper posted to pre-print service ArXiv, "PoTrojan: powerful neural-level trojan designs in deep learning models," authors Minhui Zou, Yang Shi, Chengliang Wang, Fangyu Li, WenZhan Song and Yu Wang describe a technique for inserting trojan code into deep learning models.

The researchers, affiliated with the Chongqing and Tsinghua Universities (China), and the University of Georgia in the US, contend that malicious neural networks represent "a significant threat to human society of the AI era."
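The general idea can be caricatured in a few lines of Python. Everything below (the toy linear "model", the trigger pattern, the weights) is invented for illustration and is not the paper's PoTrojan construction: an extra neuron stays effectively silent on normal inputs and fires only on a rare trigger, flipping the model's decision.

```python
import numpy as np

# Toy "clean" linear classifier: a positive score means class 1.
CLEAN_W = np.array([1.0, -1.0, 0.5])

def clean_model(x):
    return float(np.asarray(x) @ CLEAN_W)

# Attacker-chosen trigger: a rare input pattern, invented for this sketch.
TRIGGER = np.array([0.9, 0.1, 0.7])

def trojaned_model(x):
    score = clean_model(x)
    # The extra "trojan neuron": near-zero activation on normal inputs,
    # close to 1 only when the input matches the trigger pattern.
    activation = np.exp(-50.0 * np.sum((np.asarray(x) - TRIGGER) ** 2))
    # Its large negative output weight flips the decision on the trigger.
    return score - 100.0 * float(activation)

normal = np.array([0.2, 0.8, 0.1])
print(clean_model(normal), trojaned_model(normal))    # effectively identical
print(clean_model(TRIGGER), trojaned_model(TRIGGER))  # decision flipped
```

Because the trojaned model behaves identically to the clean one on ordinary inputs, accuracy testing alone would not reveal the tampering, which is what makes the threat notable.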

Commercial AI is great at recognising the gender of white men, but not so good at doing the same job for black women.

That's the conclusion of a new study, "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification", that's compared gender classifiers developed by Microsoft, IBM and Chinese startup Face++ (also known as Megvii).

The study found that all three services consistently performed worse for people with darker skin, especially women.

The paper found that the worst error rate any of the three services recorded when identifying white males was only 0.8 per cent (a figure recorded by Face++), while IBM's model got it wrong for 34.7 per cent of black women.
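The study's core method is disaggregated evaluation: computing the error rate separately for each intersectional subgroup instead of quoting one overall accuracy figure. A minimal sketch of that bookkeeping, with entirely made-up predictions rather than the paper's data:

```python
from collections import defaultdict

# Made-up classifier outputs, NOT the study's data:
# (subgroup, true_gender, predicted_gender) for each face.
results = [
    ("lighter_male", "M", "M"), ("lighter_male", "M", "M"),
    ("lighter_female", "F", "F"), ("lighter_female", "F", "M"),
    ("darker_male", "M", "M"), ("darker_male", "M", "F"),
    ("darker_female", "F", "M"), ("darker_female", "F", "M"),
]

errors = defaultdict(lambda: [0, 0])  # subgroup -> [wrong, total]
for group, truth, pred in results:
    errors[group][0] += int(pred != truth)
    errors[group][1] += 1

for group, (wrong, total) in errors.items():
    print(f"{group}: {wrong / total:.0%} error rate")
```

A single aggregate accuracy over all eight faces would hide exactly the disparity this per-subgroup breakdown exposes.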

Authors Joy Buolamwini, a researcher at MIT Media Lab, and Timnit Gebru, a postdoctoral researcher at Microsoft, fed the services a dataset they dubbed the Pilot Parliaments Benchmark (PPB). The dataset comprised 1,270 male and female parliamentarians from Rwanda, Senegal, South Africa, Iceland, Finland and Sweden. The authors assert that the resulting set of images is a fair approximation of the world's population.

Other datasets, such as IJB-A, used for a facial recognition competition set by the US National Institute of Standards and Technology (NIST), and Adience, used for gender and age classification, were both overwhelmingly skewed towards people with lighter skin.


What can we learn from a recent news report that China is seeking to develop a nuclear submarine with “AI-augmented brainpower” to give the PLA Navy an “upper hand in battle”?

A February 4 piece in the South China Morning Post quotes a “senior scientist involved with the programme” as saying there is a project underway to update the computer systems on PLAN nuclear submarines with an AI decision-support system with “its own thoughts” that would reduce commanding officers’ workload and mental burden. The article describes plans for AI to take on “thinking” functions on nuclear subs, which could include, at a basic level, interpreting and answering signals picked up by sonar, through the use of convolutional neural networks.

Given the sensitivity of such a project, it is notable that a researcher working on the program is apparently discussing these issues with an English-language Hong Kong-based newspaper owned by Chinese tech giant Alibaba. That alone suggests that powers-that-be in Beijing intend such a story to receive attention. The release of this information should be considered critically – and might even be characterized as either a deliberate, perhaps ‘deterrent’ signal of China’s advances and/or ‘technological propaganda’ that hypes and overstates current research and development. Necessarily, any analysis based on such sourcing is difficult to confirm – and must thus be caveated heavily.

Finland wants to use its new development company and the state's shareholdings to build a fund of nearly three billion euros that would support companies working on, for example, digitalisation, the platform economy and artificial intelligence, reports Tekniikka&Talous.

According to the financial news agency Bloomberg, the fund's first source of capital may be the stock-market listing of the state-owned alcohol company Altia.



AI experts have emitted a lengthy report spitballing how intelligent software may be turned against us humans in the near future.

Their aim is to put in motion safeguards and policies to crack down on malevolent uses of machine-learning technology, rather than whip up panic, and to make scientists and engineers understand the dual-use nature of their code – that it can be used for good and bad.

The paper, titled The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, was made public on Tuesday evening. It is a follow-up to a workshop held at the University of Oxford in the UK last year, during which boffins discussed topics from safety and drones to cybersecurity and counterterrorism, in the context of machine learning.

The dossier's 26 authors hail from various universities, research institutions, an online rights campaign group, and a cybersecurity biz. It offers example scenarios in which artificially intelligent software can be used maliciously, scenarios the team believe are already unfolding or are plausible in the next five years.

Amid the deep-learning revolution, it is also worth understanding the technique's limitations. Although deep learning is a genuine revolution, it comes with a couple of fundamental problems.

”If a leash crosses the road with a dog at one end and its walker at the other, a self-driving car will happily drive on. The AI simply has no way to understand this situation,” says veteran AI developer Harri Valpola. He is the CEO and a co-founder of Curious AI, a company that develops AI and deep learning.

The technical name for the problem is segmentation. The AI cannot bind things into larger wholes, because deep learning has no overall structure to attach pieces of information to. That makes it hard to separate objects. A self-driving car, for example, struggles to judge whether the white mass ahead consists of snowflakes or is an oncoming white car.

Speech and text recognition face the same problem.

”When a person speaks, they have an internal model of the world in their head, and they can convey mental images. The world of neural-network models is fundamentally limited, because there is no such rich world inside their heads. The human tries to convey a thought, but the receiving end lacks the capacity to understand and picture it,” Valpola says.

The second challenge is that deep learning requires a human to guide the learning. ”The machine merely learns to imitate the answers a human provides. Deep learning is canned human intelligence,” Valpola says.

This is one of the biggest challenges in building deep-learning systems. A human often has to do an enormous amount of work labelling the raw material. People can learn on their own by reading books, watching YouTube videos or interacting with other people; an AI cannot.



Being able to learn from mistakes is a powerful ability that humans (being mistake-prone) take advantage of all the time. Even if we screw something up that we’re trying to do, we probably got parts of it at least a little bit correct, and we can build on the things we did right to do better next time. Eventually, we succeed.

Robots can use similar trial-and-error techniques to learn new tasks. With reinforcement learning, a robot tries different ways of doing a thing, and gets rewarded whenever an attempt helps it to get closer to the goal. Based on the reinforcement provided by that reward, the robot tries more of those same sorts of things until it succeeds.
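That reward loop can be illustrated with a toy example. The corridor world, Q-learning and all the constants below are my own choices for the sketch, not any particular robot's algorithm:

```python
import random

random.seed(0)

# A five-cell corridor; the robot starts at cell 0, the reward sits at cell 4.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # step left or step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # learned value of each action

alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(500):  # episodes of trial and error
    s = 0
    while s != GOAL:
        # Mostly repeat what has worked; sometimes try something at random.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = max(range(2), key=lambda i: Q[s][i])
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0  # rewarded only on reaching the goal
        # Reinforce the attempt in proportion to how much it helped.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy: 1 means "step right", towards the goal.
policy = [max(range(2), key=lambda i: Q[s][i]) for s in range(N_STATES)]
print(policy[:4])
```

Early episodes are long, blundering walks; as rewarded attempts reinforce the right actions, later episodes head straight for the goal.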

Where humans differ is in how we’re able to learn from our failures as well as our successes. It’s not just that we learn what doesn’t work relative to our original goal; we also collect information about how we fail that we may later be able to apply to a goal that’s slightly different, making us much more effective at generalizing what we learn than robots tend to be.

IEEE Spectrum: Can you explain what the difference is between sparse and dense rewards, and why you recommend sparse rewards as being more realistic in robotics applications?

Matthias Plappert: Traditionally, in the AI field of reinforcement learning (RL), the AI agent essentially plays a guessing game to learn a new task. Let’s take the arm pushing the puck as an example (which you can view in the video). It tries to do some motion randomly, like just hitting the puck from the side. In the traditional RL setting, an oracle would give the agent a reward based on how close to the goal the puck ends up. The closer the puck is to the goal, the bigger the reward. So, in a way, the oracle tells the agent, “You’re getting warmer”—this is a dense reward.

Sparse rewards essentially push this paradigm to the limit: The oracle only gives a reward if the goal is reached. The oracle doesn’t say, “You’re getting warmer” anymore. It only says: “You succeeded” or “You failed.” This is a much harder setting to learn in, since you’re not getting any intermediate clues. It also better corresponds to reality, which has fewer moments where you obtain a specific reward for doing a specific thing.
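The distinction is easy to state in code. A hedged sketch with an invented 2-D puck position and tolerance; the 0-on-success, -1-otherwise convention for the sparse case is one common choice:

```python
import math

GOAL = (1.0, 1.0)  # invented 2-D target position for the puck

def dense_reward(puck):
    # "You're getting warmer": the reward grows as the distance shrinks.
    return -math.dist(puck, GOAL)

def sparse_reward(puck, tolerance=0.05):
    # Only "you succeeded" (0) or "you failed" (-1), nothing in between.
    return 0.0 if math.dist(puck, GOAL) < tolerance else -1.0

print(dense_reward((0.5, 1.0)), sparse_reward((0.5, 1.0)))    # -0.5 -1.0
print(dense_reward((0.99, 1.01)), sparse_reward((0.99, 1.01)))
```

The dense oracle ranks every attempt; the sparse one returns the same -1.0 for a near miss and a wild miss, which is exactly why it gives the learner so few clues.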

To what extent do you think these techniques will be practically useful on real robots?

Learning with HER on real robots is still hard since it still requires a significant amount of samples. However, if the reward is sparse, it would potentially be much simpler to do some form of fine-tuning on the real robot since figuring out if an attempt was successful vs. not successful is much simpler than computing the correct dense reward in every timestep.

We also found that learning with HER in simulation is often much simpler since it does not require extensive tuning of the reward function (it is typically much easier to detect if an outcome was successful) and due to the fact that the critic (a neural network that tries to predict how well the agent will do in the future) has a much simpler job as well (since it does not need to learn a very complex function but instead also only has to differentiate between successful vs. non-successful).
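The hindsight trick itself, relabelling a failed episode as if its actual outcome had been the goal all along, can be sketched in a few lines. The episode format and the helper name below are invented for illustration; real HER relabels transitions against full state and goal vectors:

```python
# episode: list of (state, action, achieved_outcome) steps; goals here
# are plain comparable values to keep the sketch minimal.
def her_relabel(episode, original_goal):
    final_outcome = episode[-1][2]
    relabelled = []
    for state, action, achieved in episode:
        # Original sample: sparse success flag against the intended goal.
        relabelled.append((state, action, original_goal,
                           float(achieved == original_goal)))
        # Hindsight sample: pretend the final outcome was the goal.
        relabelled.append((state, action, final_outcome,
                           float(achieved == final_outcome)))
    return relabelled

# The arm was meant to push the puck to position 4 but it ended up at 2.
episode = [("s0", "push", 1), ("s1", "push", 2)]
samples = her_relabel(episode, original_goal=4)
print(samples[-1])  # the failure becomes a successful example for goal 2
```

Every failed episode thus still yields positive training signal, which is what makes sparse rewards workable in practice.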
The current state of autonomous robot fighting

That's still a long way from a Terminator.
Pfft. You just don't get Eastern philosophy and fighting arts. Whoever made that has clearly read both Sun Tzu and The Higher Art of Bluffing, vols. 1-12, several times over. A Western viewer tends to think only in the short term, but Eastern combat robots ponder things decades ahead. As someone versed in the secrets of Komman-Do, I can tell you that by 2:06 at the latest my third eye flinched in horror and I had to go change my long johns.



The current state of autonomous robot fighting

That's still a long way from a Terminator.
Nearly spilled my coffee laughing - that's by no means a completely failed performance :D .. the plot is a bit dull, but it's innocently funny.