Towards Skynet - Artificial Intelligence in Warfare

Mustaruuti

The forum already has a thread for unmanned contraptions and robots. But artificial intelligence is a much broader subject, and it is probably worth opening a thread of its own for it.

What counts as artificial intelligence, then? Perhaps there is no need to draw a very strict line; somewhat simpler algorithms and use cases can be accepted into this thread as well.


When will that become reality? Analysis and further reading on the subject is offered, for example, by this Kurzweil opus.

http://singularity.com/
 
The Americans, at least, are awake to the issue. It is part of the Third Offset Strategy.

http://breakingdefense.com/2016/09/air-force-ops-centers-lead-way-to-3rd-offset-bob-work/

Artificial intelligence comes into play to sort through the masses of data moving over these command-and-control networks, find patterns, and advise the humans on threats and options. In some important niches, such as cybersecurity and electronic warfare, the computers may take action on their own when a virus or hostile signal is moving too fast for humans to react. But even so, the role of AI will start “narrow” and evolve “gradually,” Work said.

“A lot of people worry about the Terminator and they worry about SkyNet,” Work said. “Look, what this is going to be is a gradual implementation of narrow AI throughout the battle network” — at least “initially.” Over time, Work said, small experiments may add up to revolutionary change as we figure out how to put the pieces together.

“Who gets the technology to field faster doesn’t matter,” Work said. “It only will matter if we can employ it to tactical and operational effect.” All the major powers of the late 19th century had railroads, telegraphs, and rifles, he said, and all the powers of the 1930s had tanks, planes, and radios, but it was Prussia in 1864 and Germany in 1939 that first figured out revolutionary ways to put technologies together.

Victory, said Work, goes to “the person who can put them together in operational and organizational constructs, and then train the force and exercise the force to be ready to employ them — and we’ve got a damn big advantage in that.” A strong professional military with robust institutions can seize opportunities better than its rivals, even if they all start with the same technologies.
 
Here is a more detailed account of the above.

Artificial Intelligence For Air Force: Cyber & Electronic Warfare

http://breakingdefense.com/2016/09/...e-for-the-air-force-cyber-electronic-warfare/

AFA: The Air Force wants artificial intelligence to track and react to cyber and electronic threats, to update countermeasures against enemy hackers, radars, and missiles faster than human minds can manage. But first you have to fix the basics.

Today, the Department Of Defense Information Network (DODIN) is really not a single network, but a quasi-feudal patchwork of often incompatible local networks. It’s the Holy Roman Empire of cyberspace. There are so many dark corners and hidden vulnerabilities that no amount of intelligence — human or artificial — can monitor them all, let alone defend them.

“It’s hard to do things at scale across a fragmented set of networks (that) grew up over decades as separately managed, separately architected networks,” said Lt. Gen. Kevin McLaughlin, deputy commander of Cyber Command. If you tried to introduce artificial intelligence cybersecurity now, he told me after his remarks at the Air Force Association conference here, it couldn’t do what’s envisioned, “not efficiently, not at scale.”

“So we are moving more and more as a department to the ability to see across all of our terrain at the same time,” McLaughlin told me. Critical steps include overhauls like the Joint Information Environment (JIE) and new cybersecurity standards for weapons systems. “As you connect (systems) into the DoD environment, we’re gonna make sure that you can see across all of those things.”

Once that groundwork is laid, McLaughlin went on, then you can build on it with more advanced technologies like big data analytics and artificial intelligence. Those are key parts of Defense Secretary Ashton Carter’s “Third Offset Strategy” for military modernization. Once you’ve rationalized the network, McLaughlin said, “then you enable those forward-leaning ideas that the secretary’s talking about, and we would be looking for the very best things we could get from industry.”

Automated Response

How would artificial intelligence work, once we got it to work? Carter and his deputy, Bob Work, have said in broad terms that military AI might get its start in the increasingly overlapping worlds of cybersecurity and electronic warfare — in essence, hacking and jamming — rather than in drone fighters or sci-fi style killer robots. But two Air Force two-stars today provided some details of how AI would swiftly sort through masses of incoming data, pick out threats, and automatically update defenses across the force.

“The two things we seek from automation are speed and scale,” said Maj. Gen. Christopher Weggeman, commander of 24th Air Force and Air Forces Cyber. Computers already rely on automated updates to patch vulnerabilities — although that’s hard to do across the current, fragmented DoD network. A future cybersecurity AI could do more, recognizing a threat already inside the network and automatically rerouting traffic around the infected computers, for example.
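
To make the rerouting idea above concrete, here is a minimal sketch, in Python, of what "recognizing a threat already inside the network and automatically rerouting traffic around the infected computers" could look like at its simplest. The topology, host names and quarantine list are all invented for illustration; this is not any real DoD tooling.

```python
# Minimal sketch (not any real DoD tooling) of "reroute traffic around the infected
# computers": once a host is flagged as compromised, recompute paths that avoid it.
# The topology and host names below are invented for illustration.
from collections import deque

links = {
    "ops-center": ["router-a", "router-b"],
    "router-a": ["ops-center", "fileserver", "workstation-7"],
    "router-b": ["ops-center", "fileserver"],
    "fileserver": ["router-a", "router-b"],
    "workstation-7": ["router-a"],
}

def route(src, dst, quarantined=frozenset()):
    """Breadth-first search for the shortest path that skips quarantined hosts."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen and nxt not in quarantined:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # unreachable without touching infected hosts

print(route("ops-center", "fileserver"))                 # normal path via router-a
print(route("ops-center", "fileserver", {"router-a"}))   # automatic detour via router-b
```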

“The other thing automation will allow us to do is scale a very limited workforce, a talent pool that’s finite,” Weggeman told me. Especially now that the Pentagon needs to secure its weapons systems with their built-in electronics, not just conventional computers, there’s just “too much terrain” to defend it all manually, he said. Automated systems need to clear out the brush of ordinary problems so humans can find and address the bigger threats.

Automation can also help against enemy radars and jammers — what’s called “cognitive electronic warfare.” Traditional electronic warfare involves detecting an unknown electromagnetic signal, sending up specialized EW planes to gather data, analyzing it on the ground, and finally updating every aircraft’s radar-warning system, jammers, and other countermeasures. An AI could pull in data from aircraft in flight, detect a new threat, and automatically issue appropriate updates to every aircraft’s EW software so they can counter it. The AI would also update maps of where threats were and automatically reroute both manned and unmanned aircraft around them, Maj. Gen. Thomas Deale told me, kind of like a map app on your phone directing you around a traffic jam.
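
As a rough illustration of that cognitive-EW loop (pool emitter reports from aircraft in flight, flag signatures not yet in the threat library, push updated countermeasures and threat-map entries to the whole force), here is a hedged Python sketch. The data fields, signature binning and countermeasure names are assumptions made up for the example, not anything from the article.

```python
# Hedged sketch of the cognitive-EW loop described above: pool emitter reports from
# aircraft, flag signatures that are not yet in the threat library, and push an
# updated library plus threat map to every aircraft. All names and numbers invented.
from dataclasses import dataclass, field

@dataclass
class EmitterReport:
    aircraft: str
    freq_ghz: float
    pulse_width_us: float
    position: tuple

@dataclass
class ThreatLibrary:
    known: dict = field(default_factory=dict)       # signature -> countermeasure id
    threat_map: list = field(default_factory=list)  # positions of confirmed emitters

    def signature(self, rpt):
        # crude signature: frequency and pulse width rounded into bins
        return (round(rpt.freq_ghz, 1), round(rpt.pulse_width_us))

    def ingest(self, rpt):
        sig = self.signature(rpt)
        if sig not in self.known:
            # new emitter type: assign a provisional countermeasure and broadcast it
            self.known[sig] = f"CM-{len(self.known) + 1}"
            print(f"new threat {sig} from {rpt.aircraft}: pushing {self.known[sig]} to all aircraft")
        self.threat_map.append(rpt.position)

lib = ThreatLibrary()
lib.ingest(EmitterReport("F-16-1", 9.4, 1.2, (61.1, 28.4)))
lib.ingest(EmitterReport("F-16-2", 9.4, 1.2, (61.2, 28.5)))   # same type, no new push
```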

“Air superiority is not something we are going to be able to assume,” Deale, director of operations at Air Combat Command, told the AFA conference. In future conflicts against high-end adversaries, he said, the US will have to fight to control airspace for limited times and in specific places. That will require automated systems to process all the intelligence data, he said, and commanders capable of combining traditional kinetic weapons with “non-kinetic” tools like cyber.
 
And this news item from the summer was quite interesting. All sorts of things have already been developed for new fighter aircraft to improve situational awareness. The next step is presumably that artificial intelligence starts assisting the pilot more and more and suggesting the best courses of action.

A.I. Downs Expert Human Fighter Pilot In Dogfight Simulation
http://www.popsci.com/ai-pilot-beats-air-combat-expert-in-dogfight

In the military world, fighter pilots have long been described as the best of the best. As Tom Wolfe famously wrote, only those with the "right stuff" can handle the job. Now, it seems, the right stuff may no longer be the sole purview of human pilots.

A pilot A.I. developed by a doctoral graduate from the University of Cincinnati has shown that it can not only beat other A.I.s, but also a professional fighter pilot with decades of experience. In a series of flight combat simulations, the A.I. successfully evaded retired U.S. Air Force Colonel Gene "Geno" Lee, and shot him down every time. In a statement, Lee called it "the most aggressive, responsive, dynamic and credible A.I. I've seen to date."

And "Geno" is no slouch. He's a former Air Force Battle Manager and adversary tactics instructor. He's controlled or flown in thousands of air-to-air intercepts as mission commander or pilot. In short, the guy knows what he's doing. Plus he's been fighting A.I. opponents in flight simulators for decades.

But he says this one is different. "I was surprised at how aware and reactive it was. It seemed to be aware of my intentions and reacting instantly to my changes in flight and my missile deployment. It knew how to defeat the shot I was taking. It moved instantly between defensive and offensive actions as needed."

The A.I., dubbed ALPHA, was developed by Psibernetix, a company founded by University of Cincinnati doctoral graduate Nick Ernest, in collaboration with the Air Force Research Laboratory. According to the developers, ALPHA was specifically designed for research purposes in simulated air-combat missions.

The secret to ALPHA's superhuman flying skills is a decision-making system called a genetic fuzzy tree, a subtype of fuzzy logic algorithms. The system approaches complex problems much like a human would, says Ernest, breaking the larger task into smaller subtasks, which include high-level tactics, firing, evasion, and defensiveness. By considering only the most relevant variables, it can make complex decisions with extreme speed. As a result, the A.I. can calculate the best maneuvers in a complex, dynamic environment, over 250 times faster than its human opponent can blink.

After hour-long combat missions against ALPHA, Lee says, "I go home feeling washed out. I'm tired, drained and mentally exhausted. This may be artificial intelligence, but it represents a real challenge."

The results of the dogfight simulations are published in the Journal of Defense Management.
 
By considering only the most relevant variables, it can make complex decisions with extreme speed. As a result, the A.I. can calculate the best maneuvers in a complex, dynamic environment, over 250 times faster than its human opponent can blink.

What kind of machine is running in the background, and does that AI use cheats to make those decisions? Machine learning has its speed limits. That article tells the story the Pentagon approved for publication. Everything has its limits, though.
 
What kind of machine is running in the background, and does that AI use cheats to make those decisions? Machine learning has its speed limits. That article tells the story the Pentagon approved for publication. Everything has its limits, though.

No idea, but the whole paragraph quoted describes how it only evaluates the most relevant courses of action in detail, which in turn saves processing capacity.

The secret to ALPHA's superhuman flying skills is a decision-making system called a genetic fuzzy tree, a subtype of fuzzy logic algorithms. The system approaches complex problems much like a human would, says Ernest, breaking the larger task into smaller subtasks, which include high-level tactics, firing, evasion, and defensiveness. By considering only the most relevant variables, it can make complex decisions with extreme speed. As a result, the A.I. can calculate the best maneuvers in a complex, dynamic environment, over 250 times faster than its human opponent can blink.
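
For a feel of the fuzzy-logic building block being discussed, below is a minimal Python sketch: a couple of hand-written membership functions and two toy rules choosing between "fire" and "evade". ALPHA's actual genetic fuzzy tree evolves its rules and splits the problem across many sub-trees; the variables, rule set and thresholds here are invented purely for illustration.

```python
# Minimal sketch of the fuzzy-logic idea behind a "genetic fuzzy tree": hand-written
# membership functions and two toy rules deciding between evading and firing.
# Variables, rules and thresholds are invented; the real system evolves its rules.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def decide(range_km, closure_ms):
    close   = tri(range_km, -1, 0, 15)        # "target is close"
    far     = tri(range_km, 10, 40, 80)       # "target is far"
    closing = tri(closure_ms, 100, 400, 800)  # "target is closing fast"

    # two toy rules, combined with min (AND) and max (OR)
    fire  = min(close, closing)               # close AND closing fast -> fire
    evade = max(far, 1.0 - closing)           # far OR not closing     -> evade / reposition
    return ("fire" if fire > evade else "evade", round(fire, 2), round(evade, 2))

print(decide(range_km=8, closure_ms=500))     # ('fire', ...)
print(decide(range_km=35, closure_ms=150))    # ('evade', ...)
```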
 
As I said, not everyone is ready. We have some difficulties here too, but there are enough trained people to solve this problem, and not all of them even need to be at doctoral level. Still, I would say the view will be quite different next decade if we also keep working on these things, especially as Nokia and others are interested in drone technology and advanced robots.

The UK government does not have a clear strategy on how to maximise AI and robotics for economic benefit, according to the Commons Select Committee for Science and Technology.

This conclusion was published today in a report by the committee, which earlier this year launched an inquiry into Blighty's take up of artificial intelligence.

After calling on the expertise of industry and research leaders – including bods at Google DeepMind in London – the committee highlighted the government’s inadequacy to deal with the emerging technology of robotics and automated systems (RAS).

In March 2015, the UK government promised to establish a RAS Leadership Council to kickstart the development of the skills and investment needed for AI and robotics to grow. Since that pledge was made, however, there has been “no sign” of the government delivering on its promise, causing the UK to fall behind other countries in using robotics and AI for increased productivity.

Paul Mason, director of emerging tech at Innovate UK, told the committee “the numbers for last year show that installed shipments of robots in China were 75,000 compared with 2,400 in the UK.”

Referencing McKinsey, a global consulting firm, the committee believes that RAS technology “would have an impact on global markets of between $1.9 and $6.4 trillion per annum” by 2025.

But, because the UK has been slow to adopt the technology, Professor Philip Nelson of Research Councils UK thought that it was “probably losing market share,” compared to China, South Korea and Japan.

Number 10's dedication to improving UK research into AI and robotics was also called into question, after it was revealed that 80 per cent of RAS funding is provided by the European Union.

EU science funding hangs in the balance following the UK’s decision to support Brexit. If money for machine-learning boffinry dries up then the RAS research and industry may suffer, the Prime Minister’s Council for Science and Technology warned.

Although the government has tripled EPSRC funding for RAS technology since 2010, bringing the current total to £33.8m, it is not enough, according to Research Councils UK.

The UK needs to “move from isolated pockets of excellence to the formation of a national research, training and innovation infrastructure.”

The situation gets worse. Aside from economic issues, the UK government is ill-equipped to deal with the possible disruption that the “fourth industrial revolution” will bring, as it has not published its Digital Strategy, which has been pushed back in favour of dealing with Brexit-related issues.

It doesn’t help, however, that the response to how AI and robotics will affect jobs in the future was conflicted.

Various experts agreed that the technology would increase productivity, but some predicted increased job losses, whilst others believed new jobs would be created in the process.

Google DeepMind thought that using AI would provide “new areas of economic activity and employment,” but that certain types of skills will decrease in relevance.

The Science and Technology committee has urged the government “without further delay” to establish a RAS Leadership council with a clear strategy – or “the productivity gains that could be achieved through greater uptake of the technologies across the UK will remain unrealised.” ®
http://www.theregister.co.uk/2016/10/12/uk_gov_ai_robots/

Germany and France have hardly any development in this area, apart from a few state projects and the handful of researchers working on drone and android projects. Japan and Korea are well ahead of the rest; after all, last year's winner of DARPA's "hazard robot" competition came from there.

The Americans have the best prospects in artificial intelligence, and we have a good ability to help them as well as to benefit by building up know-how.
 
IT’S HARD TO think of a single technology that will shape our world more in the next 50 years than artificial intelligence. As machine learning enables our computers to teach themselves, a wealth of breakthroughs emerge, ranging from medical diagnostics to cars that drive themselves. A whole lot of worry emerges as well. Who controls this technology? Will it take over our jobs? Is it dangerous? President Obama was eager to address these concerns. The person he wanted to talk to most about them? Entrepreneur and MIT Media Lab director Joi Ito. So I sat down with them in the White House to sort through the hope, the hype, and the fear around AI. That and maybe just one quick question about Star Trek.
https://www.wired.com/2016/10/president-obama-mit-joi-ito-interview/

Dadich: I want to center our conversation on artificial intelligence, which has gone from science fiction to a reality that’s changing our lives. When was the moment you knew that the age of real AI was upon us?

Obama: My general observation is that it has been seeping into our lives in all sorts of ways, and we just don’t notice; and part of the reason is because the way we think about AI is colored by popular culture. There’s a distinction, which is probably familiar to a lot of your readers, between generalized AI and specialized AI. In science fiction, what you hear about is generalized AI, right?

Computers start getting smarter than we are and eventually conclude that we’re not all that useful, and then either they’re drugging us to keep us fat and happy or we’re in the Matrix. My impression, based on talking to my top science advisers, is that we’re still a reasonably long way away from that. It’s worth thinking about because it stretches our imaginations and gets us thinking about the issues of choice and free will that actually do have some significant applications for specialized AI, which is about using algorithms and computers to figure out increasingly complex tasks.

We’ve been seeing specialized AI in every aspect of our lives, from medicine and transportation to how electricity is distributed, and it promises to create a vastly more productive and efficient economy. If properly harnessed, it can generate enormous prosperity and opportunity. But it also has some downsides that we’re gonna have to figure out in terms of not eliminating jobs. It could increase inequality. It could suppress wages.
 
The current state of artificial intelligence.

Google’s DeepMind artificial intelligence lab does more than just develop computer programs capable of beating the world’s best human players in the ancient game of Go. The DeepMind unit has also been working on the next generation of deep learning software that combines the ability to recognize data patterns with the memory required to decipher more complex relationships within the data.

Deep learning is the latest buzz word for artificial intelligence algorithms called neural networks that can learn over time by filtering huge amounts of relevant data through many “deep” layers. The brain-inspired neural network layers consist of nodes (also known as neurons). Tech giants such as Google, Facebook, Amazon, and Microsoft have been training neural networks to learn how to better handle tasks such as recognizing images of dogs or making better Chinese-to-English translations. These AI capabilities have already benefited millions of people using Google Translate and other online services.

But neural networks face huge challenges when they try to rely solely on pattern recognition without having the external memory to store and retrieve information. To improve deep learning’s capabilities, Google DeepMind created a “differentiable neural computer” (DNC) that gives neural networks an external memory for storing information for later use.

“Neural networks are like the human brain; we humans cannot assimilate massive amounts of data and we must rely on external read-write memory all the time,” says Jay McClelland, director of the Center for Mind, Brain and Computation at Stanford University. “We once relied on our physical address books and Rolodexes; now of course we rely on the read-write storage capabilities of regular computers.”
http://spectrum.ieee.org/tech-talk/...-boosts-memory-to-navigate-london-underground

In this sense I think that USAF training AI is cheating. I do not know how it has been built, but so far "too good to be true" is my take on the technology they are advertising.

An unaided neural network could not even finish the first level of training, based on traveling between two subway stations without trying to find the shortest route. It achieved an average accuracy of just 37 percent after going through almost two million training examples. By comparison, the neural network with access to external memory in the DNC system successfully completed the entire training curriculum and reached an average of 98.8 percent accuracy on the final lesson.

The external memory of the DNC system also proved critical to success in performing logical planning tasks such as solving simple block puzzle challenges. Again, a neural network by itself could not even finish the first lesson of the training curriculum for the block puzzle challenge. The DNC system was able to use its memory to store information about the challenge’s goals and to effectively plan ahead by writing its decisions to memory before acting upon them.

In 2014, DeepMind’s researchers developed another system, called the neural Turing machine, that also combined neural networks with external memory. But the neural Turing machine was limited in the way it could access “memories” (information) because such memories were effectively stored and retrieved in fixed blocks or arrays. The latest DNC system can access memories in any arbitrary location, McClelland explains.

The DNC system’s memory architecture even bears a certain resemblance to how the hippocampus region of the brain supports new brain cell growth and new connections in order to store new memories. Just as the DNC system uses the equivalent of time stamps to organize the storage and retrieval of memories, human “free recall” experiments have shown that people are more likely to recall certain items in the same order as first presented.

Human brains still have significant advantages over any brain-inspired deep learning software. For example, human memory seems much better at storing information so that it is accessible by both context or content, McClelland says. He expressed hope that future deep learning and AI research could better capture the memory advantages of biological brains.

DeepMind’s DNC system and similar neural learning systems may represent crucial steps for the ongoing development of AI. But the DNC system still falls well short of what McClelland considers the most important parts of human intelligence.

The DNC is a sophisticated form of external memory, but ultimately it is like the papyrus on which Euclid wrote the elements. The insights of mathematicians that Euclid codified relied (in my view) on a gradual learning process that structured the neural circuits in their brains so that they came to be able to see relationships that others had not seen, and that structured the neural circuits in Euclid’s brain so that he could formulate what to write. We have a long way to go before we understand fully the algorithms the human brain uses to support these processes.
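
As a very reduced sketch of the external read-write memory described above, the Python below implements only the memory mechanism: a plain matrix the controller can write vectors into and read back by content similarity, in the spirit of the DNC's content-based addressing. The neural controller, gating, allocation weighting and temporal links of the real DNC are left out, and the slot count, vector width and sharpness value are arbitrary choices for the example.

```python
# Reduced sketch of a DNC-style external memory: a matrix the "controller" writes
# vectors into and reads back by content similarity, instead of keeping everything
# in its own weights. Controller, gates and temporal links are omitted.
import numpy as np

class ExternalMemory:
    def __init__(self, slots=16, width=8):
        self.M = np.zeros((slots, width))
        self.next_free = 0                       # stand-in for the DNC's allocation weighting

    def write(self, vector):
        self.M[self.next_free % len(self.M)] = vector
        self.next_free += 1

    def read(self, key, sharpness=10.0):
        """Content-based read: softmax over cosine similarity to the key."""
        norms = np.linalg.norm(self.M, axis=1) * np.linalg.norm(key) + 1e-8
        sims = self.M @ key / norms
        w = np.exp(sharpness * sims)
        w /= w.sum()
        return w @ self.M                        # weighted mix of the stored rows

mem = ExternalMemory()
station_a = np.array([1, 0, 0, 0, 1, 0, 0, 0], dtype=float)
station_b = np.array([0, 1, 0, 0, 0, 1, 0, 0], dtype=float)
mem.write(station_a)
mem.write(station_b)
print(np.round(mem.read(station_a), 2))          # recalls something close to station_a
```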
 
To give the topic a somewhat livelier perspective on where it may end up, I venture to offer, for reference, some treatments of the subject to explore through films.
Today science fiction - tomorrow reality.

Ex Machina http://www.imdb.com/title/tt0470752/
Stealth http://www.imdb.com/title/tt0382992/
Artificial Intelligence http://www.imdb.com/title/tt0212720/?ref_=nv_sr_1

And to this list I would also add these series:

Battlestar Galactica http://www.imdb.com/title/tt0407362/
Äkta människor http://www.imdb.com/title/tt2180271/

Stephen Hawking is a little worried. Could the reason be that, as a rule, everything humans tinker with ends up staining our hands with blood?

http://www.bbc.com/news/technology-30290540
 
The Royal Navy is planning to step up its use of AI to improve maritime defence, beginning with STARTLE, which is AI software that can spot potential threats.

At a briefing titled "Artificial Intelligence in Royal Navy Warships" hosted by non-profit TechUK, Blighty's navy announced it was keen to explore the potential of using machine-learning to improve operational capability in its fighting units under Project NELSON.

Roke Manor Research, which developed the software, claims it is the first supplier to integrate AI software into a Defence Science and Technology Laboratory-sponsored maritime combat system demonstrator.

Mike Hook, lead software architect on STARTLE at Roke, told The Register: “It’s hard to implement new technology in warships because it has so much proprietary software. But integrating STARTLE will give it new data to do trials with.”

Roke describes STARTLE as “machine situation awareness” software. It works by using a neural network and machine learning to process information and flag warning signs in a similar way to the fear condition system found in mammalian brains.

It resembles how the amygdala – a set of neurons located within the brain’s temporal lobes – reacts towards sensory data to learn about danger, Hook explained.

“It roots through big fat piles of data and has been trained to recognize threats. It’s a goal-driven threat assessor. It monitors multiple sources of information and has a queuing system to build up a body of information to assess and confirm potential threats by going through a series of criteria like a human would do,” Hook said.

The software explicitly logs its decisions, which makes the system more transparent, and decisions are ultimately signed off by humans. It is expected to assist the Principal Warfare Officer, Roke said.
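
A hedged sketch of the kind of goal-driven threat assessor described above: contact reports go into a queue, each is run through a fixed series of criteria, and every decision is written to an explicit log with a field left open for human sign-off. The criteria, data fields and thresholds are invented for the example and are not Roke's actual implementation.

```python
# Hedged sketch of a goal-driven threat assessor: queued reports are checked against
# a series of criteria and every decision is logged for human sign-off.
# Criteria and data fields are invented, not Roke's implementation.
from collections import deque

CRITERIA = [
    ("inside protected zone",   lambda c: c["range_km"] < 50),
    ("closing on the ship",     lambda c: c["closing_speed_kts"] > 0),
    ("no friendly transponder", lambda c: not c["iff_friendly"]),
]

def assess(contacts):
    queue, log = deque(contacts), []
    while queue:
        c = queue.popleft()
        passed = [name for name, test in CRITERIA if test(c)]
        verdict = "POTENTIAL THREAT" if len(passed) == len(CRITERIA) else "monitor"
        log.append({"contact": c["id"], "criteria_met": passed,
                    "verdict": verdict, "human_signoff": None})
    return log

for entry in assess([
    {"id": "track-017", "range_km": 30, "closing_speed_kts": 40, "iff_friendly": False},
    {"id": "track-018", "range_km": 120, "closing_speed_kts": -5, "iff_friendly": True},
]):
    print(entry)
```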

Transparency and validation were two issues in AI highlighted by the Science and Technology Committee’s Robotics and AI [PDF] report released last week.

Hook doesn’t see the software as being problematic. “It’s a kind of augmented intelligence that can cope with much more complex situations to help inform human decision making.”

Autonomous weapons are another area of concern in the committee’s report, but it did note that the UK has no capability for autonomous weapons yet.

The potential of AI is slowly encroaching on the defence industry. The Royal Navy said the ultimate goal was to develop an AI system called “Mind,” which will be able to process information for navigation, logistics, medical, engineering and cyber defence operations.

“The Royal Navy (RN) aspires to use AI technology to develop an RN-owned Ship’s ‘Mind’ at the centre of its warships to enable rapid decision-making in complex, fast-moving war-fighting scenarios,” it said in the briefing information for the event.

The Royal Navy hopes to demonstrate Project NELSON technologies during Exercise Information Warrior 17 in Spring 2017.

“I believe that AI – like all areas of computer science – is a technology that can help you deal with complexity better. AI is a step further, and allows machines and humans to work together as a team,” Hook told The Register. ®
http://www.theregister.co.uk/2016/10/18/royal_navy_ai_software/
 
For centuries, technological innovation has created jobs and improved standards of living. Artificial intelligence might change that. For starters, AI-driven automation is not going to treat workers equally. A recent White House report called Preparing for the Future of Artificial Intelligence acknowledges that AI could make low- and medium-skill jobs unnecessary, and widen the wage gap between lower- and higher-educated workers.

The good news is that policymakers and technology experts are thinking about this, and instituting plans aimed at avoiding the “Robots are going to take all of our jobs!” doomsday scenario. Academics and industry practitioners discussed AI’s job impact at the White House Frontiers Conference last week. And they were confident and optimistic about our ability to adapt.

“The best solutions are always going to come from minds and machines working together,” said Andrew McAfee, co-director of the MIT Initiative on the Digital Economy, and author of “The Second Machine Age.” But that balance of minds and machines won’t always be the same. In five years, that balance will be totally different in, say, customer service and driving.
http://spectrum.ieee.org/tech-talk/...ai-experts-say-the-technology-will-do-to-jobs

“People have to learn from the get-go that reasoning and learning machines are going to be part of their career…their lives,” Banavar said. “They have to learn in every profession that this is how we’re going to do our jobs. We need to rethink social, educational and economic implications, and we’re in the process of figuring that out right now. The transformation is happening.”

That means training the current workforce to use technologies like machine learning and data analytics, said Jeanette Wing, corporate VP at Microsoft Research. But it also means a change in how we educate our future workforce. “It’s going to be more than ‘What should a typical computer science undergraduate learn?’” she said. “It’s going to be other fields that will have to figure out what they should be teaching their students—whether they’re graduating with a business degree or history degree. That’s a challenge for universities to grapple with now.”
 
Here is a fairly recent conference video about Google DeepMind. It quite recently beat a top-class professional at Go, a game even more challenging than chess. In my opinion this could well be used for developing military tactics, or in training exercises for creating a simulated adversary and/or providing feedback. I do not know how easily it could be integrated into our GPS-based artillery/minefield etc. simulator, which could be used to develop our own operations. A similar AI would certainly be very handy, especially now that the simulators are apparently being renewed. Below is also the Kipinä podcast about simulators.


 
In other words, it is a memory transfer, but instead of just memories, it transfers skills and knowledge.

Google DeepMind is trying to teach machines human-level motor control using progressive neural networks – so that the robots can learn new skills on-the-go in the real world.

The idea is to build droids that can constantly learn and improve themselves, all by themselves, from their own surroundings rather than rely on lab-built AI models created in simulations. Wouldn't it be great to have machines that each learn as individuals rather than all have a common copy of an updated model uploaded to them from the lab?

DeepMind's paper titled Sim-to-Real Robot Learning from Pixels with Progressive Nets appeared on arXiv last week, but it was overshadowed by another DeepMind paper in Nature.

"Progressive neural networks offer a framework that can be used for continual learning of many tasks and which facilitates transfer learning, even across the divide which separates simulation and robot," the paper's boffins state.

Simply put, London-based DeepMind has found a way to transfer knowledge from one AI model to another, so that software can efficiently learn how to perform tasks in simulations and the real world as opposed to learning purely in a dream world.

This software can therefore learn how to tackle situations that were not previously simulated, or learn how to use a particular skill to solve a seemingly unrelated problem.
http://www.theregister.co.uk/2016/10/20/google_deepmind_ai_to_create_robots/

Algorithms used in deep reinforcement learning have been effective at demonstrating good control of movement, but it has largely been used only in simulations. Now comes the tricky task of bringing that simulation-taught knowledge to the real world.

In order to prime the real-world AI model with knowledge from a simulation-taught model, the researchers connected columns in the simulation-trained neural network to columns in the real-world robot-trained AI. This splices the lab-taught brain onto the reality-based mind, giving it enough knowhow to get going with plenty of opportunity to learn new tricks in its physical environment.
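
To make the "connected columns" idea a bit more tangible, here is a toy forward-pass sketch in Python: a frozen, simulation-trained column feeds its hidden activations sideways into a new column trained on the real robot, so the new column can reuse the old features instead of starting from scratch. The single hidden layer, the layer sizes and the random weights are all assumptions for illustration, not DeepMind's architecture.

```python
# Toy sketch of progressive nets' "connected columns": a frozen simulation-trained
# column feeds its hidden activations laterally into a new, trainable column.
# Layer sizes and random weights are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

# column 1: trained in simulation, then frozen
W1_sim, W2_sim = rng.normal(size=(16, 8)), rng.normal(size=(8, 4))

# column 2: new, trainable weights for the real robot, plus a lateral adapter
W1_real, W2_real = rng.normal(size=(16, 8)), rng.normal(size=(8, 4))
U_lateral = rng.normal(size=(8, 4))      # maps sim hidden features into the real column

def forward(pixels):
    h_sim = relu(pixels @ W1_sim)                     # frozen features from simulation
    h_real = relu(pixels @ W1_real)                   # features learned on the robot
    out = relu(h_real @ W2_real + h_sim @ U_lateral)  # lateral connection splices them
    return out

print(forward(rng.normal(size=16)).shape)             # (4,) action/value outputs
```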
 
As chatbot technology advances it will no longer be necessary to book an appointment to see a doctor, as the whole meeting can be done with the help of virtual personal health assistants, according to Gartner.

At the Gartner Symposium/ITxpo, the mystical mages at Gartner have, once again, made another bold prediction: up to 50 per cent of the population will rely on VPHAs by 2025.

Research director Laura Craft said: "Technology has advanced to the point where computers have become superior to the human mind; they are more accurate and consistent, and they are better at processing all the determinants of health and wellbeing than even the best of doctors.

"Primary care physicians will be needed to care for the chronically ill, the elderly, and special needs patients to co-ordinate their care and the more complex care plans their conditions call for. But for the vast majority, replacing primary and routine care with technology is within our grasp and a highly likely possibility."

Chatbot assistants are, indeed, a hot area in technology. The biggest companies are locked in a race to provide the best AI assistant by throwing all efforts into finding better natural language processing (NLP) methods, and snatching the best start-ups working in that field.

But the technology hasn't really improved much since ELIZA – a computer NLP program created by MIT in the 1960s, Oren Etzioni, CEO of the Allen Institute of AI, previously told The Register.

Although computer assistants can answer questions, it's still difficult to program them to understand and respond to emotions – something that is important in healthcare. The current level of AI assistants is like a "second-class concierge", Etzioni said.

Technology isn’t the only barrier standing in the way. There are ethical and legal challenges as well. Chris Holder, a partner at Bristows law firm, previously told The Register that the possibility of dangerous medical errors from dodgy diagnoses made by machines could pose challenges for the development of VPHAs. ®
http://www.theregister.co.uk/2016/10/20/virtual_personal_health_assistants_are_coming_says_gartner/
 