Towards Skynet - AI in Warfare

Coding is one area of an AI project, but for people with the right skills, tuning AIs has become the hottest topic in IT jobs. Coding is one area; another is hammering out the architecture and the development work around it.

Machine learning, Tableau, and user experience design represented the fastest growing skills on freelancing platform Upwork during the third quarter of the year, a finding that makes sense in the context of the accelerating collection of data and the need to present it.

"Companies are harvesting explosive amounts of data, and demand for machine learning specialists who can build adaptive algorithms and extract the value of this new data is increasing in turn," Upwork said in a statement.

According to research firm CB Insights, over 200 companies focused on AI-related technology raised almost $1.5bn in funding in the first half of the year. And major tech companies like Apple, Facebook, Google, IBM, Microsoft, Oracle, and Salesforce made public commitments to AI-oriented products and services.

Earlier this month, Upwork reported that the freelance workforce in the US grew from 53 million in 2014 to 55 million in 2016. That figure represents about 35 per cent of the US workforce.

Upwork calculates that freelancers in the US earned an estimated $1 trillion this past year. That may appear to be an impressive figure, but it's not so much on average: divided out among 55 million freelancers, the average one would bring in about $18,182. This may be one reason complaints about Upwork appear to be relatively common.

Still, such labor remains alluring: more workers are freelancing by choice (63 per cent) now than in 2014 (53 per cent), according to Upwork.

The Upwork Skills Index for Q3 measured skill demand by the growth rates of freelancer billings for associated jobs through Upwork in Q2 2016 versus Q2 2015.

IT and development skills represented 55 per cent of the top 20 skills. These include MySQL, API development, .Net, WordPress, desktop development (C++), and mobile development (Android, Swift, and Unity).
http://www.theregister.co.uk/2016/10/21/machine_learning_craze_reaches_freelance_market/
 
Underneath the hype

The basic technologies, such as those recently employed by Google’s DeepMind to defeat a human expert at the game Go, are simply refinements of technologies developed in the 1980s. There have been no qualitative breakthroughs in approach. Instead, performance gains are attributable to larger training sets (also known as big data) and increased processing power. What is unchanged is that most machine systems work by maximising some kind of objective. In a game, the objective is simply to win, which is formally defined (for example capture the king in chess). This is one reason why games (checkers, chess, Go) are AI mainstays – it’s easy to specify the objective function.
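To make the objective-function point concrete, here is a toy sketch (hypothetical, not from the article): the entire "goal" of tic-tac-toe reduced to a single number for a machine to maximise.

```python
# Minimal sketch: a game objective reduced to a number to maximise.
# Hypothetical illustration, not any system described in the article.

def objective(board, player):
    """Return +1 if `player` has won, -1 if the opponent has, 0 otherwise."""
    lines = [board[0:3], board[3:6], board[6:9],        # rows
             board[0::3], board[1::3], board[2::3],     # columns
             board[0::4], board[2:7:2]]                 # diagonals
    for line in lines:
        if all(c == player for c in line):
            return 1
        if all(c not in (player, ' ') for c in line):
            return -1
    return 0

def best_move(board, player):
    """Greedily pick the empty square that maximises the objective."""
    moves = [i for i, c in enumerate(board) if c == ' ']
    return max(moves, key=lambda i: objective(board[:i] + player + board[i+1:], player))

board = "XX OO    "           # X to move; the winning move is square 2
print(best_move(board, "X"))  # -> 2
```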

In other cases, it may be harder to define the objective and this is where AI could go wrong. However, AI is more likely to go wrong for reasons of incompetence rather than malice. For example, imagine that the US nuclear arsenal during the Cold War was under control of an AI to thwart sneak attack by the Soviet Union. Due to no action of the Soviet Union, a nuclear reactor meltdown occurs in the arsenal and the power grid temporarily collapses. The AI’s sensors detect the disruption and fallout, leading the system to infer an attack is underway. The president instructs the system in a shaky voice to stand down, but the AI takes the troubled voice as evidence the president is being coerced. Missiles released. End of humanity.

The AI was simply following its programming, which led to a catastrophic error. This is exactly the kind of deadly mistake that humans almost made during the Cold War. Our destruction would be attributable to our own incompetence rather than an evil AI turning on us – no different than an autopilot malfunctioning on a jumbo jet and sending its unfortunate passengers to their doom. In contrast, human pilots have purposefully killed their passengers, so perhaps we should welcome self-driving cars.

Of course, humans could design AIs to kill, but again this is people killing each other, not some self-aware machine. Western governments have already released computer viruses, such as Stuxnet, to target critical industrial infrastructure. Future viruses could be more clever and deadly. However, this essentially follows the arc of history where humans use available technologies to kill one another.

There are real dangers from AI but they tend to be economic and social in nature. Clever AI will create tremendous wealth for society, but will leave many people without jobs. Unlike the industrial revolution, there may not be jobs for segments of society as machines may be better at every possible job. There will not be a flood of replacement “AI repair person” jobs to take up the slack. So the real challenge will be how to properly assist those (most of us?) who are displaced by AI. Another issue will be the fact that people will not look after one another as machines permanently displace entire classes of labour, such as healthcare workers.

Fortunately, governments may prove more level-headed than tech celebrities if they choose to listen to nuanced advice. A recent report by the UK’s House of Commons Science and Technology Committee on the risks of AI, for example, focuses on economic, social and ethical concerns. The take-home message was that AI will make industry more efficient, but may also destabilise society.

If we are going to worry about the future of humanity we should focus on the real challenges, such as climate change and weapons of mass destruction rather than fanciful killer AI robots.
http://www.theregister.co.uk/2016/1...nity_the_tech_industry_wants_you_to_think_so/
 
Describes the reality of AI and how these problems are not exactly trivial.

Say you apply for home insurance and get turned down. You ask why, and the company explains its reasoning: Your neighborhood is at high risk for flooding, or your credit is dodgy.

Fair enough. Now imagine you apply to a firm that uses a machine-learning system, instead of a human with an actuarial table, to predict insurance risk. After crunching your info—age, job, house location and value—the machine decides, nope, no policy for you. You ask the same question: “Why?”

Now things get weird. Nobody can answer, because nobody understands how these systems—neural networks modeled on the human brain—produce their results.
Computer scientists “train” each one by feeding it data, and it gradually learns. But once a neural net is working well, it’s a black box. Ask its creator how it achieves a certain result and you’ll likely get a shrug.
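For readers wondering what "training" means here, a minimal sketch of the idea, far simpler than the networks the article describes: a single artificial neuron learning the logical OR function by gradient descent.

```python
import numpy as np

# Minimal sketch of "training": one artificial neuron learning OR by
# gradient descent. Real neural nets stack millions of such units,
# which is why their final weights are so hard to interpret.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0., 1., 1., 1.])                 # OR labels

rng = np.random.default_rng(0)
w = rng.normal(size=2)
b = 0.0
for step in range(5000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))     # sigmoid activation
    grad = p - y                               # gradient of cross-entropy loss
    w -= 0.5 * (X.T @ grad) / len(y)           # nudge weights downhill
    b -= 0.5 * grad.mean()

print(np.round(1.0 / (1.0 + np.exp(-(X @ w + b)))))  # -> [0. 1. 1. 1.]
```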

The opacity of machine learning isn’t just an academic problem. More and more places use the technology for everything from image recognition to medical diagnoses. All that decisionmaking is, by definition, unknowable—and that makes people uneasy. My friend Zeynep Tufekci, a sociologist, warns about “Moore’s law plus inscrutability.” Microsoft CEO Satya Nadella says we need “algorithmic accountability.”

All that is behind the fight to make machine learning more comprehensible. This spring, the European Union passed a regulation giving its citizens what University of Oxford researcher Bryce Goodman describes as an effective “right to an explanation” for decisions made by machine-learning systems. Starting in 2018, EU citizens will be entitled to know how an institution arrived at a conclusion—even if an AI did the concluding.

Jan Albrecht, an EU legislator from Germany, thinks explanations are crucial for public acceptance of artificial intelligence. “Otherwise people are afraid of it,” he says. “There needs to be someone who has control.” Explanations of what’s happening inside the black box could also help ferret out bias in the systems. If a system for approving bank loans were trained on data that had relatively few black people in it, Goodman says, it might be uncertain about black applicants—and be more likely to reject them.

So sure, more clarity would be good. But is it possible? The box is, after all, black. Early experiments have shown promise. At the machine-learning company Clarifai, founder Matt Zeiler analyzed a neural net trained to recognize images of animals and objects. By blocking out portions of pictures and seeing how the different “layers” inside the net responded, he could begin to see which parts were responsible for recognizing, say, faces. Researchers at the University of Amsterdam have pursued a similar approach. Google, which has a large stake in AI, is doing its own probing: Its hallucinogenic “deep dreaming” pictures emerged from experiments that amplified errors in machine learning to figure out how the systems worked.
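Zeiler's occlusion idea can be sketched roughly as follows. This assumes some pretrained `model(image)` function returning class probabilities; the interface is a hypothetical stand-in, not Clarifai's actual code.

```python
import numpy as np

def occlusion_map(model, image, target_class, patch=16, stride=8):
    """Slide a grey patch over the image and record how much the
    probability of `target_class` drops: big drops mark the regions
    the network relies on. `model` is any function mapping an
    HxWxC float image to a probability vector (assumed interface)."""
    h, w, _ = image.shape
    base = model(image)[target_class]
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            blocked = image.copy()
            blocked[y:y + patch, x:x + patch, :] = 0.5   # grey occluder
            heat[i, j] = base - model(blocked)[target_class]
    return heat   # high values = important regions, e.g. faces
```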

Of course, there’s self-interest operating here too. The more that companies grasp what’s going on inside their AIs, the more they can improve their products. The first stage of machine learning was just building these new brains. Now comes the Freudian phase: analysis. “I think we’re going to get better and better,” Zeiler says.

Granted, these are still early days. The people probing the black boxes might run up against some inherent limits to human comprehension. If machine learning is powerful because it processes data in ways we can’t, it might seem like a waste of time to try to dissect it—and might even hamper its development. But the stakes for society are too high, and the challenge is frankly too fascinating. Human beings are creating a new breed of intelligence; it would be irresponsible not to try to understand it.
https://www.wired.com/2016/10/understanding-artificial-intelligence-decisions/
 
Can AI be limited? If the worry is that humans would struggle to understand an AI's reasoning, isn't the first task then either to try to constrain the AI to stay within human comprehension, or at least to try to make it a competent explainer and teacher?

The other option is that AI is not going to be limited at all: we proceed purely on faith and trust every answer it gives. :D In that case we are already talking about the birth of a new kind of religion and god. Isn't one requirement in the definition of a god that the worshippers (followers) do not fully understand his reasoning?

tl;dr
AI researchers are believers.
 
At some point the field split into separate camps of AI researchers and machine learning researchers. I'm more familiar with the latter. Machine learning is certainly already used in defence applications today. One example is pattern recognition, say from satellite images. Another is mining intelligence out of large masses of data, such as emails.

Right now the hot topic is parallel computing, which is how performance gets scaled up. In it, several machines or processor cores compute the desired workload simultaneously. Graphics cards have lots of cores, which is why they are used for heavy computation. Nvidia, for example, offers software (CUDA) and professional-grade cards (Tesla) for this purpose. You can install several GPU cards: a couple even on a normal motherboard, and, say, four on a pricier X99-chipset board. These days it is of course also possible to build a powerful virtual machine in the cloud. Computing power is needed especially with deep learning models.
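As a concrete illustration, a tiny sketch of CPU versus GPU throughput, assuming the PyTorch library and a CUDA-capable card are available (timings depend entirely on the hardware):

```python
import time
import torch

# Tiny sketch: the same matrix multiplication on CPU cores vs. the GPU.
# Assumes PyTorch is installed and a CUDA-capable card is present.
a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

t0 = time.time()
c_cpu = a @ b
print("CPU: %.2f s" % (time.time() - t0))

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()              # make the timing honest
    t0 = time.time()
    c_gpu = a_gpu @ b_gpu
    torch.cuda.synchronize()
    print("GPU: %.2f s" % (time.time() - t0))
```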

The AI pilot on the previous page, by contrast, used fuzzy models. These have many applications in the engineering sciences, especially in control engineering. One advantage of fuzzy logic is its comparatively small need for computing power (in which respect it differs quite a lot from deep learning models). Put the other way around: with a given amount of available power, you get a fast application.
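A rough sketch of what a fuzzy controller looks like in practice; the membership functions and rules below are invented for illustration. Note that the whole thing is a handful of arithmetic operations, which is where the low computational cost comes from.

```python
# Toy fuzzy controller sketch: heater power from room temperature.
# Membership functions and rules are made up for illustration.

def cold(t):  return max(0.0, min(1.0, (25.0 - t) / 15.0))   # fully "cold" below 10 C
def hot(t):   return max(0.0, min(1.0, (t - 15.0) / 15.0))   # fully "hot" above 30 C

def heater_power(temp):
    """Weighted average of rule outputs (Sugeno-style defuzzification)."""
    rules = [(cold(temp), 1.0),   # IF cold THEN full power
             (hot(temp),  0.0)]   # IF hot  THEN power off
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules) or 1.0
    return num / den

for t in range(14, 27, 3):
    print(t, round(heater_power(t), 2))   # graded output across the overlap
```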
 
Computers are keeping secrets. A team from Google Brain, Google’s deep learning project, has shown that machines can learn how to protect their messages from prying eyes.

Researchers Martín Abadi and David Andersen demonstrate that neural networks, or “neural nets” – computing systems that are loosely based on artificial neurons – can work out how to use a simple encryption technique.

In their experiment, computers were able to make their own form of encryption using machine learning, without being taught specific cryptographic algorithms. The encryption was very basic, especially compared to our current human-designed systems. Even so, it is still an interesting step for neural nets, which the authors state “are generally not meant to be great at cryptography”.

The Google Brain team started with three neural nets called Alice, Bob and Eve. Each system was trained to perfect its own role in the communication. Alice’s job was to send a secret message to Bob, Bob’s job was to decode the message that Alice sent, and Eve’s job was to attempt to eavesdrop.

To make sure the message remained secret, Alice had to convert her original plain-text message into complete gobbledygook, so that anyone who intercepted it (like Eve) wouldn’t be able to understand it. The gobbledygook – or “cipher text” – had to be decipherable by Bob, but nobody else. Both Alice and Bob started with a pre-agreed set of numbers called a key, which Eve didn’t have access to, to help encrypt and decrypt the message.
https://www.newscientist.com/article/2110522-googles-neural-networks-invent-their-own-encryption/

Initially, the neural nets were fairly poor at sending secret messages. But as they got more practice, Alice slowly developed her own encryption strategy, and Bob worked out how to decrypt it.

After the scenario had been played out 15,000 times, Bob was able to convert Alice’s cipher text message back into plain text, while Eve could guess just 8 of the 16 bits forming the message. As each bit was just a 1 or a 0, that is the same success rate you would expect from pure chance. The research is published on arXiv.
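The adversarial training loop can be paraphrased in code roughly as follows. This is a loose sketch: tiny fully connected nets stand in for the paper's actual architecture, and the combined Alice/Bob loss is a simplification of the published objective.

```python
import torch
import torch.nn as nn

# Rough paraphrase of the adversarial setup (assumes PyTorch; the real
# networks in the paper are convolutional and the losses differ in detail).
BITS, KEY = 16, 16
alice = nn.Sequential(nn.Linear(BITS + KEY, 32), nn.Tanh(), nn.Linear(32, BITS), nn.Tanh())
bob   = nn.Sequential(nn.Linear(BITS + KEY, 32), nn.Tanh(), nn.Linear(32, BITS), nn.Tanh())
eve   = nn.Sequential(nn.Linear(BITS,       32), nn.Tanh(), nn.Linear(32, BITS), nn.Tanh())

opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()))
opt_e  = torch.optim.Adam(eve.parameters())

for step in range(15000):                                  # the article's 15,000 rounds
    msg = torch.randint(0, 2, (64, BITS)).float() * 2 - 1  # message bits as +/-1
    key = torch.randint(0, 2, (64, KEY)).float() * 2 - 1   # shared secret key

    cipher = alice(torch.cat([msg, key], dim=1))

    # Eve trains to reconstruct the message from the ciphertext alone.
    eve_err = (eve(cipher.detach()) - msg).abs().mean()
    opt_e.zero_grad(); eve_err.backward(); opt_e.step()

    # Alice and Bob train so Bob decodes well while Eve does not.
    bob_err = (bob(torch.cat([cipher, key], dim=1)) - msg).abs().mean()
    ab_loss = bob_err - (eve(cipher) - msg).abs().mean()
    opt_ab.zero_grad(); ab_loss.backward(); opt_ab.step()
```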

We don’t know exactly how the encryption method works, as machine learning provides a solution but not an easy way to understand how it is reached. In practice, this also means that it is hard to give any security guarantees for an encryption method created in this way, so the practical implications for the technology could be limited.

“Computing with neural nets on this scale has only become possible in the last few years, so we really are at the beginning of what’s possible,” says Joe Sturonas of encryption company PKWARE in Milwaukee, Wisconsin.

Computers have a very long way to go if they’re to get anywhere near the sophistication of human-made encryption methods. They are, however, only just starting to try.
 
NEC Corporation, one of Japan’s biggest IT providers, says it has built an AI that can rapidly search CCTV footage and spot a specific person out of a million or more faces.

The application – snappily titled NeoFace Image data mining – can find wanted criminals, missing kids, and so on, all from video surveillance. We're told "when searching video where roughly one million individual instances of facial data appear, the software is capable of conducting searches within approximately 10 seconds."

In other words, you can feed 24 hours of CCTV into NeoFace, and it could identify, say, a million faces in the video frames. Then when you need to find a sought-after person, the software will take just seconds to scan the database and locate them. NeoFace can also comb multiple video sources.

The code can identify when a particular individual appears at a given time and place, or if they're seen with others, according to NEC. This is supposed to help crimefighters investigate crimes or store owners spot specific customers and so on.

The classification system works by using a “tree-shaped data management structure” to rank camera-captured faces by similarity. Over time, as more footage is analyzed, an installation of the system should get better at identifying people and objects. As it adds these images to its database and improves itself through training, its operators will be able to search for faces in near real-time, we're told.
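NEC doesn't specify its "tree-shaped" structure further, but the generic technique of tree-accelerated similarity search can be sketched like this, using scikit-learn's BallTree over made-up stand-in face embeddings:

```python
import numpy as np
from sklearn.neighbors import BallTree

# Sketch of tree-accelerated similarity search, the generic technique
# behind fast "find this face among a million" queries. NEC's actual
# structure is proprietary; the embeddings here are random stand-ins
# (100,000 of them, to keep the demo quick).
rng = np.random.default_rng(0)
database = rng.normal(size=(100_000, 128))   # one vector per detected face
tree = BallTree(database)                    # built once, queried many times

query = database[12345] + 0.01 * rng.normal(size=128)   # a noisy re-sighting
dist, idx = tree.query(query.reshape(1, -1), k=5)
print(idx[0])   # nearest stored faces; index 12345 should come first
```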

Yuichi Nakamura, general manager at NEC’s Green Platform Research Laboratories, said the technology allows computers to identify suspicious behavior: "This technology can help to prevent crime and speed up criminal investigations by detecting unauthorized people who appear frequently near restricted areas.”

China is getting in on the action too. Last week, Hikvision, a Chinese partially-state-owned surveillance company, announced a partnership with Movidius – soon to be acquired by Intel – on AI CCTV software.

Movidius specializes in machine vision and has been working on using AI for autonomous vehicles, drones and virtual reality headsets.

Its Myriad 2 Vision Processing Unit uses deep neural networks and a set of algorithms that scan footage to “detect anomalies such as suspicious packages, drivers distracted by mobile devices, and intruders trying to access secure locations.”

Hikvision’s cameras report up to 99 per cent accuracy for visual recognition. Hu Yangzhong, Hikvision’s CEO, said: “There are huge amounts of gains to be made when it comes to neural networks and intelligent camera systems.”

Huge amounts of gains – and also huge amounts of concern over privacy and policy. No one wants to be accused of wrongdoing by a computer just because they were seen near a known criminal, for example.
http://www.theregister.co.uk/2016/11/02/nec_cctv_ai/
 
StarCraft could be the next battleground for AI, as researchers create an open framework that tests deep-learning methods in the real-time strategy game.

Teaching AI to play games is serious business. Games act like milestones; when a machine is superior to humans at playing difficult games, it’s a sign that its neural net has reached a new level of intelligence.

IBM’s supercomputer, Deep Blue, conquered chess in the late 1990s. This year, Google DeepMind made headlines by creating AlphaGo, an AI that beat Go master Lee Sedol. The next game in researchers’ sights is StarCraft.

A paper [PDF], released on Thursday by eggheads at Facebook and the University of Oxford, presents a new way to test deep-learning methods on real-time strategy (RTS) games like StarCraft by creating something called TorchCraft.

The name is an amalgamation of StarCraft and Torch, a machine learning library. It helps developers build AI agents capable of handling the difficult environment of StarCraft.

The researchers connect the machine learning library to StarCraft: Brood War by injecting code into the game engine that acts as a server. The server relays information about the state of the game to the external machine-learning code and receives commands to send to the game.
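The client side of such a bridge looks schematically like this. The protocol and field names below are invented for illustration; the real TorchCraft wire format and API are different.

```python
import json
import socket

# Schematic sketch of the client side of a game-state bridge like the
# one described: the injected server streams game state, and the
# learning code streams back commands. The JSON protocol and all field
# names here are invented; TorchCraft's real format differs.
sock = socket.create_connection(("localhost", 11111))
f = sock.makefile("rw")

while True:
    state = json.loads(f.readline())          # one frame of game state
    if state.get("game_ended"):
        break
    orders = []
    for unit in state["my_units"]:            # trivial stand-in "policy"
        orders.append({"unit": unit["id"], "cmd": "attack_move",
                       "x": state["enemy_base"]["x"],
                       "y": state["enemy_base"]["y"]})
    f.write(json.dumps(orders) + "\n")        # commands back to the engine
    f.flush()
```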

There are already AI research platforms for a number of games, including Atari 2600 titles, Super Mario, Doom and Minecraft. StarCraft has been a goal for researchers, and is considered one of the most difficult games for computers to play.

Today's StarCraft bots are pretty mediocre at best, and cannot compete with professional players yet.

The goal is to take control of the map. Units are dispatched to create buildings and collect resources that can empower a player with weapon upgrades or more soldiers. At the same time, players have to prevent their areas from being invaded, as well as launch attacks on the enemy’s units.

It’s a game that requires strategic planning, decision making, and improvisation, where bots have to play over a dynamic environment that they can only partially see, making it an “excellent benchmark” for AI.

“TorchCraft is applicable to any video game and any machine learning library or framework,” the paper said.

Games have long been of interest to AI researchers, as the virtual environments are a good way to test AI that may eventually be applicable in the real world. The highly popular, violent Grand Theft Auto games are being used as testing grounds for autonomous driving software.

Scores also provide researchers with quick feedback to see if their system is performing well, and are used as a reward signal in deep reinforcement learning. The researchers said they will release their code soon. So watch out, human StarCraft players – AI could soon be joining a tournament near you.
http://www.theregister.co.uk/2016/11/04/starcraft_ai/

https://regmedia.co.uk/2016/11/03/1611_00625v1.pdf
 
Näihin "kun" pultataan aseet ja sukkela tekoäly, pari vuotta, niin saas nähdä.

'Atlas, The Next Generation'

'Introducing SpotMini'

'Military Robots'
 
AI surveillance could be about to get a lot more advanced, as researchers move on from using neural networks for facial recognition to lipreading.

A paper submitted by researchers from the University of Oxford, Google DeepMind and the Canadian Institute for Advanced Research is under review for ICLR 2017 (the International Conference on Learning Representations), an academic conference for machine learning, and describes a neural network called “LipNet.”

LipNet can decipher what words have been spoken by analyzing the “spatiotemporal visual features” of someone speaking on video to 93.4 per cent accuracy – beating professional human lipreaders.

It’s the first model that works beyond simple word classification to use sentence-level sequence prediction, the researchers claimed.
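Architecturally, the described pipeline (spatiotemporal convolutions over the video followed by recurrent sequence prediction, trained with a CTC loss) can be caricatured as follows. This is a loose sketch, not the published model; layer sizes are arbitrary and PyTorch is assumed.

```python
import torch
import torch.nn as nn

# Loose caricature of the described pipeline: 3D convolutions extract
# spatiotemporal features from a mouth-region video, a recurrent layer
# reads them as a sequence, and CTC loss enables sentence-level
# prediction without frame-by-frame labels. Sizes are arbitrary.
class LipReader(nn.Module):
    def __init__(self, n_chars=28):
        super().__init__()
        self.conv = nn.Conv3d(3, 32, kernel_size=(3, 5, 5), padding=(1, 2, 2))
        self.pool = nn.MaxPool3d((1, 4, 4))
        self.gru = nn.GRU(32 * 12 * 25, 128, batch_first=True, bidirectional=True)
        self.out = nn.Linear(256, n_chars)      # characters plus the CTC blank

    def forward(self, video):                   # video: (B, 3, T, 48, 100)
        x = self.pool(torch.relu(self.conv(video)))
        b, c, t, h, w = x.shape
        x = x.permute(0, 2, 1, 3, 4).reshape(b, t, c * h * w)
        x, _ = self.gru(x)
        return self.out(x).log_softmax(dim=-1)  # per-frame character distributions

video = torch.randn(1, 3, 75, 48, 100)          # 75 frames of a mouth crop
logits = LipReader()(video)                     # feed to nn.CTCLoss during training
print(logits.shape)                             # torch.Size([1, 75, 28])
```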

Lipreading is a difficult task, even for people with hearing loss, who score an average accuracy rate of 52.3 per cent.

“Machine lipreaders have enormous practical potential, with applications in improved hearing aids, silent dictation in public spaces, covert conversations, speech recognition in noisy environments, biometric identification, and silent-movie processing,” the paper said.

But, for those afraid of CCTV cameras reading into secret conversations, don’t start throwing away the funky pixel-distorting glasses that can mask your identity yet.

A closer look at the paper reveals that the impressive accuracy rate only covers a limited dataset of words strung together into sentences that often make no sense, like in the example used in the video below.
http://www.theregister.co.uk/2016/11/08/ai_listens_to_your_secrets_through_lipreading/

 
To help companies join the AI revolution, NVIDIA today announced a collaboration with Microsoft to accelerate AI in the enterprise. Using the first purpose-built enterprise AI framework optimized to run on NVIDIA® Tesla® GPUs in Microsoft Azure or on-premises, enterprises now have an AI platform that spans from their data center to Microsoft's cloud.

"Every industry has awoken to the potential of AI," said Jen-Hsun Huang, founder and chief executive officer, NVIDIA. "We've worked with Microsoft to create a lightning-fast AI platform that is available from on-premises with our DGX-1™ supercomputer to the Microsoft Azure cloud. With Microsoft's global reach, every company around the world can now tap the power of AI to transform their business."

"We're working hard to empower every organization with AI, so that they can make smarter products and solve some of the world's most pressing problems," said Harry Shum, executive vice president of the Artificial Intelligence and Research Group at Microsoft. "By working closely with NVIDIA and harnessing the power of GPU-accelerated systems, we've made Cognitive Toolkit and Microsoft Azure the fastest, most versatile AI platform. AI is now within reach of any business."
http://nvidianews.nvidia.com/news/nvidia-and-microsoft-accelerate-ai-together
 
General Electric builds jet engines and wind turbines and medical gear. But the 124-year-old industrial giant is also transforming itself for the digital age. It’s fashioning software that pulls data from all this hardware, hoping to gain an insight into industrial operations that was never possible in the past. The problem is that analyzing all this data is difficult, and the talent needed to make it happen is scarce. So GE is going shopping.

The company just paid an undisclosed amount to acquire a Berkeley-based machine learning startup called Wise.io. “There’s thirty of them,” GE CEO Jeff Immelt says gleefully of the Wise.io team, which is heavy with astrophysicists. “You match them with aviation data, and they’re killer.”

That’s great for GE and Immelt—and for their customers. But what if you’re a small software company trying to inject some AI into your operation? Wise.io was on a mission to “democratize AI”—to create tools anyone could use to build machine learning services—but it’s now disappearing into GE. And at a time when machine learning is the prime way of staying competitive in the software world, that’s a notable thing. In the battle for scarce AI talent, companies like GE have an overwhelming advantage.

Not everyone can go out and grab thirty AI-happy astrophysicists. And if you can’t do that, the talent pool becomes very small very quickly, since these machine learning techniques are so new and so different from standard software development. Even the big players talk about the tiny talent pool: Microsoft research chief Peter Lee says the cost of acquiring a top AI researcher is comparable to the cost of acquiring an NFL quarterback.

Over the past few years, other heavyweights have snapped up so many other AI startups you’ve never heard of. Twitter bought Madbits, Whetlab, and Magic Pony. Apple bagged Turi and Tuplejump. Salesforce acquired MetaMind and PredictionIO. Intel acquired Nervana. And that’s just a partial list. And it’s not just software and Internet companies doing the buying. It’s also giants like Samsung and GE that are incorporating AI into physical things. As soon as startups spring up to meet the AI need, they get sucked up into the maws of the hungriest, richest corporations.
https://www.wired.com/2016/11/giant-corporations-hoarding-worlds-ai-talent/
 

Shanahan needs data scientists, people with experience in deep neural networks and software like Google’s open source TensorFlow engine that drives this increasingly important form of AI. She’s trying to hire four data scientists who specialize in machine learning, and she can’t land even one.

Yeah, sure. They could come ask us here in Finland, the fixed-term workers being kicked out of the universities. When you apply for a data analyst position here, in practice it means a database-specialist job.

Can’t she just hire ordinary coders? Not really. Building this machine learning technology is quite different from standard software engineering. It’s less about coding and more about coaxing results from vast pools of data.
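A small example of that difference, using scikit-learn: nobody writes the classification rules here. The "program" is coaxed out of labelled data, and the real work lies in the data, the features and the evaluation rather than in these few lines.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Nobody hand-codes diagnosis rules: the model is fitted to labelled
# data, and the engineering effort goes into data and evaluation.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
print("held-out accuracy: %.3f" % model.score(X_te, y_te))
```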

Completely agree.
 
Researchers at Facebook have attempted to build a machine capable of reasoning from text – but their latest paper shows true machine intelligence still has a long way to go.

The idea that one day AI will dominate Earth and bring humans to their knees as it becomes super-intelligent is a genuine concern right now. Not only is it a popular topic in sci-fi TV shows such as HBO’s Westworld and UK Channel 4’s Humans – it features heavily in academic research too.

Research centers such as the University of Oxford’s Future of Humanity Institute and the recently opened Leverhulme Centre for the Future of Intelligence in Cambridge are dedicated to studying the long-term risks of developing AI.

The potential risks of AI mostly stem from its intelligence. The paper, which is currently under review for 2017's International Conference on Learning Representations, defines intelligence as the ability to predict.

“An intelligent agent must be able to predict unobserved facts about their environment from limited percepts (visual, auditory, textual, or otherwise), combined with their knowledge of the past.

“In order to reason and plan, they must be able to predict how an observed event or action will affect the state of the world. Arguably, the ability to maintain an estimate of the current state of the world, combined with a forward model of how the world evolves, is a key feature of intelligent agents.”

If a machine wants to predict an event, first it has to be able to keep track of its environment before it can learn to reason. It’s something Facebook has been interested in for a while – its bAbI project is “organized towards the goal of automatic text understanding and reasoning.”
http://www.theregister.co.uk/2016/11/22/facebooks_ai_paper_machines_cant_reason_yet/

The bAbI task – which is a collection of 20 question-answering datasets that test reasoning abilities – was the only exercise where the researchers didn’t encode the entities first, but the answers were given.

Although the agent correctly stored information about the objects and characters as entities in its memory slots, the bAbI task is simple. Statements are about four or five words long, and they all have the same structure, where the name of the character comes first and the action and object later.

If a pronoun like "he" or "she" is used, humans would know who the pronoun was referring to and would still be able to track an event – for example, when they read books. But EntNet would just be dumbfounded.

It doesn’t know how to connect the right pronouns to a character, and a name is locked as an entity. So the agent can answer questions correctly to a decent accuracy, but only if the relevant statements are structured in a certain way.

Although EntNet shows machines are far from developing automated reasoning, and can’t take over the world yet, it is a pretty nifty way of introducing memory into a neural network.

Storing information that can be accessed and updated in gates means that researchers don’t have to attach an external memory, unlike DeepMind’s differentiable neural computer.
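The gated memory update alluded to here works roughly as follows; this is my reading of the EntNet paper's update equations, simplified into NumPy.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def entnet_update(h, w, s, U, V, W):
    """One memory-update step in the spirit of the EntNet paper.
    h: (slots, d) memory contents, w: (slots, d) slot keys (e.g. entity
    names), s: (d,) encoding of the current sentence. Each slot opens
    its gate only if the sentence relates to its key or content."""
    gate = sigmoid(h @ s + w @ s)                    # (slots,) write gates
    cand = np.maximum(0.0, h @ U + w @ V + s @ W)    # candidate new content
    h = h + gate[:, None] * cand                     # gated write
    return h / (np.linalg.norm(h, axis=1, keepdims=True) + 1e-8)  # forgetting via normalisation

d, slots = 8, 4
rng = np.random.default_rng(0)
h = rng.normal(size=(slots, d)); w = rng.normal(size=(slots, d))
U, V, W = (rng.normal(size=(d, d)) for _ in range(3))
h = entnet_update(h, w, rng.normal(size=d), U, V, W)
print(h.shape)   # (4, 8): memory slots persist across sentences
```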
 
At a US Senate Committee on Commerce, Science, and Transportation hearing this week chaired by Senator Ted Cruz (R-TX), artificial intelligence experts were grilled on how to keep the US ahead of its competitors.

AI is advancing at an increasing pace. Although research began in the 1940s, recent advances in computational power and big data have propelled AI to the emerging field it is today.

As the biggest technology companies continue to pump money into AI research, promising to revolutionize everything from the automotive to the healthcare industries, governments around the world are starting to take notice.

Following the White House’s report, Preparing for the Future of Artificial Intelligence, Wednesday's meeting was the first Senate hearing on AI, and was given the title "The Dawn of Artificial Intelligence."
http://www.theregister.co.uk/2016/12/01/ai_senate_hearing/

http://www.commerce.senate.gov/publ...nounces-first-artificial-intelligence-hearing

The US should “compete on applications, but remain open and collaborative in research,” senators were told. Publishing research papers allows companies to “pool resources to make faster breakthroughs” and attracts the best talent, Brockman added.

Some companies are better at that than others. Research from Google and Facebook is often public, but there aren’t any papers from Apple or any that show how IBM Watson or Microsoft’s Cortana work.

All the witnesses on the panel agreed that the US government had to be willing to spend more money on AI research. Moore said some of his colleagues working on using AI to build better prosthetic hands were struggling to secure funding for their research, while industry offers “two- to three-million-dollar start-up packages” to switch from academia to industry.

Fei-Fei Li is the latest major researcher to be snared away from academia. Li was director of AI research at Stanford University and a prominent expert in computer vision, and has just left her position to head up Google’s Cloud platform.

Brockman warned that the government has a role to play in democratizing AI, to stop knowledge being locked away in the hands of a few major companies, and it should continue funding universities.

AI is also slowly creeping into the space industry, an area where the US prides itself on its leadership. “We need AI to explore the nooks and crannies of the Solar System,” but there is “no clear financial motive,” said Steve Chien, Senior Research Scientist and Group Supervisor at the Artificial Intelligence Group at NASA’s Jet Propulsion Laboratory.

Pushing AI doesn’t come without risks, however. The committee questioned the panel on imminent problems and long-term risks, referencing Elon Musk, who compared building AI to “summoning the demon.”
 

The rapid progress of AI in the last few years is largely the result of advances in deep learning and neural nets, combined with the availability of large datasets and fast GPUs. We now have systems that can recognize images with an accuracy that rivals that of humans. This will lead to revolutions in several domains such as autonomous transportation and medical image understanding. But all of these systems currently use supervised learning in which the machine is trained with inputs labeled by humans.

The challenge of the next several years is to let machines learn from raw, unlabeled data, such as video or text. This is known as unsupervised learning. AI systems today do not possess "common sense", which humans and animals acquire by observing the world, acting in it, and understanding the physical constraints of it. Some of us see unsupervised learning as the key towards machines with common sense. Approaches to unsupervised learning will be reviewed. This presentation assumes some familiarity with the basic concepts of deep learning.
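The supervised/unsupervised distinction in code form: a toy autoencoder (PyTorch assumed) whose training signal is the data itself, with no human-provided labels anywhere.

```python
import torch
import torch.nn as nn

# Toy illustration of unsupervised learning: an autoencoder squeezes
# unlabeled vectors through a bottleneck and reconstructs them. The
# "label" is the input itself, so no human annotation is needed.
model = nn.Sequential(
    nn.Linear(64, 8), nn.ReLU(),   # encoder: compress to 8 numbers
    nn.Linear(8, 64),              # decoder: reconstruct the input
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
data = torch.randn(1024, 64)       # stand-in for raw, unlabeled data

for epoch in range(200):
    recon = model(data)
    loss = ((recon - data) ** 2).mean()   # reconstruction error
    opt.zero_grad(); loss.backward(); opt.step()

print("reconstruction error:", loss.item())
```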
 


If and when some state actor builds an apparatus like this, for humanity's sake I hope it is firmly tied to a single central system, without the physical connections it would need to spread if it wanted to. That way, when it starts showing frightening signs, the AI can be blown to hell with a 10-megaton yield before the shit hits the fan.
 
Much has been written about how artificial intelligence (AI) will put white-collar workers out of a job eventually. Will robots soon be able to do what programmers do best -- i.e., write software programs? Actually, if you are or were a developer, you've probably already written or used software programs that can generate other software programs.
http://www.zdnet.com/article/developers-will-ai-run-you-out-of-your-job/

In the short term, AI will most likely help you be more productive and creative as a developer, tester, or dev team rather than making you redundant. Don't be afraid. Take advantage of this opportunity and you'll get an immediate return: It will give you more time to be more creative and to deliver more innovation -- which will help you save your job in the long term!
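A trivial example of the "programs that generate programs" point from the start of the article: plain metaprogramming, no AI involved, which is rather the point.

```python
# Trivial example of a program that writes a program: generate and run
# Python accessor functions from a field list. Ordinary metaprogramming,
# no AI required.
fields = ["name", "rank", "serial"]
src = "\n".join(
    f"def get_{f}(record):\n    return record[{f!r}]\n" for f in fields
)
namespace = {}
exec(src, namespace)   # compile the generated source

record = {"name": "Ada", "rank": "Capt", "serial": 1234}
print(namespace["get_rank"](record))   # -> Capt
```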
 
AI in biological warfare? PANIC. :cool:

Born of research in the Amazon forest, the Plantix mobile app is helping farmers on three continents quickly identify plant diseases using artificial intelligence.

For several years in the Brazilian rain forest, a team of young German researchers studied the emission and mitigation of greenhouse gases due to changing land use. The team’s analysis was yielding new knowledge, but the farmers they worked with weren’t interested in those findings. They wanted to know how to treat crops being ravaged by pathogens.
https://blogs.nvidia.com/blog/2016/12/13/ai-fights-plant-disease/
 