Towards Skynet - Artificial Intelligence in Warfare

Musk believes that it is better to try to get super-A.I. first and distribute the technology to the world than to allow the algorithms to be concealed and concentrated in the hands of tech or government elites—even when the tech elites happen to be his own friends, people such as Google founders Larry Page and Sergey Brin. “I’ve had many conversations with Larry about A.I. and robotics—many, many,” Musk told me. “And some of them have gotten quite heated. You know, I think it’s not just Larry, but there are many futurists who feel a certain inevitability or fatalism about robots, where we’d have some sort of peripheral role. The phrase used is ‘We are the biological boot-loader for digital super-intelligence.’ ” (A boot loader is the small program that launches the operating system when you first turn on your computer.) “Matter can’t organize itself into a chip,” Musk explained. “But it can organize itself into a biological entity that gets increasingly sophisticated and ultimately can create the chip.”


Musk has no intention of being a boot loader. Page and Brin see themselves as forces for good, but Musk says the issue goes far beyond the motivations of a handful of Silicon Valley executives.

“It’s great when the emperor is Marcus Aurelius,” he says. “It’s not so great when the emperor is Caligula.”
http://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x
 
Artificial intelligence these days is sold as if it were a magic trick. Data is fed into a neural net – or black box – as a stream of jumbled numbers, and voilà! It comes out the other side completely transformed, like a rabbit pulled from a hat.

That's possible in a lab, or even on a personal dev machine, with carefully cleaned and tuned data. However, it takes a lot, an awful lot, of effort to scale machine-learning algorithms up to something resembling a multiuser service – something useful, in other words.
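The "black box" above is less magic than arithmetic. A minimal sketch of the idea, with weights and inputs invented purely for illustration rather than taken from any real model:

```python
import math

def forward(x, weights, bias):
    """One artificial neuron: a weighted sum of inputs squashed to (0, 1)."""
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# A stream of jumbled numbers goes in...
features = [0.2, -1.3, 0.7]
# ...and a transformed number comes out. The weights don't explain themselves.
score = forward(features, weights=[0.5, -0.4, 1.1], bias=0.1)
print(score)
```

A production system stacks millions of these weights, which is exactly why the lab-to-service jump is hard: the trick is cheap to demo and expensive to operate.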

Interest in AI is soaring. There's a lot of hype, and the reality is that the technology is still very raw and difficult to implement in production. Making the jump from prototype to product introduces new challenges. Where does the training data come from? How is it stored, organized, sanitized, and prepared for teaching a system? How do people query the system? Who can query it? What about security: how is sensitive information managed and protected? How fast does my hardware need to be to deliver results? Where are the performance bottlenecks and concurrency hurdles?

It goes on and on. Suddenly, your AI code, your supposed crown jewels, is just a small cog in a very large and complex and buggy machine.
http://www.theregister.co.uk/2017/03/31/ai_infrastructure/
 
Innovation in artificial intelligence and robotics could force governments to legislate for quotas of human workers, upend traditional working practices and pose novel dilemmas for insuring driverless cars, according to a report by the International Bar Association.

The survey, which suggests that a third of graduate level jobs around the world may eventually be replaced by machines or software, warns that legal frameworks regulating employment and safety are becoming rapidly outdated.

The competitive advantage of poorer, emerging economies – based on cheaper workforces – will soon be eroded as robot production lines and intelligent computer systems undercut the cost of human endeavour, the study suggests.

While a German car worker costs more than €40 (£34) an hour, a robot costs between only €5 and €8 per hour. “A production robot is thus cheaper than a worker in China,” the report notes. Nor does a robot “become ill, have children or go on strike and [it] is not entitled to annual leave”.
https://www.theguardian.com/technol...governments-introduce-human-quotas-study-says
 
AI researchers at Chinese tech beast Baidu have attempted to teach virtual bots English in a two-dimensional maze-like world.

The study “paves the way for the idea of a family robot,” a smart robo-butler that can understand orders given by its owner, it is claimed. This ability to handle normal language is essential to creating machines with human-level intelligence, the researchers argue in a paper now available on arXiv.

Teaching bots language by describing the simulated world around them gives the software knowhow and knowledge that can be transferred from task to task – that's surprisingly hard to do correctly and a sign of general intelligence. The researchers compare their method to parents using language to coach a baby who is learning to walk and talk.

In the simulated 2D world, known as XWORLD, the baby is the agent and the parents are the teacher. The baby agent perceives the environment as a sequence of raw-pixel images and is given a command in English by a teacher.

“By exploring the environment, the agent learns simultaneously the visual representations of the environment, the syntax and semantics of the language, and how to navigate itself in the environment,” the paper said.
http://www.theregister.co.uk/2017/04/05/baidu_teaching_ai_bots_english/
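As a rough flavour of that setup (purely illustrative: the real XWORLD agent learns this behaviour from raw pixels and reward, nothing is hand-coded as below), command-following in a grid world reduces to moving toward a named object:

```python
# Hypothetical toy world: object names mapped to grid coordinates.
WORLD = {"apple": (2, 3), "door": (0, 1)}

def step_toward(agent, target_name):
    """Move the agent one cell toward the named target, x-axis first."""
    tx, ty = WORLD[target_name]
    ax, ay = agent
    dx = (tx > ax) - (tx < ax)
    dy = 0 if dx else (ty > ay) - (ty < ay)
    return (ax + dx, ay + dy)

pos = (0, 0)
for _ in range(5):
    pos = step_toward(pos, "apple")   # "go to the apple"
print(pos)  # (2, 3)
```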
 
Off to national defence courses, then, to turn our natural stupidity into artificial intelligence!

Risto Siilasmaa, chairman of the board of Nokia and of the Federation of Finnish Technology Industries, said at the Tech Day 2017 event in Helsinki that Finland needs a national defence course on artificial intelligence for its decision-makers. "I have been pushing the idea that leading politicians, decision-makers and civil servants should be packed off to a national defence course on AI," Siilasmaa said at the Tech Day 2017 event organized in Helsinki by Tekniikka&Talous magazine, the Federation of Finnish Technology Industries and VTT. "There should be a mechanism for forcing decision-makers to listen and learn something new." According to Siilasmaa, AI is advancing exponentially because it builds on the growth of computing power described by Moore's law. The course should, for example, teach what a "deep learning stack" means.
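The exponential framing is easy to make concrete. Assuming a Moore's-law-style doubling of compute every two years (the period here is an illustrative assumption, not a measured figure):

```python
def doublings(years, period=2.0):
    """Compound growth after `years`, doubling once every `period` years."""
    return 2 ** (years / period)

print(doublings(10))  # 32.0: a decade of two-year doublings is a 32x gain
```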

Siilasmaa said that the exponential development of AI will transform not only information technology but every field, from physics to lawyers and legislation. "Lawyers' advice in ordinary people's matters is correct 75-80 percent of the time. AI already reaches 80 percent." Siilasmaa noted that AI is a "black box" that can produce good or bad decisions, which is why its workings must be understood. In his view, the rapid pace of technological development forces a rethink of how both lawyers and engineers are trained.

"We all know this trend, but are we doing anything about it?" Siilasmaa challenged the audience.
http://www.kauppalehti.fi/uutiset/risto-siilasmaa-passittaisi-suomalaispaattajat-tekoalyn-maanpuolustuskurssille/neZ54CWd
 
Like most technologies, AI has a big problem: as it gains marketing appeal, so many people begin using the term that it can lose its meaning. Slapping an AI label on something gives it a magical sheen that marketers hope will translate into more sales.

This can happen with analytics if we’re not careful. What’s the difference between an analytics program that simply throws lots of computing power at a bunch of numbers, and an algorithm containing some kind of magic sauce?
http://www.theregister.co.uk/2017/0...bers_now_it_can_chew_over_what_they_mean_too/

“These AI systems can also work with unstructured data and ambiguity that differentiates it from the pure data mining and analytics,” she explains.

So it’s the unstructured part that the AI applies to. This makes sense, because the branches of AI gaining most traction today – machine learning and deep learning – typically have non-deterministic outputs. They’re "fuzzy", producing confidence scores relating to their inputs and outputs.
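"Fuzzy" in practice usually means something like a softmax layer: the model emits raw scores, which are normalized into confidences that sum to 1. A minimal sketch, with the scores invented for illustration:

```python
import math

def softmax(logits):
    """Normalize raw model scores into confidence values that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for three labels: positive, neutral, negative.
confidences = softmax([2.0, 0.5, -1.0])
print(confidences)  # roughly [0.79, 0.18, 0.04]: a confidence, not a verdict
```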

This makes AI-based analytics systems good at analysing the kind of data that has sprung up since the early 2000s, particularly social media posts. You’ll see companies applying sentiment analysis to it, where AI algorithms "read" Twitter feeds and try to work out whether ‘LOL bae dat #Beyonce tune be SIIIIICK’ is a good or a bad thing.
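Why slang is hard is easy to show. A naive lexicon-based scorer (the word lists below are toy assumptions) does fine on plain English and draws a blank on the tweet above:

```python
# Toy sentiment lexicons; real systems learn these associations instead.
POSITIVE = {"good", "great", "love", "sick"}   # slang: "sick" can be praise
NEGATIVE = {"bad", "awful", "hate"}

def naive_sentiment(text):
    """Count positive minus negative words: >0 good, <0 bad, 0 unknown."""
    words = {w.strip("#!.,'\"").lower() for w in text.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

print(naive_sentiment("I love this tune"))                       # 1: positive
print(naive_sentiment("LOL bae dat #Beyonce tune be SIIIIICK"))  # 0: no idea
```

"SIIIIICK" never appears in any dictionary, which is exactly why this job falls to learned, fuzzy models rather than word lists.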

“Combining analytics with AI can help workers in the finance sector by bringing unstructured and structured information together,” explains Landers. She cites wealth managers as an example. Using conventional structured analytics, a financial advisor might visualise a customer’s trades and investments over time, offering useful information about their investment yields and the level of attention they pay to their account. That might help when rebalancing their assets.

An AI system might go further, though, exploring emails written to the advisor, along with social media posts on the client’s account, and could even factor information about life events that the client is experiencing.

“Now you’re expanding that data set outside of the organization. Then you start layering in things like personality insights,” says Landers. So an AI-enhanced analytics system might surface client sentiments that they aren’t revealing to the advisor, or perhaps aren't even aware of. Sentiment analysis is easy when you’re gawping in disbelief over the latest Trump tweet. It’s more difficult when you’re preparing a thousand client meetings for your army of wealth management professionals that week.

A dynamic segmentation algorithm might use all of this data to understand just how aggressive that client is likely to be in their trades, and suggest ETFs and mutual funds accordingly. “It all helps the wealth manager build a better picture of their customer without spending hours poring over their communication history,” suggests Landers. “You’re trying to understand more layers of what that person’s relationship is with your organization and what it could be.”

Using machine learning in combination with analytics can also take companies beyond basic insights in other areas such as information security, explains Nick Patience, founder and research vice president of software at analyst firm 451 Research.

Applied to information security, conventional analytics can help IT leaders to find their biggest risk areas and work out where to invest their cybersecurity dollars, but this leaves them tracking a moving target.

“People are constantly buying security products to fix a problem or get a patch to update something after it’s already happened, which you have to do, but that’s table stakes,” he says. Machine learning is good at spotting things as they’re happening (or in the case of predictive analytics, beforehand). Their anomaly detection can surface the ‘unknown unknowns’ - problems that haven’t been seen before, but which could pose a material threat. In short, applying this branch of AI to security analytics could help you understand where attackers are going, rather than where they’ve been.
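The "unknown unknowns" idea rests on anomaly detection: flag whatever deviates from the learned normal, signature or no signature. A minimal statistical sketch (the login counts and the threshold are invented for illustration; production systems use far richer models than a z-score):

```python
import statistics

def find_anomalies(values, threshold=3.0):
    """Return points more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    return [v for v in values if stdev and abs(v - mean) / stdev > threshold]

# Hypothetical hourly login counts: a burst no signature was ever written for.
logins = [12, 9, 11, 10, 13, 8, 11, 250]
print(find_anomalies(logins, threshold=2.0))  # [250]
```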
 
Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.
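The opacity has a simple structural cause: end to end, the controller is nothing but learned arithmetic from pixels to commands. A deliberately tiny caricature, with one linear layer standing in for millions of weights and every number invented:

```python
def steering_angle(frame, weights):
    """Hypothetical end-to-end controller: a weighted sum over pixel values.
    The 'reason' for any turn is just this arithmetic repeated at scale,
    which is why no single weight explains a decision."""
    flat = [p for row in frame for p in row]
    return sum(w * p for w, p in zip(weights, flat))

frame = [[0.1, 0.9], [0.4, 0.2]]   # toy 2x2 "camera image"
weights = [0.3, -0.2, 0.5, 0.1]    # learned from driving data, not hand-written
print(steering_angle(frame, weights))
```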
https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/
 
MOVE over, Sherlock. UK police are trialling a computer system that can piece together what might have happened at a crime scene. The idea is that the system, called VALCRI, will be able to do the laborious parts of a crime analyst’s job in seconds, freeing them to focus on the case, while also provoking new lines of enquiry and possible narratives that may have been missed.

“Everyone thinks policing is about connecting the dots, but that’s the easy bit,” says William Wong, who leads the project at Middlesex University London. “The hard part is working out which dots need to be connected.”

VALCRI’s main job is to help generate plausible ideas about how, when and why a crime was committed, as well as who did it. It scans millions of police records, interviews, pictures, videos and more, to identify connections that it thinks are relevant. All of this is then presented on two large touchscreens for a crime analyst to interact with.
https://www.newscientist.com/articl...yses-police-data-to-learn-how-to-crack-cases/
 
IS:
Pizzeria's advertising sign turned out to be a spying device - a sobering reminder for everyone
Published: 14.5. 19:44
PRIVACY
Data protection is nowadays being watered down insidiously almost anywhere, at times testing the limits of the law. The latest example turned up right next door to Finland.
A Peppes Pizza restaurant in Oslo was found to have a street-facing advertising display that also spies on its viewers. The matter was originally reported by Reddit user forsaken75, whose real name, according to the Norwegian outlet Dinside, is Jeff Newman.

Newman says he noticed that the display's ads had crashed, exposing the process running behind them that identifies and classifies the people looking at the screen.

He stepped in front of the display himself and found it judging him, in real time, to be a young man wearing glasses. On top of that, the screen tracked where he was looking and how much he smiled.



http://imgur.com/m2Tn93L
– My intention was simply [...] to point out that people don't necessarily know this kind of demographic data is being collected about them merely by approaching an advert and looking at it, Newman writes on Reddit.

He added that the camera was not clearly visible at a quick glance.

"It's not in use anywhere"
The purpose is to target ads better to each viewer, ProntoTV, which supplied the facial recognition technology for the display, explained to Dinside. If, for example, a woman stands in front of the screen, she is shown healthier foods than a man would be.
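Stripped of the computer vision, the targeting described here is just a rule over the detector's guesses. A hypothetical sketch of such a rule (the labels and menu items are invented; ProntoTV's actual logic is not public):

```python
def pick_ad(profile):
    """Map a viewer profile guessed by the camera to an ad to display."""
    if profile.get("gender") == "female":
        return "salad_promo"       # the 'healthier foods' rule from the story
    if profile.get("age", 99) < 30:
        return "student_deal"
    return "family_pizza"

viewer = {"gender": "male", "age": 25, "glasses": True}
print(pick_ad(viewer))  # the viewer never learns this classification happened
```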

The data is kept anonymous and allegedly cannot be used to identify individual people, notes the Naked Security blog, which covered the case.

It is a pilot project, for which ProntoTV claims to have received permission from Datatilsynet, the Norwegian data protection authority. But according to a representative of the agency, the tracking may well involve problems and could even be illegal.

ProntoTV is owned by the Swedish company ZetaDisplay AB, one of whose subsidiaries is ZetaDisplay Finland. Its customers include Veikkaus, Tikkurila and Starkki.

Jens Helin, CEO of ZetaDisplay Finland, is not familiar with the details of the test conducted in Norway, but stresses that they have not done anything similar in Finland.

– It's not in use anywhere. There has been some interest in it, but it has its own challenges. There is a remarkably delicate process to go through before this kind of tracking is justified from the end customer's point of view, Helin explains to IS Digitoday.

In Norway, the question now is how the tracking was implemented. In the pizzeria's case, the tracking may not have been disclosed clearly enough to the people viewing the billboard.

Similar technology has, however, at least been tested in Finland. Affecto has trialled people-recognizing advertising displays in some stores of the electronics chain Power, Helsingin Sanomat reports.
Tuomas Linnake
http://www.is.fi/digitoday/tietoturva/art-2000005208257.html
In urban warfare you scatter cameras all around and teach a computer program the rank insignia, so it can gossip about enemy movements: a bald major and a captain sporting a porn-star moustache have been spotted on Asemakatu, while five jaegers and one sergeant are moving along Kauppakatu.
 
There's clear Sherlock in this. The positive part is the "connects the dots and builds a scenario" bit. The questionable part is the one that scans who knows which archives. At its worst, that's just a digitized "line up the usual suspects", where a suitable culprit would be fetched - perhaps the most likely one, but by no means the most certain.
Whether questioning the likely offenders is still the most effective method today, and whether a habitual criminal's behaviour really is that repetitive, I can't say. Not being a police officer myself...
 
The questionable part is the one that scans who knows which archives.

Perhaps the fear here is that we recognize the value of certain archives, while to an AI the same archive can be just as valuable as one we don't even recognize as being valuable. But what they are doing at Scotland Yard is the same thing the Royal Free is doing with NHS data and similar archives.
 

Google's use of Brits' medical records to train an AI and treat people was legally "inappropriate," says Dame Fiona Caldicott, the National Data Guardian at the UK's Department of Health.

In April 2016 it was revealed the web giant had signed a deal with the Royal Free Hospital in London to build an artificially intelligent application called Streams, which would analyze patients' records and identify those who had acute kidney damage.

As part of the agreement, the hospital handed over 1.6 million sets of NHS medical files to DeepMind, Google's highly secretive machine-learning nerve center. However, not every patient was aware that their data was being given to Google to train the Streams AI model. And the software was supposed to be used only as a trial – an experiment with software-driven diagnosis – yet it was ultimately used to detect kidney injuries in people and alert clinicians that they needed treatment.

Dame Caldicott has told the hospital's medical director Professor Stephen Powis that he overstepped the mark: it's one thing to create and test an application, it's another thing entirely to use in-development code to treat people. Proper safety trials must be carried out for medical systems, she said.

"It is my view, and that of my panel, that the purpose for the transfer of 1.6 million identifiable patient records to Google DeepMind was for the testing of the Streams application, and not for the provision of direct care to patients," she wrote in a letter dated February, which was leaked to Sky News on Monday.

The pact between the hospital and Google raised eyebrows at the time, but was sold as a legal way to develop apps using sensitive data. It now appears that the US tech goliath and the Royal Free blew it.

"This letter shows that Google DeepMind must know it had to delete the 1.6 million patient medical records it should never have had in the first place," said Phil Booth, coordinator for medical privacy pressure group medConfidential.
http://www.theregister.co.uk/2017/05/16/google_deepmind_using_uk_medical_records/
 

An outfit called Aurora Flight Sciences is trumpeting the fact that one of its robots has successfully landed a simulated Boeing 737.

Aviation-savvy readers may well shrug upon learning that news, because robots – or at least auto-landing systems – land planes all the time and have done so for decades.

Aurora's excitement is justified by the fact its robot sits in the co-pilot's seat and used various protuberances to wield the simulator's physical controls.

Which is just what US military DARPA wants to see under its Aircrew Labor In-Cockpit Automation System (ALIAS) program. The idea behind ALIAS is that military air crew can often find themselves with a lot to do under very stressful circumstances, but that automating their jobs with software and avionics is going to be very costly and time-consuming. Military craft are also likely to find themselves heading for destinations that lack ground-based augmentation systems to facilitate automated landings.

DARPA's thinking therefore turned to systems that could interact with crew and take advantage of existing on-board automation, where “easy-to-use touch and voice interfaces would facilitate supervisor-ALIAS interaction.”

Which is what Aurora has built, in the form of a rig that uses “in-cockpit machine vision, robotic components to actuate the flight controls, an advanced tablet-based user interface, speech recognition and synthesis” to do the job.

Aurora's done this stuff before in actual flight, but for light aircraft. Simulating a 737 landing gets it closer to ALIAS' goal of adding a helping hand to crews of large military aircraft.

As ever, this stuff's a long way from being ready to fly. Which is just as well, given recent airborne shenanigans and perpetually-rubbish IoT security.
http://www.theregister.co.uk/2017/05/17/robot_lands_a_737_iby_handi_on_a_dare_from_darpa/
 
Brain-inspired computing is having a moment. Artificial neural network algorithms like deep learning, which are very loosely based on the way the human brain operates, now allow digital computers to perform such extraordinary feats as translating language, hunting for subtle patterns in huge amounts of data, and beating the best human players at Go.

But even as engineers continue to push this mighty computing strategy, the energy efficiency of digital computing is fast approaching its limits. Our data centers and supercomputers already draw megawatts—some 2 percent of the electricity consumed in the United States goes to data centers alone. The human brain, by contrast, runs quite well on about 20 watts, which represents the power produced by just a fraction of the food a person eats each day. If we want to keep improving computing, we will need our computers to become more like our brains.

Hence the recent focus on neuromorphic technology, which promises to move computing beyond simple neural networks and toward circuits that operate more like the brain’s neurons and synapses do. The development of such physical brainlike circuitry is actually pretty far along.
http://spectrum.ieee.org/computing/hardware/we-could-build-an-artificial-brain-right-now
 
Artificial intelligence technology is accelerating forward at a blistering pace, and a trio of scientists are calling for more accountability and transparency in AI, before it's too late.

In their paper, the UK-based researchers say existing rules and regulations don't go far enough in limiting what AI can do – and recommend that robots be held to the same standards as the humans who make them.

There are a number of issues, say the researchers from the Alan Turing Institute, that could lead to problems down the line, including the diverse nature of the systems being developed and a lack of transparency about the inner workings of AI.

"Systems can make unfair and discriminatory decisions, replicate or develop biases, and behave in inscrutable and unexpected ways in highly sensitive environments that put human interests and safety at risk," the team reports in their paper.

In other words: how do we know we can trust AI?
https://www.sciencealert.com/scient...ound-rules-to-stop-the-march-of-creepy-robots
 
The best artificial intelligence still has trouble visually recognizing many of Homer Simpson’s favorite behaviors such as drinking beer, eating chips, eating doughnuts, yawning, and the occasional face-plant. Those findings from DeepMind, the pioneering London-based AI lab, also suggest the motive behind why DeepMind has created a huge new dataset of YouTube clips to help train AI on identifying human actions in videos that go well beyond “Mmm, doughnuts” or “Doh!”
http://spectrum.ieee.org/tech-talk/...s-ai-has-trouble-seeing-homer-simpson-actions
 
IS:
Pizzeria's advertising sign turned out to be a spying device - a sobering reminder for everyone

In urban warfare you scatter cameras all around and teach a computer program the rank insignia, so it can gossip about enemy movements: a bald major and a captain sporting a porn-star moustache have been spotted on Asemakatu, while five jaegers and one sergeant are moving along Kauppakatu.

And here's a bit of pseudocode refinement for the killing machine, released under a copyleft licence:

/* Copyleft, steal with pride */
if (kohdehenkilo.hasViikset()) {
    kohdehenkilo.setLuokitus(Luokitus.RESERVILAINEN);
}

edit: improved the code :D
 
The United States appears poised to heighten scrutiny of Chinese investment in Silicon Valley to better shield sensitive technologies seen as vital to U.S. national security, current and former U.S. officials tell Reuters.

Of particular concern is China's interest in fields such as artificial intelligence and machine learning, which have increasingly attracted Chinese capital in recent years. The worry is that cutting-edge technologies developed in the United States could be used by China to bolster its military capabilities and perhaps even push it ahead in strategic industries.

The U.S. government is now looking to strengthen the role of the Committee on Foreign Investment in the United States (CFIUS), the inter-agency committee that reviews foreign acquisitions of U.S. companies on national security grounds.

An unreleased Pentagon report, viewed by Reuters, warns that China is skirting U.S. oversight and gaining access to sensitive technology through transactions that currently don't trigger CFIUS review. Such deals would include joint ventures, minority stakes and early-stage investments in start-ups.

"We're examining CFIUS to look at the long-term health and security of the U.S. economy, given China's predatory practices" in technology, said a Trump administration official, who was not authorized to speak publicly.

Defense Secretary Jim Mattis weighed into the debate on Tuesday, calling CFIUS "outdated" and telling a Senate hearing: "It needs to be updated to deal with today's situation."
http://www.reuters.com/article/us-usa-china-artificialintelligence-idUSKBN1942OX
 
Once cloud was accepted as something with various meanings, none of which our customers understood, the IT industry searched for the next big buzzword. It came up with not one but three terms often used interchangeably by people who don't know any better – bots, artificial intelligence and machine learning.

This is great news for software developers, who can write some code and slap an AI label on it – right?

Maybe we're not giving users enough credit here. You would hope they understood a chatbot returning an answer match from an FAQ set is different to facial recognition or sentiment analysis. An interesting thing about the rise of the bots is the pre-conditioning of users that had to happen first.

We're not a 1968 audience watching HAL in 2001: A Space Odyssey singing Daisy (based on an IBM 704 computer singing the same song). Instead, we've grown up alongside technology to the point where instant messaging, search algorithms and targeted advertising are a normal part of our day. We're so used to communicating in text that it's not a big, scary leap to think a computer is answering us back instead of another human. In our instant world, we expect information when it's convenient and we don't care who serves it – as long as it's accurate.
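The FAQ-matching chatbot really is the unglamorous end of the spectrum. A minimal sketch of the idea (the questions and answers are invented), matching on word overlap alone:

```python
# Toy FAQ; a real deployment would use embeddings or at least TF-IDF.
FAQ = {
    "what are your opening hours": "We're open 9 to 17, Monday to Friday.",
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
}

def faq_bot(question):
    """Return the answer whose stored question shares the most words."""
    asked = set(question.lower().strip("?!. ").split())
    best, overlap = None, 0
    for known, answer in FAQ.items():
        shared = len(asked & set(known.split()))
        if shared > overlap:
            best, overlap = answer, shared
    return best or "Sorry, I don't know that one."

print(faq_bot("How do I reset my password?"))
```

Nothing here recognizes faces or reads sentiment, which is exactly the distinction users are hoped to grasp.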

Inside our industry, we've been putting the machines to work for a while now. IT pros pieced together batch files to build machines back in DOS and Windows 3.1 days. Imaging and automated application deployment used to be the bastion of the enterprise and is now common (and affordable) in SMB environments. We understood automation, variables and APIs. Then we unleashed them on the world. The world was ready.
http://www.theregister.co.uk/2017/07/14/rise_of_the_business_bots/
 