Towards Skynet - Artificial Intelligence in Warfare

If and when some state actor builds an apparatus like that, for humanity's sake I hope it is firmly tied to a single central system, without the physical connections it would need to spread if it wanted to. That way, the moment it starts showing frightening signs, the AI in question can be blown to hell with a 10-megaton blast before the shit hits the fan.
It's not just about that, but also about the fact that there will always be anarchists and doomsday enthusiasts who would try to set a super-AI free, in modified form of course. Then again, you could always create a counter-AI to fight the first one.

Maybe we should create many different AIs? Millions? Could 999 million AIs keep 1 million misaligned AIs in check without major damage to society?

So like, if creating one AI is a terrible mistake, is creating a million AIs an even more terrible mistake, or exactly the right way to keep the balance? :D

See also Person of Interest (TV series)
 
Drone boats belonging to the U.S. Navy have begun learning to work together like a swarm with a shared hive mind. Two years ago, they would have individually reacted to possible threats by all swarming over like a chaotic group of kids learning to play soccer for the first time. Now the drone boats have shown that they can cooperate intelligently as a team to defend a harbor area against intruders.

The U.S. Office of Naval Research (ONR) held its latest robot swarm demonstration in the lower Chesapeake Bay off the Virginia coast for about a month. Four drone boats showed off their improved control and navigation software by patrolling an area of 4 nautical miles by 4 nautical miles.

If they spotted a possible threat, the swarm of roboboats would collectively decide which of them would go track and trail the intruder vessel. In the future, such drone boats could act as a first line of defense by scouting and screening for larger Navy warships manned by sailors.
http://spectrum.ieee.org/automaton/...avy-drone-boat-swarm-practices-harbor-defense

Each of the drone boats was controlled by a system called Control Architecture for Robotic Agent Command and Sensing (CARACaS). The software enables each individual drone boat to plot its own path to reach certain destinations and avoid collisions.
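ONR has not published the internals of CARACaS, so the following is only a toy sketch (Python, with invented positions and tuning constants) of the general "steer toward a waypoint while being repelled by nearby obstacles" idea that any such planner has to solve in some form:

```python
import numpy as np

def potential_field_step(pos, goal, obstacles, step=1.0, avoid_radius=20.0):
    """One steering update: attraction toward the goal plus repulsion from obstacles."""
    to_goal = goal - pos
    direction = to_goal / (np.linalg.norm(to_goal) + 1e-9)
    for obs in obstacles:
        away = pos - obs
        dist = np.linalg.norm(away)
        if dist < avoid_radius:
            # Repulsion grows as the obstacle gets closer.
            direction += (avoid_radius - dist) / avoid_radius * away / (dist + 1e-9)
    direction /= np.linalg.norm(direction) + 1e-9
    return pos + step * direction

# Toy run: a boat at the origin heads for (100, 0) past another vessel at (50, 5).
pos, goal = np.array([0.0, 0.0]), np.array([100.0, 0.0])
obstacles = [np.array([50.0, 5.0])]
for _ in range(120):
    pos = potential_field_step(pos, goal, obstacles)
print(np.round(pos, 1))   # ends up near the goal after skirting the obstacle
```

A real maritime planner would of course add vessel dynamics, rules of the road and sensor uncertainty on top of this, but the waypoint-plus-avoidance loop is the basic shape of the problem.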

The CARACaS software has gone through a major upgrade since those earlier demonstrations, said Robert Brizzolara, an ONR program officer. In 2014, the software already enabled the drone boats to share any data their radar and cameras collected about potential intruders. But ONR engineers have now expanded the library of behaviors that can direct the actions of the swarm.

“Now they’re operating as a team rather than individuals,” Brizzolara said.

Human supervisors standing on shore were able to watch the drone boat swarm and issue broad mission orders if needed. But for the most part, they simply assigned a general mission to the drone boats and let the robotic CARACaS software do its thing.

Another improvement was the swarm’s ability to identify intruder vessels as possible threats. Such “automated vessel recognition” currently relies on a library of images showing certain boats or ships that could be potential threats. The ONR declined to say if the classification capabilities of the CARACaS software used specific AI techniques such as machine learning algorithms for image recognition.
 
For those who want to learn more about coding AI


Deep Learning — the use of neural networks with modern techniques to tackle problems ranging from computer vision to speech recognition and synthesis — is certainly a current buzzword. However, at the core is a set of powerful methods for organizing self-learning systems. Multi-layer neural networks aren’t new, but there is a resurgence of interest primarily due to the availability of massively parallel computation platforms disguised as video cards.

The problem is getting started in something like this. There are plenty of scholarly papers that can be hard to wade through. Or you can grab some code from GitHub and try to puzzle it out.
http://hackaday.com/2016/12/21/practical-deep-learning/
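For a concrete taste of what a multi-layer neural network actually is, here is a minimal, self-contained sketch in Python/NumPy: a two-layer network trained by backpropagation on the classic XOR problem. It is purely illustrative; real deep-learning work uses GPU frameworks, exactly the "video card" platforms mentioned above.

```python
import numpy as np

# XOR: a tiny problem that a single-layer network cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))    # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))    # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass through the two layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0, keepdims=True)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```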
 
Issuing warnings and handling automagic countermeasures is one of the prerequisites for a war-fighting machine intelligence, because human brains are too slow to keep up. Below is an example of why.

Just a few weeks ago, we published a report about how Tesla’s new radar technology for the Autopilot is already proving useful in some potentially dangerous situations. We now have a new piece of evidence that is so spectacularly clear that it’s worth updating that report.

The video of an accident on the highway in the Netherlands caught on the dashcam of a Tesla Model X shows the Autopilot’s forward collision warning predicting an accident before it could be detected by the driver.

With the release of Tesla’s version 8.0 software update in September, the automaker announced a new radar processing technology that was directly pushed over-the-air to all its vehicles equipped with the first generation Autopilot hardware.

One of the main features enabled by the new radar processing capacity is the ability for the system to see ahead of the car in front of you and basically track two cars ahead on the road. The radar is able to bounce underneath or around the vehicle in front of the Tesla Model S or X and see where the driver potentially cannot because the leading vehicle is obstructing the view.

That’s demonstrated clearly in this real world situation on the Autobahn today.
https://electrek.co/2016/12/27/tesla-autopilot-radar-technology-predict-accident-dashcam/
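Stripped to its core, the warning logic is a simple calculation: given the range and closing speed the radar reports for a vehicle up ahead (even two cars ahead), check whether the time to collision falls below a threshold. The sketch below is not Tesla's actual algorithm; the thresholds are invented for illustration.

```python
def time_to_collision(range_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if nothing changes; infinite if we are not closing."""
    if closing_speed_mps <= 0:
        return float("inf")
    return range_m / closing_speed_mps

def forward_collision_response(range_m: float, closing_speed_mps: float,
                               warn_threshold_s: float = 2.5,
                               brake_threshold_s: float = 1.0) -> str:
    """Map time to collision onto escalating actions (thresholds are made up)."""
    ttc = time_to_collision(range_m, closing_speed_mps)
    if ttc < brake_threshold_s:
        return "automatic emergency braking"
    if ttc < warn_threshold_s:
        return "audible forward collision warning"
    return "no action"

# Example: the car two ahead brakes hard; closing speed jumps to 20 m/s at 40 m.
print(forward_collision_response(range_m=40.0, closing_speed_mps=20.0))
# -> "audible forward collision warning" (time to collision = 2.0 s)
```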

 
More on AI signal processing and generating warnings.

Driving your car until it breaks down on the road is never anyone’s favorite way to learn the need for routine maintenance. But preventive or scheduled maintenance checks often miss many of the problems that can come up. An Israeli startup has come up with a better idea: Use artificial intelligence to listen for early warning signs that a car might be nearing a breakdown.

The service of 3DSignals, a startup based in Kefar Sava, Israel, relies on the artificial intelligence technique known as deep learning to understand the noise patterns of troubled machines and predict problems in advance. 3DSignals has already begun talking with leading European automakers about possibly using the deep learning service to detect possible trouble both in auto factory machinery and in the cars themselves. The startup has even chatted with companies about using their service to automatically detect problems in future taxi fleets of driverless cars.

“If you’re a passenger in a driverless taxi, you only care about getting to your destination and you’re not reporting maintenance problems,” says Yair Lavi, a co-founder and head of algorithms for 3DSignals. “So actually having the 3DSignals solution in autonomous taxis is very interesting to the owners of taxi fleets.”
http://spectrum.ieee.org/automaton/...g-ai-listens-to-machines-for-signs-of-trouble
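3DSignals has not published its pipeline, but the generic recipe for this kind of acoustic monitoring is well known: turn the microphone signal into spectrogram features and train a classifier on examples of healthy and failing machines. In the sketch below a random forest stands in for the deep network described in the article, and everything, including the "recorded" clips, is invented for illustration.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.ensemble import RandomForestClassifier

def sound_features(samples: np.ndarray, sample_rate: int = 16000) -> np.ndarray:
    """Summarize a short audio clip as average log energy per frequency band."""
    freqs, times, sxx = spectrogram(samples, fs=sample_rate, nperseg=512)
    return np.log1p(sxx).mean(axis=1)   # one value per frequency bin

# Hypothetical training clips: healthy bearings vs. worn bearings with a whine.
rng = np.random.default_rng(1)
healthy = [rng.normal(size=16000) for _ in range(50)]
worn = [rng.normal(size=16000) + np.sin(np.arange(16000) * 0.5) for _ in range(50)]

X = np.array([sound_features(clip) for clip in healthy + worn])
y = np.array([0] * len(healthy) + [1] * len(worn))      # 1 = trouble ahead

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

new_clip = rng.normal(size=16000) + np.sin(np.arange(16000) * 0.5)
print("maintenance alert" if clf.predict([sound_features(new_clip)])[0] else "ok")
```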
 
How Google's translator works

Google’s researchers think their system achieves this breakthrough by finding a common ground whereby sentences with the same meaning are represented in similar ways regardless of language – which they say is an example of an “interlingua”. In a sense, that means it has created a new common language, albeit one that’s specific to the task of translation and not readable or usable for humans.

Cho says that this approach, called zero-shot translation, still doesn’t perform as well as the simpler approach of translating via an intermediary language. But the field is progressing rapidly, and Google’s results will attract attention from the research community and industry.

“I have no doubt that we will be able to train a single neural machine-translation system that works on 100 plus languages in the near future,” says Cho.

Google Translate currently supports 103 languages and translates over 140 billion words every day.

So will human translators soon find themselves out of work? Neural translation technology can already do a good job for simple texts, says Andrejs Vasiļjevs, co-founder of technology firm Tilde, which is developing neural translation services between Latvian or Estonian and English. But a good human translator understands the meaning of the source text, as well as its stylistic and lexical characteristics, and can use that knowledge to give a more accurate translation.

“To match this human ability, we have to find a way to teach computers some basic world knowledge, as well as knowledge about the specific area of translation, and how to use this knowledge to interpret the text to be translated,” says Vasiļjevs.
https://www.newscientist.com/articl...i-invents-its-own-language-to-translate-with/
 
“Fake news” vexed the media classes greatly in 2016, but the tech world perfected the art long ago. With “the internet” no longer a credible vehicle for Silicon Valley’s wild fantasies and intellectual bullying of other industries – the internet clearly isn’t working for people – “AI” has taken its place.

Almost everything you read about AI is fake news. The AI coverage comes from a media willing itself into a mind of a three year old child, in order to be impressed.

For example, how many human jobs did AI replace in 2016? If you gave professional pundits a multiple choice question listing these three answers: 3 million, 300,000 and none, I suspect very few would choose the correct answer, which is of course “none”.
http://www.theregister.co.uk/2017/01/02/ai_was_the_fake_news_of_2016/
 
Starts today.

In 2015, several of the world’s top poker players faced down a supercomputer-powered artificial intelligence named Claudico during a grueling 80,000 hands of no-limit Texas Hold’em. Beginning tomorrow, a rematch of humans versus AI will test whether humanity can hold its own against an even more capable challenger.

The human margin of victory from the past event was not large enough to statistically prove whether humans or the Claudico AI were really the better poker players. This year’s rematch features four human poker pros playing for a prize pot of $200,000 against an AI called Libratus in the “Brains Vs. Artificial Intelligence: Upping the Ante” event being held at the Rivers Casino in Pittsburgh starting on 11 January.
http://spectrum.ieee.org/automaton/.../meet-the-new-ai-challenging-human-poker-pros

Game-playing AI has found solutions to some versions of poker. But heads-up, no-limit Texas Hold’em represents an especially complex challenge with 10^160 possible plays at different stages of the game (possibly more than the number of atoms in the universe). Such complexity exists because this two-player version of poker allows for unrestricted bet sizes.

To deal with such a game, many AIs rely on a technique called counterfactual regret minimization (CFR). Typical CFR algorithms try to solve games such as poker through several steps at each decision point. First, they come up with counterfactual values representing different game outcomes. Second, they apply a regret minimization approach to see which strategy leads to the best outcome. And third, they typically average the most recent strategy with all past strategies.

The challenge with the CFR approach is that no supercomputer could solve for all the different game outcomes at any given point in heads-up, no-limit Texas Hold’em. Instead, CFR algorithms usually solve simplified versions of poker and use the resulting strategies to imperfectly play the full versions of poker games. Even these simplified “game trees” must map out many different paths branching out from each decision point.
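To make the regret-minimization step above less abstract, here is a toy Python sketch of regret matching, the core update that CFR applies at every decision point. It is not Libratus's code; it just plays repeated rock-paper-scissors against a fixed, biased opponent and shows the average strategy converging toward the best response (mostly paper).

```python
import random

ACTIONS = ["rock", "paper", "scissors"]
# PAYOFF[a][b] = payoff for playing a against b
PAYOFF = {
    "rock":     {"rock": 0, "paper": -1, "scissors": 1},
    "paper":    {"rock": 1, "paper": 0, "scissors": -1},
    "scissors": {"rock": -1, "paper": 1, "scissors": 0},
}

def regret_matching(cum_regret):
    """Turn accumulated positive regrets into a mixed strategy."""
    positive = {a: max(r, 0.0) for a, r in cum_regret.items()}
    total = sum(positive.values())
    if total == 0:
        return {a: 1.0 / len(ACTIONS) for a in ACTIONS}
    return {a: r / total for a, r in positive.items()}

def train(iterations=100_000):
    cum_regret = {a: 0.0 for a in ACTIONS}
    strategy_sum = {a: 0.0 for a in ACTIONS}
    for _ in range(iterations):
        strategy = regret_matching(cum_regret)
        for a in ACTIONS:
            strategy_sum[a] += strategy[a]
        my_action = random.choices(ACTIONS, weights=[strategy[a] for a in ACTIONS])[0]
        # A fixed, exploitable opponent: plays rock half the time.
        opp_action = random.choices(ACTIONS, weights=[0.5, 0.25, 0.25])[0]
        # Counterfactual values: what each action *would* have earned here.
        cf_value = {a: PAYOFF[a][opp_action] for a in ACTIONS}
        # Regret = how much better each alternative was than the action taken.
        for a in ACTIONS:
            cum_regret[a] += cf_value[a] - cf_value[my_action]
    total = sum(strategy_sum.values())
    return {a: round(s / total, 3) for a, s in strategy_sum.items()}

if __name__ == "__main__":
    print(train())  # the average strategy leans heavily toward "paper"
```

Libratus runs a far more elaborate variant of this bookkeeping over an abstracted game tree of no-limit hold'em, but the regret accounting above is the conceptual core.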

https://www.riverscasino.com/pittsburgh/BrainsVsAI

https://www.twitch.tv/brains_vs_ai
 
The European parliament has urged the drafting of a set of regulations to govern the use and creation of robots and artificial intelligence, including a form of “electronic personhood” to ensure rights and responsibilities for the most capable AI.

In a 17-2 vote, with two abstentions, the parliament’s legal affairs committee passed the report, which outlines one possible framework for regulation.

“A growing number of areas of our daily lives are increasingly affected by robotics,” said the report’s author, Luxembourgish MEP Mady Delvaux. “In order to address this reality and to ensure that robots are and will remain in the service of humans, we urgently need to create a robust European legal framework”.

The proposed legal status for robots would be analogous to corporate personhood, which allows firms to take part in legal cases both as the plaintiff and respondent. “It is similar to what we now have for companies, but it is not for tomorrow,” said Delvaux. “What we need now is to create a legal framework for the robots that are currently on the market or will become available over the next 10 to 15 years.”

The broad report identifies a number of areas in need of specific oversight from the European Union, including:

  • The creation of a European agency for robotics and AI;
  • A legal definition of “smart autonomous robots”, with a system of registration of the most advanced of them;
  • An advisory code of conduct for robotics engineers aimed at guiding the ethical design, production and use of robots;
  • A new reporting structure for companies requiring them to report the contribution of robotics and AI to the economic results of a company for the purpose of taxation and social security contributions;
  • A new mandatory insurance scheme for companies to cover damage caused by their robots.
https://www.theguardian.com/technology/2017/jan/12/give-robots-personhood-status-eu-committee-argues
 
Posting this here because the video shows how an AI behaves when building things. SpiderFab is designed as a construction bot for an orbital shipyard.

 
Why AIs don't chat with us the way we're used to seeing in SF books.

...

Computers don’t have brains – they can’t think and they lack common sense – and they don’t understand and learn language in the same way humans do. Frederick Jelinek, a prominent natural language processing researcher, famously said: “Every time I fire a linguist, the performance of the speech recognizer goes up.” It all boils down to clever engineering, and coming up with the most effective ways to model human communication.

“The main problem is that humans are very good at chatting; they’re good at talking about things that don’t have a specific goal. But with machines it’s harder: you wouldn’t say to Cortana where’s the best place to get a coffee, you’d say where is the nearest cafe?” Black told The Register.

“Humans ask Cortana or Alexa a very targeted question and it gives an answer. It’s not necessarily fun, it’s not making people want to use this ... and they have no affiliation to any particular personal assistant. They need to have a more natural conversation, so people can build a rapport with the machine and feel more content with it.”

It helps to train your model on large datasets containing real conversations such as online forums or movie scripts. Computers then use pattern recognition to learn how certain combinations of words are associated with one another, so they can match an incoming utterance with an appropriate response.
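In its simplest form, that pattern matching against real conversations is just retrieval: vectorize the incoming utterance, find the most similar prompt in a corpus of past dialogue, and return the response that followed it. A toy sketch (the three-line "corpus" is obviously invented for the example):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented mini-corpus of (prompt, response) pairs, e.g. scraped from forums.
corpus = [
    ("where is the nearest cafe", "There is a coffee shop two blocks north."),
    ("what is the weather like today", "Cloudy with a chance of rain."),
    ("tell me a joke", "Why did the robot cross the road? It was programmed to."),
]

prompts = [p for p, _ in corpus]
vectorizer = TfidfVectorizer().fit(prompts)
prompt_vectors = vectorizer.transform(prompts)

def reply(utterance: str) -> str:
    """Return the canned response whose prompt is most similar to the input."""
    sims = cosine_similarity(vectorizer.transform([utterance]), prompt_vectors)
    return corpus[sims.argmax()][1]

print(reply("where's the best place to get a coffee"))
# -> "There is a coffee shop two blocks north."
```

As Black's point above suggests, this kind of lookup gives targeted answers but no memory, no world knowledge and no rapport, which is exactly the gap the researchers are describing.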

There’s a limit to how well that will work, however, Black says. To really push chatbots to become more human-like, it “needs to know an awful lot more about the world beyond referencing the weather. It needs to understand humans and predict how they will act, what they should do to build a useful relationship.”

Rudnicky agrees. “Humans come packed with experience about the world, and machines need that knowledge too. It needs to keep track of what’s going on rather than just focusing on content,” he said.
http://www.theregister.co.uk/2017/01/18/chatbot_battle/?page=2
 
Now leading researchers are finding that they can make software that can learn to do one of the trickiest parts of their own jobs—the task of designing machine-learning software.

In one experiment, researchers at the Google Brain artificial intelligence research group had software design a machine-learning system to take a test used to benchmark software that processes language. What it came up with surpassed previously published results from software designed by humans.

In recent months several other groups have also reported progress on getting learning software to make learning software. They include researchers at the nonprofit research institute OpenAI (which was cofounded by Elon Musk), MIT, the University of California, Berkeley, and Google’s other artificial intelligence research group, DeepMind.

If self-starting AI techniques become practical, they could increase the pace at which machine-learning software is implemented across the economy. Companies must currently pay a premium for machine-learning experts, who are in short supply.
https://www.technologyreview.com/s/603381/ai-software-learns-to-make-ai-software/
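The published systems use reinforcement learning or evolution to design networks, which is well beyond a snippet, but the crudest member of the same family of ideas is automated random search over model architectures. A toy sketch of that idea (scikit-learn, tiny search budget, everything chosen for brevity rather than realism):

```python
import random
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)

def random_architecture():
    """Draw a random small multi-layer-perceptron architecture."""
    n_layers = random.randint(1, 3)
    return {
        "hidden_layer_sizes": tuple(random.choice([16, 32, 64]) for _ in range(n_layers)),
        "activation": random.choice(["relu", "tanh"]),
        "alpha": 10 ** random.uniform(-5, -2),
    }

best_score, best_arch = 0.0, None
for _ in range(10):                      # tiny search budget for the example
    arch = random_architecture()
    model = MLPClassifier(max_iter=300, random_state=0, **arch)
    score = cross_val_score(model, X, y, cv=3).mean()
    if score > best_score:
        best_score, best_arch = score, arch

print(best_arch, round(best_score, 3))   # the best design the "designer" found
```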
 
AI research and technology may well have already reached a phase of exponential growth. After the chess masters fell, a Go champion was toppled surprisingly quickly in the grip of machine intelligence. Other civilian applications seem to hit the market daily.

It is only a matter of time before military applications start gaining ground: autonomous multicopters, robotic turrets, guidance and target designation systems, cyber robots and network warfare... Ahead lies a true technological leap on the edge of an arms gap.

https://futureoflife.org/open-letter-autonomous-weapons/

From Teuvo Kohonen's map we are moving into uncharted waters.

https://fi.wikipedia.org/wiki/Teuvo_Kohonen
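For readers who do not know the map in question: Teuvo Kohonen's self-organizing map is an unsupervised neural network that arranges high-dimensional data onto a 2-D grid so that nearby grid nodes end up representing similar inputs. A minimal NumPy sketch (toy data, no claims about efficiency):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((500, 3))                 # e.g. random RGB colours to organize

grid_w, grid_h = 10, 10
weights = rng.random((grid_w, grid_h, 3))   # one 3-D prototype per grid node
gx, gy = np.meshgrid(np.arange(grid_w), np.arange(grid_h), indexing="ij")

n_steps = 2000
for t in range(n_steps):
    lr = 0.5 * (1 - t / n_steps)                         # decaying learning rate
    radius = max(1.0, (grid_w / 2) * (1 - t / n_steps))  # shrinking neighbourhood
    x = data[rng.integers(len(data))]
    # Best matching unit: the node whose prototype is closest to the sample.
    dists = np.linalg.norm(weights - x, axis=2)
    bi, bj = np.unravel_index(dists.argmin(), dists.shape)
    # Pull the best match and its grid neighbours toward the sample.
    grid_dist2 = (gx - bi) ** 2 + (gy - bj) ** 2
    influence = np.exp(-grid_dist2 / (2 * radius ** 2))
    weights += lr * influence[..., None] * (x - weights)

# After training, nearby nodes hold similar colours: a 2-D "map" of the data.
print(weights.shape)
```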
 
Online poker will probably die at some point, once poker AI programs come within reach of the average Joe. In online chess this has been a problem for a while already, and there are correspondingly efforts to detect cheating there.

A Finnish professor's invention steamrolled the world's best poker players – "It felt like it could see my cards"
The casino's poker professionals matched their skills against a computer for three weeks. Now it is certain: humans can no longer hold their own against AI at the card table.

http://yle.fi/uutiset/3-9434687

Just two years ago the world's best poker players beat Libratus's predecessor, "Claudico".

Since then the Finn Tuomas Sandholm, a professor at Carnegie Mellon University, together with doctoral student Noam Brown, made a series of improvements to the program.

Thanks to this tuning, "Libratus" was programmed so that it can learn many things on its own.

– This was such a complex pattern that even researchers working on artificial intelligence had not grasped it, Sandholm said in an article published in the Frankfurter Allgemeine.
 

Tuomas Sandholm and Libratus's winning hand have received widespread media attention.

An artificial intelligence system developed at Carnegie Mellon University has just racked up $1,766,250 worth of chips against four of the world’s top professional players after a marathon 20-day poker binge.

Tuomas Sandholm, professor of computer science at CMU, called the poker win “the last frontier” in the creeping series of victories that intelligent machines have recorded in human games, dating back to an IBM supercomputer’s defeat of chess grandmaster Garry Kasparov 20 years ago.

“This is not just about poker. The algorithms we have developed . . . can take any imperfect information situation and output a good strategy for that setting,” said Mr Sandholm, who developed the system with Noam Brown, a PhD student... The technology could be used to compete against humans in business negotiations, military strategy, and the high-frequency trading systems used by the biggest banks, he said.

https://www.ft.com/content/e9f4aae2-e7d2-11e6-893c-082c54a7f539?ftcamp=published_links%2Frss%2Fworld%2Ffeed%2F%2Fproduct
 
An interesting take on a problem of our day: "fake news."

Two AI researchers are behind a daring open challenge to fight the spread of outrageous headlines that are completely detached from reality. (As if anyone would write such things, tut-tut.)

The Fake News Challenge (FNC) is organized by Dean Pomerleau, an entrepreneur and adjunct professor at Carnegie Mellon University, and Delip Rao, an employee at Joostware. The aim is to explore how AI – particularly machine learning and natural language processing – might be used to combat the negative effects of false information.

The problem of fake news has been bubbling away for some time, but reached a climax as Donald Trump was sworn in as the 45th President of the United States. People were quick to blame dodgy websites for pushing lies – such as the Pope backing Donald – that potentially skewed the election results in the telly celebrity's favor.

As machine learning advances, the scope of problems it’s being applied to has expanded. Classification algorithms are particularly useful in computer vision and healthcare, helping doctors diagnose diseases.

But curing fake news is not as simple as telling apart cancerous moles from noncancerous ones. There is no AI system powerful enough to spit out the words “FAKE NEWS” with a red flashing light as an output. The FNC admits that truth labelling is “virtually impossible” with existing AI and natural language processing knowledge for the following reasons:

Truth labeling also poses several large technical and logistical challenges for a contest like the FNC:

  • There exists very little labeled training data of fake vs real news stories.
  • The data that does exist (eg, fact checker website archives) is almost all copyright protected.
  • The data that does exist is extremely diverse and unstructured, making it hard to train on.
  • Any dataset containing claims with associated “truth” labels is going to be contested as biased.
Instead, the goal is to help human fact checkers with “stance detection.” The headline and contents of a story are pitted against each other.

Claims made in the headlines are tested relative to the stance of the contents. The output will be split into four categories:

  • Agrees: The body text agrees with the headline.
  • Disagrees: The body text disagrees with the headline.
  • Discusses: The body text discusses the same topic as the headline, but does not take a position.
  • Unrelated: The body text discusses a different topic than the headline.
The idea is that human fact checkers could then quickly scan through the article looking for arguments for and against the claim to judge the article’s accuracy.
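A crude baseline for this stance-detection setup is easy to sketch: combine headline and body text into one feature vector and train a multi-class classifier on the four labels above. The examples below are invented and far too small to learn anything real; they only show the shape of the task.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy examples in FNC style: (headline, body) -> stance label.
train = [
    ("Denmark stops issuing travel visas to US citizens",
     "Danish officials confirmed today that visa processing has been halted.", "agrees"),
    ("Denmark stops issuing travel visas to US citizens",
     "The Danish foreign ministry denied any change to its visa policy.", "disagrees"),
    ("British prime minister resigns in disgrace",
     "Commentators debated whether the prime minister should step down.", "discusses"),
    ("British prime minister resigns in disgrace",
     "The new stadium opened to record crowds over the weekend.", "unrelated"),
]

def combine(headline: str, body: str) -> str:
    # Crude trick: prefix headline tokens so the model can tell the two apart.
    return " ".join("H_" + w for w in headline.lower().split()) + " " + body.lower()

X = [combine(h, b) for h, b, _ in train]
y = [label for _, _, label in train]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000)).fit(X, y)

print(model.predict([combine(
    "Denmark stops issuing travel visas to US citizens",
    "Visa processing for Americans has been halted, officials said.")]))
```

Competition entries combine far richer features and larger models, but the input-output contract, headline plus body in and one of four stances out, is exactly this.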

Registration for the competition closes in May and 72 teams have signed up so far. They must not deviate from the training dataset, as using extra data jeopardizes the chances of judging the system’s performance fairly. Winners will be announced in June. The financial details of the prize are still to be determined.

Building a stance detection system may lead to a false news labelling system that takes into account the credibility of news organizations, the FNC said.

“For example, if several high-credibility news outlets run stories that disagree with a claim (eg, “Denmark Stops Issuing Travel Visas to US Citizens”), the claim would be provisionally labeled as False. Alternatively, if a highly newsworthy claim (eg, “British Prime Minister Resigns in Disgrace”) only appears in one very low-credibility news outlet, without any mention by high-credibility sources despite its newsworthiness, the claim would be provisionally labeled as False by such a truth labeling system.

“In this way, the various stances (or lack of a stance) news organizations take on a claim, as determined by an automatic stance detection system, could be combined to tentatively label the claim as True or False. While crude, this type of fully-automated approach to truth labeling could serve as a starting point for human fact checkers, eg, to prioritize which claims are worth further investigation.”

The FNC isn’t the only project that hopes to use machine intelligence to combat fake news. Google awarded a total of €150,000 to two British fact checking companies – Full Fact and Factmata – and The Ferret, a Scottish investigative journalism site.
http://www.theregister.co.uk/2017/02/03/ai_challenge_to_help_fight_fake_news/
 
Last summer the Pentagon staged a contest in Las Vegas in which high-powered computers spent 12 hours trying to hack one another in pursuit of a $2 million purse. Now Mayhem, the software that won, is beginning to put its hacking skills to work in the real world... Teams entered software that had to patch and protect a collection of server software, while also identifying and exploiting vulnerabilities in the programs under the stewardship of its competitors... ForAllSecure, cofounded by Carnegie Mellon professor David Brumley and two of his PhD students, has started adapting Mayhem to be able to automatically find and patch flaws in certain kinds of commercial software, including that of Internet devices such as routers.

Tests are underway with undisclosed partners, including an Internet device manufacturer, to see if Mayhem can help companies identify and fix vulnerabilities in their products more quickly and comprehensively. The focus is on addressing the challenge of companies needing to devote considerable resources to supporting years of past products with security updates... Last year, Brumley published results from feeding almost 2,000 router firmware images through some of the techniques that powered Mayhem. Over 40%, representing 89 different products, had at least one vulnerability. The software found 14 previously undiscovered vulnerabilities affecting 69 different software builds. ForAllSecure is also working with the Department of Defense on ideas for how to put Mayhem to real world use finding and fixing vulnerabilities.
https://it.slashdot.org/story/17/02/05/1833235/can-the-mayhem-ai-automate-bug-patching
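Mayhem itself leans on symbolic execution and other heavyweight machinery, but the simplest relative of the same idea is a random fuzzer: hammer a parsing routine with malformed inputs and record every input that crashes it. A toy illustration against a deliberately buggy function (everything here is invented for the example):

```python
import random

def buggy_parse(packet: bytes) -> int:
    """A deliberately flawed 'firmware' routine: blindly trusts a length byte."""
    length = packet[0]
    return sum(packet[1:1 + length]) // length   # crashes when the length byte is 0

def random_packet(max_len: int = 16) -> bytes:
    return bytes(random.randrange(256) for _ in range(random.randint(1, max_len)))

crashes = []
for _ in range(10_000):
    packet = random_packet()
    try:
        buggy_parse(packet)
    except Exception as exc:                     # a crash = a candidate bug
        crashes.append((packet, repr(exc)))

print(f"{len(crashes)} crashing inputs found; first: {crashes[0] if crashes else None}")
```

Tools in Mayhem's class go further by reasoning about which input bytes steer which branches, and then generating a patch, but "find inputs that break the program" is the shared starting point.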
 
Three videos that explain robotics, automation and artificial intelligence


Science fiction is a great inspiration for science. How can we build reconfigurable robots like Transformers or Terminator 2? How can we build Star Trek-style replicators that duplicate or mass-produce a given shape at the nano scale? How can we orchestrate the motion of a large swarm of robots? Recently we’ve been exploring possible answers to these questions through computational geometry, in the settings of reconfigurable robots (both modular and folding robots that can become any possible shape), robot swarms (which may be so small and simple that they have no identity), and self-assembly (building computers and replicators out of DNA tiles).


This talk will be a fast, firmly-shepherded tour of work on vision, robotics, algebra and HCI. It will cover several decades of work, and draw connections between earlier work and the state-of-the-art now. Many of these problems were challenges in the definition phase - figuring out the real problem to be solved, and then finding the right methods to solve it. Several of them required digging deeply into other disciplines to discover actionable principles. In all cases they involved trying new things from the great self-service counter of ideas. Some highlights include mapping high-dimensional sets, moving sets of objects by shaking, flying robots, 3D video and language-learning games. I’ll close with recent work on scalable machine learning and deep learning. Much of the work to date in ML and DL is framed as an optimization problem. In ongoing work, we are exploring an alternative framing as Monte-Carlo simulation.


The Minkowski sum of two sets P and Q in Euclidean space is the result of adding every point in P to every point in Q. Minkowski sums constitute a fundamental tool in geometric computing, used in a large variety of domains including motion planning, solid modeling, assembly planning, 3d printing and many more. At the same time they are an inexhaustible source of intriguing mathematical and computational problems. We survey results on the structure, complexity, algorithms, and implementation of Minkowski sums in two and three dimensions. We also describe how Minkowski sums are used to solve problems in an array of applications, and primarily in robotics and automation.
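As a small illustration of the definition in that abstract, here is a sketch that computes the Minkowski sum of two convex polygons the brute-force way: add every vertex of P to every vertex of Q and take the convex hull of the results (valid for convex inputs only).

```python
import numpy as np
from scipy.spatial import ConvexHull

def minkowski_sum(P: np.ndarray, Q: np.ndarray) -> np.ndarray:
    """Minkowski sum of two convex polygons, straight from the definition:
    add every vertex of P to every vertex of Q, then take the convex hull."""
    sums = (P[:, None, :] + Q[None, :, :]).reshape(-1, 2)
    hull = ConvexHull(sums)
    return sums[hull.vertices]        # hull vertices in counter-clockwise order

# A unit square grown by a small triangle (e.g. an obstacle inflated by a robot's shape,
# the standard construction in motion planning).
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
triangle = np.array([[0, 0], [0.2, 0], [0, 0.2]], dtype=float)
print(minkowski_sum(square, triangle))
```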
 
Getting more out of pixels:
"First, take a look at the image on the right. The left column contains the pixelated 8×8 source images, and the centre column shows the images that Google Brain's software was able to create from those source images. For comparison, the real images are shown in the right column. As you can see, the software seemingly extracts an amazing amount of detail from just 64 source pixels."
[Image: 8×8 pixelated source images (left), Google Brain reconstructions (centre), ground-truth originals (right)]
https://arstechnica.co.uk/information-technology/2017/02/google-brain-super-resolution-zoom-enhance/
 