Towards Skynet - Artificial Intelligence in Warfare


From Micki's mic:

In January, news from Finland spread around the world. On Porkkalankatu in Helsinki, it had been demonstrated for the first time how computers can be creative in a new way.

The Porkkalankatu marvel was built by having the machines play against themselves. The raw material was a freely downloadable database of 30,000 public figures. The unsupervised learning was made possible by Generative Adversarial Network (GAN) technology, which comprises two separate neural networks. One network tried to produce as truthful-looking a photograph as possible, while the other tried to expose the forgery. The computer learned to make images by playing a game of cat and mouse with itself. The same approach could have been used to generate animals, plants, vehicles, furniture, or entire environments.
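The adversarial loop described above can be sketched in a few lines. Below is a deliberately tiny, illustrative version in plain NumPy: a one-dimensional "dataset" stands in for the celebrity photos, and both networks are reduced to a pair of parameters each, so none of this is the Porkkalankatu system's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the generator must imitate: samples from N(4, 1).
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator: g(z) = a*z + b, maps noise to fake samples.
# Discriminator: D(x) = sigmoid(w*x + c), outputs P(x is real).
a, b = 1.0, 0.0
w, c = 0.1, 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, batch = 0.05, 64
for step in range(3000):
    # Discriminator step: increase log D(real) + log(1 - D(fake)).
    z = rng.normal(0.0, 1.0, batch)
    fake, real = a * z + b, real_batch(batch)
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))
    # Generator step: increase log D(fake), i.e. fool the discriminator.
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    g_common = (1 - d_fake) * w          # gradient of log D(fake) w.r.t. fake
    a += lr * np.mean(g_common * z)
    b += lr * np.mean(g_common)

gen_mean = float(np.mean(a * rng.normal(0.0, 1.0, 1000) + b))
print(round(gen_mean, 2))  # drifts toward the real-data mean of 4
```

Even in this toy form the dynamic is the same: the discriminator's gradient tells the generator which way to move to look more "real", and the two improve by playing against each other.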

"The hardware is ungodly expensive, and it has to be upgraded every couple of years. That's why most universities can no longer keep up, let alone state actors."

Only the corporate giants, such as Facebook, Amazon, Google, Microsoft, and GE, really have enough horsepower.

"You need deep expertise, plenty of proprietary or public data, and computing power by the bucketload. Then marvellous applications start to emerge at an incredible pace."

Demand for workers in the field is so intense that universities are drained dry immediately.

"Imagination is now the only limit, no longer technology," Honkavaara says.

"Self-driving cars, ships, buses, and aircraft. Mobility, food, and housing services. Automation of logistics chains. Medicine, oil exploration, weather forecasting," he lists.

Nvidia's Finnish operation got its start 12 years ago, when the American company acquired Hybrid Graphics, led by Honkavaara, along with its 3D experts.


Greatest Leader
Google’s artificial intelligence technologies are being used by the US military for one of its drone projects, causing controversy both inside and outside the company.

Google’s TensorFlow AI systems are being used by the US Department of Defense’s (DoD) Project Maven, which was established in July last year to use machine learning and artificial intelligence to analyse the vast amount of footage shot by US drones. The initial intention is to have AI analyse the video, detect objects of interest and flag them for a human analyst to review.

Drew Cukor, chief of the DoD’s Algorithmic Warfare Cross-Function Team, said in July: “People and computers will work symbiotically to increase the ability of weapon systems to detect objects. Eventually we hope that one analyst will be able to do twice as much work, potentially three times as much, as they’re doing now. That’s our goal.”

Project Maven forms part of the $7.4bn spent on AI and data processing by the DoD, and has seen the Pentagon partner with various academics and experts in the field of AI and data processing. It has reportedly already been put into use against Islamic State.

A Google spokesperson said: “This specific project is a pilot with the Department of Defense, to provide open source TensorFlow APIs that can assist in object recognition on unclassified data. The technology flags images for human review, and is for non-offensive uses only.”

While Google has long worked with government agencies providing technology and services, alongside cloud providers such as Amazon and Microsoft, the move to aid Project Maven has reportedly caused much internal debate at the search company. According to people talking to Gizmodo, some Google employees were outraged when they discovered the use of the company’s AI.

“Military use of machine learning naturally raises valid concerns. We’re actively discussing this important topic internally and with others as we continue to develop policies and safeguards around the development and use of our machine learning technologies,” said Google.

Both former Alphabet executive chairman, Eric Schmidt, and Google executive Milo Medin are members of the Defense Innovation Board, which advises the Pentagon on cloud and data systems.

Google has a mixed history with defence contracts. When it bought robotics firm Schaft, it pulled the company's systems from a Pentagon competition, and it cut defence-related contracts on buying the satellite startup Skybox. When it owned robotics firm Boston Dynamics, the company was attempting to make a robotic packhorse for ground troops, which was ultimately rejected by the US Marines because it was too noisy.

The company's cloud-services division currently does not offer systems designed to hold information classified as secret, whereas its competitors Amazon and Microsoft do.

When Google bought the UK’s artificial intelligence firm DeepMind in 2014 for £400m, the company set up an AI ethics board, which was tasked with reviewing the company’s use of AI, although details of the board were still not made public three years later.


Greatest Leader
An emergent type of conflict in recent years has been coined "gray zone," because it sits in a nebulous area between peace and conventional warfare. Gray-zone action is not openly declared or defined; it is slower, and it is prosecuted more subtly, using social, psychological, religious, information, cyber, and other means to achieve physical or cognitive objectives with or without violence. The lack of clarity of intent, the grayness, makes it challenging to detect, characterize, and counter an enemy fighting this way.

To better understand and respond to an adversary's gray-zone engagement, DARPA's Strategic Technology Office has announced a new program called COMPASS, which stands for Collection and Monitoring via Planning for Active Situational Scenarios. The program aims to develop software that would help clarify enemy intent by gauging an adversary's responses to various stimuli.

COMPASS will leverage advanced artificial intelligence technologies, game theory, and modeling and estimation both to identify stimuli that yield the most information about an adversary's intentions and to provide decision makers with high-fidelity intelligence on how to respond, including the positive and negative tradeoffs of each course of action.

"The ultimate goal of the program is to provide theater-level operations and planning staffs with robust analytics and decision-support tools that reduce ambiguity of adversarial actors and their objectives," said Fotis Barlos, DARPA program manager.

"As we see increasingly more sophistication in gray-zone activity around the world, we need to leverage advanced AI and other technologies to help commanders make more effective decisions to thwart an enemy's complex, multi-layered disruptive activity."

Current military decision-making follows a well-understood and effective OODA loop (Observe, Orient, Decide, Act). This is how planning is done in various geographic areas around the world, and it works for traditional kinetic scenarios, Barlos said.

This process, however, is not effective in gray zone warfare. Signals in the environment are typically not rich enough to draw any conclusions, and, just as often, adversaries could implant these signals to induce ambiguity. COMPASS aims to add a dynamic, adaptive element to the OODA loop for complex, gray-zone environments.

The COMPASS program will leverage game theory for developing simulations to test and understand various potential actions and possible reactions by an adversary employing gray-zone activity. Barlos quickly noted, however, that the program is not about developing new sensory technologies, virtual reality systems or other advanced hardware.

The program focuses instead on advanced software that would quickly present options to decision makers by assimilating a large amount of intelligence, collected using existing state-of-the-art systems (such as standard video-exploitation or textual-analysis tools), related to rapidly changing scenarios.

"We're looking at the problem from two perspectives: trying to determine what the adversary is trying to do, his intent; and once we understand that, or have a better understanding of it, then identifying how he's going to carry out his plans: what the timing will be, and what actors will be used," Barlos said. "The first is the what, and the second is the where, when, and how.

"But in order to decide which of those actions is important you need to analyze the data, and you need to understand what different implications are and build a model of what you think the adversary will do," he said.

"That's where game theory comes in. If I do this, what will the adversary do? If I do that, what might he do? So it is using artificial intelligence in a repeated game theory process to try to decide what the most effective action is based on what the adversary cares about."
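The repeated "if I do this, what will the adversary do" reasoning Barlos describes can be illustrated with a minimal best-response computation over a payoff matrix. The matrices below are invented purely for illustration; COMPASS's actual models are not public:

```python
import numpy as np

# Invented 2x2 payoff matrices: rows = our candidate stimuli,
# columns = adversary responses.
# ours[i, j]   = our payoff if we take stimulus i and the adversary responds j.
# theirs[i, j] = the adversary's payoff in the same case.
ours = np.array([[ 2.0, -1.0],
                 [ 0.0,  1.0]])
theirs = np.array([[-2.0,  1.0],
                   [ 0.0, -1.0]])

def best_response(payoff_for_them, our_action):
    """Assume a rational adversary: he picks the column best for him."""
    return int(np.argmax(payoff_for_them[our_action]))

# Evaluate each of our candidate actions against the predicted response,
# then pick the action whose *anticipated* outcome is best for us.
anticipated = [float(ours[i, best_response(theirs, i)]) for i in range(2)]
choice = int(np.argmax(anticipated))
print(choice, anticipated)  # → 1 [-1.0, 0.0]
```

The point of the toy: the naively attractive action 0 (payoff 2.0 if the adversary cooperates) is rejected once the adversary's own incentives are modeled, which is exactly the kind of lookahead game theory adds.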


Greatest Leader
Military experts say artificial intelligence could change the strategies, planning and organization of Russia's military. They expect AI to help automate the analysis of satellite imagery and radar data by quickly identifying targets and picking out unusual behavior by enemy ground or airborne forces. AI would also allow the Russian military to obtain a so-called "library of goals," which will help weapons with recognition and guidance.

But the biggest and fastest breakthroughs based on machine learning can be expected in the realm of electronic warfare. Last year saw the deployment of Russian EW units to Syria, eastern Ukraine, and Crimea, where they are amassing data about the performance and electronic signals and signatures of American and other western assets in the region: aircraft and airborne sensors, naval vessels, missiles, etc. This data will be fed to machine-learning systems and used to improve Russian EW.

The Russian military openly admits that AI is already used by certain weapons, yet questions about whether and how to use it are still being debated by policymakers, experts, and designers. Recent discussion has focused on the potential fallibility and unpredictability of AI-driven combat machines and the possibility that they might be disabled or even turned against Russian forces by adversary hackers. For now, the official Russian position highlights the “inadmissibility of loss of meaningful human control,” placing it in line with the rest of the international community’s emerging viewpoint, and the debate seems to be moving toward a requirement to maintain a human in the loop on all lethal decisions.

What kind of military uses for AI is the Russian establishment openly discussing? The future MiG-41 combat aircraft “will be provided with elements of artificial intelligence that will help the pilot to control the aircraft at speeds four to six times higher than the speed of sound,” RT reports. The Su-35 jet reportedly already has AI that can match available weapons to potential targets. And starting this year, the Russian military will acquire the “Bylina” electronic warfare system, touted as capable of “independent analysis” and “choosing ways to suppress enemy electronic signals.” Russian military experts have said the Bylina is “close to being an actual artificial intelligence system.”

Russian designers and military officials are also working on aerial drones that adjust to emerging battlefield conditions and coordinate among themselves when deployed in swarms, though progress appears to trail similar efforts in the United States and China.

Recently, the Russian “Galtel” unmanned underwater vehicle was successfully used to hunt for unexploded ordnance in the port area of Tartus, Syria. It was reportedly equipped with “artificial intelligence,” allowing the UUV to problem-solve on the go.

Certain AI-driven successes are also visible in Russia’s burgeoning unmanned ground military vehicle sector. For example, the Foundation for Advanced Studies is using the “Nerekhta” unmanned ground vehicle as an AI test bed, learning to cooperate with other ground and aerial unmanned systems.

The Tactical Missiles Corporation has said Russia will roll out artificial-intelligence-powered missiles in a few years. A former Russian Air Force chief, Gen. Viktor Bondarev, has said that Russian combat aircraft would get AI-infused cruise missiles that will analyze the air and radar situation and adjust their altitude, speed, and direction. The famed arms manufacturer Kalashnikov recently announced that it will launch a range of "autonomous combat drones" that will use artificial intelligence to identify targets and make decisions "on their own."

Based on the available evidence, Western militaries need not be immediately alarmed about the arrival of AI-infused Russian weapons with next-generation capabilities — except, perhaps, in the field of EW. Western and Chinese efforts are currently well ahead of Russian initiatives, in terms of funding, infrastructure, and practical results. But the Russian government is clearly aiming to marshal its existing academic and industrial resources for AI breakthroughs — and just might achieve them.


This goes a bit off the topic of warfare, but could someone explain in layman's terms how something like TensorFlow or a similar AI with learning neural networks actually works?

At work I found myself marvelling at how, in a country considerably larger than Finland, a certain museum wants to use biometric identifiers, AI, and augmented-reality mobile apps to offer individualized services to the millions of tourists arriving in the old town and to steer the flow of visitors.

Edit: I'm also trying to at least loosely follow developments in the field; internationally, the related discussion goes by the term digital humanities. And this does relate to warfare, if you think about it. If an AI were given, say, all the raw data from the Finnish Defence Forces' simulator wargames, what could it learn from it, and how?


Greatest Leader
If an AI were given, say, all the raw data from the Finnish Defence Forces' simulator wargames, what could it learn from it, and how?
First of all, TensorFlow is a library that contains machine-learning algorithms. If, hypothetically, an AI program were given that data with the aim of finding a better model than humans have managed in this scenario, it would have to start from the ground up.

The AI doesn't really consume the raw data as such; rather, it operates in a program environment where everything from a rifleman to a tank has symbolic values, along with the things it can do with them. After that, the AI starts simulating data: it moves units against each other on a virtual map, and the run might go like this:

  • 0 hours = 0% of the objective (no win)
  • 10 hours = 5% of the objective (no win)
  • ...
  • 100 hours = 85% of the objective (a win, but not a good result)
  • 110 hours = 90% of the objective (a win, a fair result)
In practice, this means running the same situation over and over until the AI produces the desired result. Along the way, though, the AI can surface things a human would never have thought of, because our brains prune options on the fly and the mind then settles on whatever it considers best.
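That run-the-same-scenario-until-it-wins loop is essentially reinforcement learning. Here is a minimal sketch of the idea with a one-dimensional "map" standing in for a wargame (all numbers invented), using tabular Q-learning:

```python
import numpy as np

# Toy "scenario": a unit starts at cell 0 on a 1-D map of 6 cells and
# must reach the objective at cell 5. Actions: 0 = step left, 1 = step right.
N_CELLS, GOAL = 6, 5
rng = np.random.default_rng(1)
Q = np.zeros((N_CELLS, 2))   # learned value of each action in each cell

def step(s, action):
    s2 = max(0, min(N_CELLS - 1, s + (1 if action == 1 else -1)))
    reward = 1.0 if s2 == GOAL else -0.01   # small cost per move
    return s2, reward, s2 == GOAL

# Run the same scenario over and over, improving the policy each time.
for episode in range(1000):
    s = 0
    for _ in range(100):
        # Mostly follow the current best guess, sometimes explore at random.
        a = int(rng.integers(2)) if rng.random() < 0.2 else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        Q[s, a] += 0.5 * (r + 0.9 * np.max(Q[s2]) - Q[s, a])
        s = s2
        if done:
            break

# Greedy rollout with the learned policy: it should march straight to the goal.
s, path = 0, [0]
while s != GOAL and len(path) < 20:
    s, _, _ = step(s, int(np.argmax(Q[s])))
    path.append(s)
print(path)  # a short path ending at the goal cell
```

The "things a human would never have thought of" come from exactly this kind of blind trial: the program has no preconceptions about which moves are sensible, only which ones scored well over thousands of repetitions.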

At the moment, AI is at its best in closed environments, whereas the real world causes difficulties because there is an overwhelming amount of data on offer. Here a Facebook AI researcher explains the matter

and here another of their researchers explains it in more depth



What in that raw data is essential to the AI? A huge amount of it piles up once you start collecting it. Isn't it a human who feeds and curates the raw data for the AI? And where does the frame problem actually show up?

These things were already discussed back in the day in connection with flow algorithms, and AI entered the picture when Academician Kohonen studied self-organizing maps. At the time they apparently couldn't be made to work; now they apparently can, for reasons I don't understand. Then again, I don't understand any of this anyway, I'm just asking.


Greatest Leader
What in that raw data is essential to the AI?
The values. Think for a moment about samples taken from the atmosphere, for example. Each value is a unit of raw data, and in large quantities those measured values form a database that an AI program can then use for learning. Past a certain point, the AI starts predicting how the data will behave, and a human or a machine can put that knowledge to use. In the weather example, an AI application can give users a very fast and fairly accurate forecast for the measurement area.
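A sketch of that weather example with the simplest possible learner: accumulate measured values, fit a trend, and predict the next one. The "temperatures" below are synthetic, and a real forecasting system is of course vastly more complex:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic measurement database: hourly "temperatures" following a trend
# of +0.5 degrees per hour, plus noise, for 48 hours.
hours = np.arange(48, dtype=float)
temps = 10.0 + 0.5 * hours + rng.normal(0.0, 0.3, 48)

# Learn the trend from the raw values: ordinary least squares y = a*x + b.
a, b = np.polyfit(hours, temps, deg=1)

# Predict the next, unmeasured hour.
forecast = float(a * 48 + b)
print(round(forecast, 1))  # close to the true value 10 + 0.5*48 = 34
```

Each measured value on its own says little; it is the accumulated database that lets the program generalize and predict the next value.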


Greatest Leader
US Army researcher believes that wars will be fought with human soldiers commanding a team of ‘physical and cyber robots’ to create a network of “Internet of Battle Things” in the future.

“Internet of Intelligent Battle Things (IOBT) is the emerging reality of warfare,” says Alexander Kott, chief of the Network Science Division of the US Army Research Laboratory, as AI and machine learning advance.

He envisions a future where physical robots are able to fly, crawl, walk, or ride into battle. Robots as small as insects could be used as sensors, while ones as big as large vehicles could carry troops and supplies. There will also be “cyber robots”, essentially autonomous programs, used within computers and networks to protect communications, fact-check, relay information, and protect other electronic devices from enemy malware.


Greatest Leader
US army researchers have developed a convolutional neural network and a range of algorithms to recognise faces in the dark.

"This technology enables matching between thermal face images and existing biometric face databases or watch lists that only contain visible face imagery,” explained Benjamin Riggan on Monday, co-author of the study and an electronics engineer at the US army laboratory.

"The technology provides a way for humans to visually compare visible and thermal facial imagery through thermal-to-visible face synthesis."

The thermal images are processed and passed to a convolutional neural network to extract facial features using landmarks that mark the corners of the eyes, nose and lips to determine its overall shape. Next, a non-linear regression model maps these features into a corresponding visible representation.

The system, dubbed “multi-region synthesis”, is trained with a loss function so that the error between the thermal images and the visible ones is minimized, creating an accurate portrayal of what someone’s face looks like despite only glimpsing it in the dark.
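As a rough illustration of that pipeline, the snippet below maps invented "thermal landmark features" to "visible" ones with a simple non-linear regression (random tanh features plus ridge regression). This is only a stand-in for the paper's learned mapping, which is not public, and the data is synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-ins for extracted landmark features: 200 faces, 8 numbers per face.
# The true thermal -> visible mapping is unknown; here we invent a nonlinear one.
thermal = rng.normal(size=(200, 8))
mix = rng.normal(size=(8, 8))
visible = np.tanh(thermal @ mix) + 0.05 * rng.normal(size=(200, 8))

# Non-linear regression: a fixed random-feature expansion, then ridge regression.
W = rng.normal(size=(8, 64))
def expand(X):
    return np.hstack([X, np.tanh(X @ W)])   # raw features + nonlinear features

Phi = expand(thermal[:150])                 # training faces
coef = np.linalg.solve(Phi.T @ Phi + 0.1 * np.eye(72), Phi.T @ visible[:150])

pred = expand(thermal[150:]) @ coef         # held-out thermal faces
err = float(np.mean((pred - visible[150:]) ** 2))
base = float(np.mean(visible[150:] ** 2))   # error of predicting all zeros
print(err < base)
```

The structure mirrors the description above: features are extracted from the thermal image, and a learned regression maps them into the visible domain, where ordinary face matching can take over.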

To be effective, however, the system requires the US army to match the recreated image to a face already known in a database, such as a watch list or criminal records, in order to identify the target.

"When using thermal cameras to capture facial imagery, the main challenge is that the captured thermal image must be matched against a watch list or gallery that only contains conventional visible imagery from known persons of interest," Riggan said.

"Therefore, the problem becomes what is referred to as cross-spectrum, or heterogeneous, face recognition. In this case, facial probe imagery acquired in one modality is matched against a gallery database acquired using a different imaging modality."


Greatest Leader
Researchers at Endgame, a cyber-security firm based in Virginia, have published what they believe is the first large open-source dataset for machine-learning malware detection, known as EMBER.

EMBER contains metadata describing 1.1 million Windows portable executable files: 900,000 training samples evenly split into malicious, benign, and unlabelled categories, and 200,000 test samples labelled as malicious or benign.
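That layout maps onto a completely standard supervised-learning workflow. Below is a hedged sketch using synthetic stand-in feature vectors and a plain logistic-regression detector; EMBER's real features and baseline model (gradient-boosted trees) are different:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for EMBER-style vectorized PE metadata: 2,000 files, 20 features,
# label 1 = malicious, 0 = benign. (Synthetic; the real dataset has 1.1M files.)
n, d = 2000, 20
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w + 0.5 * rng.normal(size=n) > 0).astype(float)

# Labelled training split and held-out test split, as in EMBER's layout.
Xtr, ytr, Xte, yte = X[:1600], y[:1600], X[1600:], y[1600:]

# Logistic-regression malware detector trained by gradient descent.
w = np.zeros(d)
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(Xtr @ w)))
    w -= 0.1 * Xtr.T @ (p - ytr) / len(ytr)

accuracy = float(np.mean(((Xte @ w) > 0) == yte))
print(round(accuracy, 2))  # well above the 0.5 chance level
```

The value of a shared benchmark like EMBER is precisely that everyone runs this same train-on-labelled, evaluate-on-held-out loop on identical data, making reported accuracies comparable.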

“We’re trying to push the dark arts of infosec research into an open light. EMBER will make AI research more transparent and reproducible,” Hyrum Anderson, co-author of the study to be presented at the RSA conference this week in San Francisco, told The Register.

Progress in AI is driven by data. Researchers compete with one another by building models and training them on benchmark datasets to reach ever increasing accuracies.

Computer vision is flooded with datasets containing millions of annotated pictures for image-recognition tasks, and natural language processing has various text-based datasets to test machine reading and comprehension skills. This has helped a great deal in advancing AI image processing.

Although there is a strong interest in using AI for information security - look at DARPA’s Cyber Grand Challenge where academics developed software capable of hunting for security bugs autonomously - it’s an area that doesn’t really have any public datasets.


Greatest Leader
A long article about how China grasped the spirit of this game earlier than others. To us this education question is fairly self-evident, but for them it meant a change in thinking and culture.

Late on the night of October 4, 1957, Communist Party Secretary Nikita Khrushchev was at a reception at the Mariinsky Palace, in Kiev, Ukraine, when an aide called him to the telephone. The Soviet leader was gone a few minutes. When he reappeared at the reception, his son Sergei later recalled, Khrushchev’s face shone with triumph. “I can tell you some very pleasant and important news,” he told the assembled bureaucrats. “A little while ago, an artificial satellite of the Earth was launched.” From its remote Kazakh launchpad, Sputnik 1 had lifted into the night sky, blasting the Soviet Union into a decisive lead in the Cold War space race.

News of the launch spread quickly. In the US, awestruck citizens wandered out into their backyards to catch a glimpse of the mysterious orb soaring high above them in the cosmos. Soon the public mood shifted to anger – then fear. Not since Pearl Harbour had their mighty nation experienced defeat. If the Soviets could win the space race, what might they do next?

Keen to avert a crisis, President Eisenhower downplayed Sputnik’s significance. But, behind the scenes, he leapt into action. By mid-1958 Eisenhower announced the launch of a National Aeronautics and Space Administration (better known today as Nasa), along with a National Defense Education Act to improve science and technology education in US schools. Eisenhower recognised that the battle for the future no longer depended on territorial dominance. Instead, victory would be achieved by pushing at the frontiers of the human mind.

Sixty years later, Chinese President Xi Jinping experienced his own Sputnik moment. This time it wasn’t caused by a rocket lifting off into the stratosphere, but by a game of Go, won by an AI. For Xi, the defeat of the Korean player Lee Sedol by DeepMind’s AlphaGo made it clear that artificial intelligence would define the 21st century as the space race had defined the 20th.

The event carried an extra symbolism for the Chinese leader. Go, an ancient Chinese game, had been mastered by an AI belonging to an Anglo-American company. As a recent Oxford University report confirmed, despite China’s many technological advances, in this new cyberspace race, the West had the lead.

Xi knew he had to act. Within twelve months he revealed his plan to make China a science and technology superpower. By 2030 the country would lead the world in AI, with a sector worth $150 billion. How? By teaching a generation of young Chinese to be the best computer scientists in the world.


Greatest Leader
The Pentagon’s research chief is deep in discussions about the newly announced Joint Artificial Intelligence Center, or JAIC, a subject of intense speculation and intrigue since Defense Undersecretary for Research and Engineering Michael Griffin announced it last week. Griffin has been sparse in his public comments on what the center will do. But its main mission will be to listen to service requests, gather the necessary talent, and deliver AI-infused solutions, according to two observers with direct knowledge of the discussions. Little else about the center has been decided, they say.

“We are looking right now, as we speak, at things like how we structure it, who should lead it, where it should be, how we should structure our other research. These are ongoing questions we are addressing this week,” Griffin said at a House Armed Services Committee hearing.

To prove its worth to service leaders, the center’s first projects will follow the model of the Air Force’s AI-powered Project Maven. Services will bring problems that might be eased by AI — say, reducing human workload in classifying objects discovered in intelligence, surveillance, and reconnaissance data — and the center will marshal computing resources, contractors, and academics toward a solution.


Greatest Leader
Human spies will soon be relics of the past, and the CIA knows it. Dawn Meyerriecks, the Agency’s deputy director for technology development, recently told an audience at an intelligence conference in Florida the CIA was adapting to a new landscape where its primary adversary is a machine, not a foreign agent.

Meyerriecks, speaking to CNN after the conference, said other countries have relied on AI to track enemy agents for years. She went on to explain the difficulties encountered by current CIA spies trying to live under an assumed identity in the era of digital tracking and social media, indicating the modern world is becoming an inhospitable environment to human spies.

But the CIA isn’t about to give up. America’s oldest spy agency is transforming from the kind of outfit that sends people around the globe to gather information, to the type that uses computers to accomplish the same task more efficiently.

This transition from humans to computers is something the CIA has spent more than 30 years preparing for.

Government documents from 1984 describe an “AI Steering Group,” founded the previous year. The group was responsible for providing CIA bosses with monthly reports concerning the state of artificial intelligence research and development.

I write about this same subject in my necromorphosis trilogy. AI is the best tool for intelligence agencies, because it never cares about what it is spying on. It works around the year, monitoring, analysing, and dissecting things without ever asking for a break or a raise.


Greatest Leader
AI could kick start a nuclear war by 2040, according to a dossier published this month by the RAND Corporation, a US policy and defence think tank.

The technically light report describes several scenarios in which machine-learning technology tracks and sets the targets of nuclear weapons. This would involve AI gathering and presenting intelligence to military and government leaders, who make the decisions to launch weapons of mass destruction.

But there is danger in developing and deploying intelligent software that has its finger halfway on the red button. Other nations may interpret this as an escalation, ultimately leading one of them to launch a preemptive strike, or to build a “doomsday machine”, before it is destroyed first.

When computers are programmed to competently recognize threats and recommend retaliations, even as a deterrent force, the mere presence of this technology could cause the world to spiral into catastrophe – as no nuclear state wants to be annihilated first by a robot and thus must be first to launch.

In a way, a technological leap in AI could end the balance, or parity, essential to mutually assured destruction, leading to all-out thermonuclear war. Fearing a machine is about to go crazy and order or recommend a devastating strike, a nation could jump the gun.


Off to a slick start.

Linda Liukas's children's book series Hello Ruby is spreading to new countries again, backed by a $100,000 grant.

Linda Liukas has received a significant grant from the organization behind the Expo 2020 Dubai world's fair. The Expo Live grant is $100,000.

The grant is meant to help bring Liukas's Hello Ruby to new continents. Liukas's Hello Ruby children's books and website introduce the worlds of artificial intelligence, coding, and computing through play and creativity.

"Programming is the language of the 21st century, and it matters more and more in the world. That is why projects like Hello Ruby are important. We are delighted to help it expand and reach ever more children," Expo Live Vice President Yousuf Caires says in a press release.

With its Expo Live programme, the organizer of the Expo 2020 Dubai world's fair invests in global social impact. Grants are awarded to projects that develop creative solutions to challenges faced by people and societies. A single grant can be at most the $100,000 that Liukas received.

"The grant gives Hello Ruby the opportunity to expand into new markets such as the Middle East, Africa, and South Asia. It lets us create dynamic, entertaining, and pedagogical online content that inspires teachers and builds their practical skills," Liukas says.

Liukas's books, apps, and website have already been translated into 25 languages. Liukas plans to use the grant on a series of ten videos to be translated into different languages, on creating new learning materials, and on exploring new ways of distributing content.

Hello Ruby is aimed at children aged 4–10. Teachers can use it to design computing and technology education in a creative way.


Greatest Leader
A team of computer scientists has built the smallest completely autonomous nano-drone, one that can control itself without the need for human guidance.

Although computer vision has improved rapidly thanks to machine learning and AI, it remains difficult to deploy algorithms on devices like drones due to memory, bandwidth and power constraints.

But researchers from ETH Zurich in Switzerland and the University of Bologna in Italy have managed to build a hand-sized drone that can fly autonomously while consuming only about 94 milliwatts (0.094 W) of power. Their efforts were published in a paper on arXiv earlier this month.


Greatest Leader
Computer boffins have devised a potential hardware-based Trojan attack on neural network models that could be used to alter system output without detection.

Adversarial attacks on neural networks and related deep learning systems have received considerable attention in recent years due to the growing use of AI-oriented systems.

The researchers – doctoral student Joseph Clements and assistant professor of electrical and computer engineering Yingjie Lao at Clemson University in the US – say that they've come up with a novel threat model by which an attacker could maliciously modify hardware in the supply chain to interfere with the output of machine learning models run on the device.

Attacks that focus on the supply chain appear to be fairly uncommon. In 2014, reports surfaced that the US National Security Agency participates in supply chain interdiction to intercept hardware in transit and insert backdoors. More recently, Microsoft last year recounted a supply chain attack on software tools.

While there may be easier attack vectors, Clements and Lao argue that integrated circuit design touches so many different companies – circuit designers, circuit fabricators, third-party IP licensing firms and electronic design automation tool vendors – that supply chain security has become difficult to manage.