Towards Skynet - Artificial Intelligence in Warfare

A panel of AI experts was grilled on the impact and importance of artificial general intelligence by the US House of Representatives on Tuesday.

The hearing was ominously named “Artificial Intelligence – With Great Power Comes Great Responsibility.” Narrow AI for specific tasks has been advancing rapidly, and the committee wanted to know how far off artificial general intelligence (AGI), in which a system can learn multiple tasks and do them better than humans, would be.
http://www.theregister.co.uk/2018/06/26/congres_ai_hearing/

They were also quizzed on potential ‘doomsday’ scenarios. OpenAI co-founder Greg Brockman compared thinking about AGI to thinking about the internet in the late fifties.

“If someone was to describe to you what the internet was going to be, how it’d affect the world, and the fact that all these weird things were going to start happening...You’d be very confused. It’d be very hard to understand what these things will look like...

“Now imagine that that whole story - which played out over the course of the past 60, almost 70 years now - was going to play out over a much more compressed time scale. And so that’s the perspective that I have towards AGI."

"It’s the fact that it can cause this rapid change and it's already hard for us to cope with what technology brings. So is it going to be malicious actors, or if the technology wasn’t built in a safe way, or the deployment and values it’s given is something we’re not happy with. All of those I think are real risks, and those are things we want to start thinking about today," he concluded.

Tim Persons agreed, and placed a big emphasis on the need to evaluate the risks: “I think the key thing is being clear-eyed about what the risks actually are, and not necessarily being driven by the entertaining yet science fiction-type narratives on these things - or projecting or going to extremes, assuming far more than where we actually are in the technology.”
 
AIQ projects.

The US Department of Defense has set up a dedicated artificial intelligence center. The Joint Artificial Intelligence Center (JAIC) will oversee nearly all AI-related projects across the defense establishment.

According to Breaking Defense, the benefits are expected, at least in the first phase, to come specifically from more effective coordination. China's massive investments in particular have raised concerns that the United States could fall behind in the competition between great powers.

– China currently poses the biggest challenge to the United States. They approach AI the way we approached the Moon program in the 1960s, says Larry Lewis, an analyst at the CNA research institute.

Russia has also been adding resources in the field. The country is reported to be building a "research city" on the Black Sea coast that is meant to focus on AI development work. At least 2,000 engineers and researchers are to be stationed there by 2020.

According to Colonel Harri Suni, deputy department head of the Finnish Defence Forces, the establishment of the JAIC will also have an effect on the AI race between the great powers.

– A significant decision, Suni writes on Twitter.

https://www.verkkouutiset.fi/merkittava-paatos-usan-armeijalle-tekoalykeskus/
 
https://saab.com/gripen/our-fighters/gripen-fighter-system/gripen-e-series/gripen-e/

"When at the peak of a complex mission, the human brain can only handle a certain number of inputs at once. Gripen E/F achieves the optimal balance between the pilot's and the fighter's decision space, letting fighter intelligence take on a larger role. Gripen E/F’s fighter intelligence has the capability to work autonomously on several areas simultaneously, and provides the pilot with suggestions. Suggestions ranging from anything between weapon selection and full manoeuvring of the fighter. It shares and displays the right tactical information, at the right moment giving an optimised battlespace overview "
 

This Spring 2018 term course is a cross-disciplinary investigation of the implications of emerging technologies, with an emphasis on the development and deployment of Artificial Intelligence. The course covers a variety of issues, including the complex interaction between governance organizations and sovereign states, the proliferation of algorithmic decision making, autonomous systems, machine learning and explanation, the search for balance between regulation and innovation, and the effects of AI on the dissemination of information, along with questions related to individual rights, discrimination, and architectures of control.
https://www.media.mit.edu/courses/the-ethics-and-governance-of-artificial-intelligence/
 
The Pentagon’s research chief is deep in discussions about the newly announced Joint Artificial Intelligence Center, or JAIC, a subject of intense speculation and intrigue since Defense Undersecretary for Research and Engineering Michael Griffin announced it last week. Griffin has been sparse in his public comments on what the center will do. But its main mission will be to listen to service requests, gather the necessary talent, and deliver AI-infused solutions, according to two observers with direct knowledge of the discussions. Little else about the center has been decided, they say. https://www.defenseone.com/technology/2018/04/pentagon-building-ai-product-factory/147594/
 
Well then, it was nice chatting with all of you.

US Air Force to Use New Neuromorphic Supercomputer for AI Research

In a joint effort with IBM, the Air Force Research Lab (AFRL) unveiled the world’s largest neuromorphic supercomputer, Blue Raven, with the processing power of 64 million neurons. By 2019, AFRL expects to demonstrate an airborne target-recognition application developed using Blue Raven.

The processors, which were developed by IBM with the Defense Advanced Research Projects Agency (DARPA), are divided across four individual printed circuit boards with 16 processors each; since each TrueNorth chip emulates one million neurons, the 64 chips together account for the system's 64 million neurons. The boards are configured into a typical server chassis setup and feature high-bandwidth data links.

Jeremy O'Brien, senior computer scientist for the AFRL information directorate, told Avionics he refers to Blue Raven as a supercomputer "because of its ability to simultaneously emulate detailed models of 64 million biological neurons and 16 billion biological synapses, and, most importantly, its ability to produce more meaningful outputs from sensory data inputs."
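To give a feel for what "emulating biological neurons and synapses" means, here is a minimal leaky integrate-and-fire neuron in plain Python. Neuromorphic chips such as TrueNorth implement this kind of spiking model directly in hardware; the parameters below are illustrative only, not TrueNorth's.

# Minimal leaky integrate-and-fire (LIF) neuron: the kind of spiking model
# that neuromorphic hardware implements natively. Parameters are illustrative.
import random

def simulate_lif(input_spikes, weight=0.4, leak=0.95, threshold=1.0):
    """Integrate weighted input spikes into a membrane potential that leaks
    over time; emit an output spike and reset whenever it crosses threshold."""
    potential = 0.0
    output = []
    for spike in input_spikes:
        potential = potential * leak + weight * spike
        if potential >= threshold:
            output.append(1)
            potential = 0.0          # reset after firing
        else:
            output.append(0)
    return output

if __name__ == "__main__":
    inputs = [1 if random.random() < 0.3 else 0 for _ in range(50)]
    outputs = simulate_lif(inputs)
    print("input spikes: ", sum(inputs))
    print("output spikes:", sum(outputs))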

Neuromorphic processors are based on the neuromorphic computing concept first introduced by Carver Mead, a professor of engineering and applied science at the California Institute of Technology. In 1986, Mead was one of two co-founders of Synaptics Inc., a company established to develop analog circuits based on neural networking theories for speech and vision recognition technologies. In 1990, he published his first work on neuromorphic electronic systems in Proceedings of the Institute of Electrical and Electronics Engineers (IEEE).

Engineers and computer scientists at AFRL will use Blue Raven to execute artificial intelligence and machine-learning algorithms. Blue Raven also provides a platform for research and development, testing and evaluation for applications in computational neuroscience for the U.S. Department of Defense and other U.S. government agencies. The Air Force specifically wants to use its computing power to produce advancements in its combat capabilities and is evaluating its computing architecture for integration into onboard aircraft sensors.

Air Force Lt. William Murphy spent six months with IBM’s research team, serving as a liaison for the system's end-to-end IBM TrueNorth ecosystem.

A key difference between Blue Raven and existing supercomputers is what the design team achieved in size, weight and power. Blue Raven consumes only 40 watts, the equivalent of a household light bulb, according to AFRL. It also uses up to 100 times less energy when executing AI and machine-learning algorithms, said O’Brien.

Blue Raven continues a growing trend among commercial and military aviation technology researchers focusing on future deployment of artificial intelligence and machine-learning applications for airplanes. Aitech Defense Systems, a major supplier of embedded computing to aerospace and defense OEMs, has its own general-purpose graphics processing unit (GPGPU)-based A176 Cyclone supercomputer that uses NVIDIA’s Jetson TX2’s machine-learning capability to capture images in real time for unmanned aircraft systems. In the future, Aitech believes wide-angle and narrow-angle cameras can be equipped with artificial intelligence to detect moving objects.

Elsewhere, Boeing announced a collaboration with artificial intelligence technology provider SparkCognition to deploy future AI-based unmanned traffic management (UTM) applications. The two companies want to use artificial intelligence and blockchain technologies to track unmanned aerial vehicles in flight and allocate traffic corridors for autonomous vehicles.

Blue Raven's processing power could enable future airborne military applications for AI and machine learning.

“Without new, incredibly advanced computing architectures like Blue Raven, real-time employment of AI, machine learning and autonomy capabilities in constrained and contested environments would be extremely challenging,” said O’Brien.

https://www.aviationtoday.com/2018/08/03/air-force-use-new-neuromorphic-supercomputer-ai-research/
 

In a nutshell, DeepLocker is a highly evasive piece of kit (it could lie dormant in an application and never surface or be detected unless its target presents itself in front of a webcam) that provides an attacker with the ability to precisely attack specific targets via facial and voice recognition, geolocation, and pretty much any defined trigger that can be learned by an AI. What does this mean? It means that cyber attacks can become prolific, undetectable, and extremely surgical.

"To demonstrate the implications of DeepLocker's capabilities, we designed a proof of concept in which we camouflage a well-known ransomware (WannaCry) in a benign video conferencing application so that it remains undetected by malware analysis tools, including antivirus engines and malware sandboxes. As a triggering condition, we trained the AI model to recognize the face of a specific person to unlock the ransomware and execute on the system. Imagine that this video conferencing application is distributed and downloaded by millions of people, which is a plausible scenario nowadays on many public platforms. When launched, the app would surreptitiously feed camera snapshots into the embedded AI model, but otherwise behave normally for all users except the intended target. When the victim sits in front of the computer and uses the application, the camera would feed their face to the app, and the malicious payload will be secretly executed, thanks to the victim's face, which was the preprogrammed key to unlock it.

It's important to understand that DeepLocker describes an entirely new class of malware - "any number of AI models could be plugged in to find the intended victim, and different types of malware could be used as the "payload" that is hidden within the application."

The above was accomplished live, just 20 feet from me. IBM had its target simply walk by the infected laptop and the malware payload was immediately triggered.

Keep in mind that this research is conducted by IBM and is not (that we know of) in the wild. However, IBM estimates that AI driven threats will be making their debut in the public arena very soon.

https://securityintelligence.com/deeplocker-how-ai-can-power-a-stealthy-new-breed-of-malware/
 
The goal: to train one percent of the population in the basics of AI.

The Elements of AI online course, open to everyone and created by the University of Helsinki and Reaktor, was launched in May, and roughly 90,000 participants around the world interested in the basics of AI are currently taking it. About 7,500 people have already passed the course, and the first graduates will be celebrated at the University of Helsinki on 6 September. A Finnish-language version of the course will be released during the autumn, and a follow-up course in spring 2019.
Designed and built by Reaktor and the University of Helsinki, the Elements of AI online course offers everyone the chance to study the basics of AI free of charge. The course digs into the fundamentals of AI through practical examples and offers students, working professionals and anyone else interested in AI an easy and encouraging route into the subject. Completing the two-credit online course requires no prior AI or programming skills, and it is suitable for everyone. The programme can be completed at www.elementsofai.com. At the moment, about 90,000 people from more than 80 countries are taking the course.

https://www.sttinfo.fi/tiedote/ilmainen-ai-verkkokurssi-kouluttanut-jo-tuhansia-tekoalyn-perusteisiin-suomenkielinen-versio-ja-jatkokurssi-tulossa?publisherId=3747&releaseId=69638395
 
So, about fully autonomous "robot systems": they ALWAYS need electricity and electronics to work. Even if there were a combat computer inside, running in fully autonomous mode and properly programmed, wouldn't an EMP pulse next to the thing shut the box down? No particular reason, it just came to mind from the Matrix film I watched recently (for a change), where the final victory was achieved by firing the humans' last EMP missile in the right place at the right time.

As far as I understand, it wouldn't necessarily even be hard to build (though I'm very probably wrong), and DIY videos may not be the best reference.

For example, these describe what you would need to do to shut down some "electronic gadget". To be clear, this probably shouldn't be used to disable the neighbour's BMW,
though I assume it would work for that too.

https://www.electronicproducts.com/...ini_EMP_generator_to_disrupt_electronics.aspx
https://www.wikihow.com/Build-an-EMP-Generator

To the extent that I understand anything about electronics, with those "guidelines" I could be offering the neighbour's BMW driver a lift to work on Monday :) !
 
More so-called pants-filling material. Minority Report fetishists are certainly in for orgasmic times.



Not like this. EMP <3
 
 
I have to admit he is good at making announcements, but the problem lies in understanding and implementing them, since this is breakthrough technology.
Yesterday, U.S. President Donald Trump signed an executive order establishing the American AI Initiative, with the aim of “accelerating our national leadership” in artificial intelligence. The announcement framed it as an effort to win an AI arms race of sorts:

“Americans have profited tremendously from being the early developers and international leaders in AI. However, as the pace of AI innovation increases around the world, we cannot sit idly by and presume that our leadership is guaranteed.”

While extremely light on details, the announcement mentioned five major areas of action:

  1. Having federal agencies increase funding for AI R&D
  2. Making federal data and computing power more available for AI purposes
  3. Setting standards for safe and trustworthy AI
  4. Training an AI workforce
  5. Engaging with international allies—but protecting the tech from foreign adversaries

IEEE Spectrum asked four experts for their take on the announcement. Several saw it as a response to China’s AI policy, which calls for major investment in order to make China the world leader in AI by 2030. (The former head of Google China recently explained to IEEE Spectrum why China has the edge in AI.)
https://spectrum.ieee.org/tech-talk...perts-respond-to-trumps-executive-order-on-ai
 
AI is coming to fighter jets, to the unmanned aircraft that will operate alongside them, and also to the entire network, supporting things like battle management and coordination.

Here, for example, is a link about the Americans' Skyborg program.

Introducing Skyborg, your new AI wingman
 
Step 1: Use AI to make undetectable changes to outdoor photos.
Step 2: release them into the open-source world and enjoy the chaos.

Worries about deep fakes — machine-manipulated videos of celebrities and world leaders purportedly saying or doing things that they really didn’t — are quaint compared to a new threat: doctored images of the Earth itself.

China is the acknowledged leader in using an emerging technique called generative adversarial networks to trick computers into seeing objects in landscapes or in satellite images that aren’t there, says Todd Myers, automation lead and Chief Information Officer in the Office of the Director of Technology at the National Geospatial-Intelligence Agency.

“The Chinese are well ahead of us. This is not classified info,” Myers said Thursday at the second annual Genius Machines summit, hosted by Defense One and Nextgov. “The Chinese have already designed; they’re already doing it right now, using GANs—which are generative adversarial networks—to manipulate scenes and pixels to create things for nefarious reasons.”

For example, Myers said, an adversary might fool your computer-assisted imagery analysts into reporting that a bridge crosses an important river at a given point.
https://www.defenseone.com/technolo...world-and-china-ahead/155944/?oref=d_brief_nl
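For readers unfamiliar with the technique Myers describes, a generative adversarial network is simply two models trained against each other: a generator that fabricates samples and a discriminator that tries to tell them from real data. Below is a minimal toy sketch on 2-D points, nothing like an imagery-scale GAN and purely illustrative.

# Minimal GAN training loop on toy 2-D data, to illustrate the generator vs.
# discriminator idea behind "generative adversarial networks".
import torch
import torch.nn as nn

real_data = lambda n: torch.randn(n, 2) * 0.3 + torch.tensor([2.0, 2.0])  # "real" samples

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # generator: noise -> sample
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator: sample -> logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # train discriminator: real -> 1, fake -> 0
    real = real_data(64)
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # train generator: try to fool the discriminator into outputting 1
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print("generated mean:", G(torch.randn(1000, 8)).mean(dim=0))  # should approach (2, 2)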
 
Criminal AI.

Prisoners have been used as labour since time immemorial, but in Finland prison work is taking on an entirely new form.
In Finland, too, prison labour was once used to build roads, among other things, and prisoners helped with the construction of Helsinki-Vantaa airport, for example. Now a Finnish startup intends to use prison labour to improve artificial intelligence. For now, the Helsinki and Turku prisons are taking part in the trial, with a total of ten workstations reserved for the prisoners selected for the project.

In practice, the prisoners' work consists of reading Finnish-language texts and answering simple questions about them. The same question is repeated many times with many different people. This produces a dataset that is used to train the AI. https://www.mtvuutiset.fi/artikkeli/vangit-kehittavat-suomalaista-tekoalya/7350150#gs.3lgmaf
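The pipeline described here amounts to repeated human annotation followed by aggregation. Below is a minimal sketch of merging several answers to the same question by majority vote; the startup's actual pipeline is not public, so the data layout is an assumption made for illustration.

# Sketch of aggregating repeated human answers into training labels by
# majority vote. The data layout is an assumption, not the actual pipeline.
from collections import Counter

annotations = [
    # (question_id, answer_from_one_annotator)
    ("q1", "yes"), ("q1", "yes"), ("q1", "no"),
    ("q2", "person"), ("q2", "person"), ("q2", "place"),
]

def majority_labels(rows):
    """Group answers by question and keep the most common one, together with
    the fraction of annotators who agreed (a crude quality signal)."""
    by_question = {}
    for qid, answer in rows:
        by_question.setdefault(qid, []).append(answer)
    labels = {}
    for qid, answers in by_question.items():
        answer, count = Counter(answers).most_common(1)[0]
        labels[qid] = (answer, count / len(answers))
    return labels

print(majority_labels(annotations))
# {'q1': ('yes', 0.67), 'q2': ('person', 0.67)}  (approximately)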
 


The Pentagon’s newly minted artificial intelligence center is creating a framework for the military’s cybersecurity data, which will lay the foundation for AI-powered cyber defense tools.

The Joint Artificial Intelligence Center is partnering with the National Security Agency, U.S. Cyber Command and dozens of Defense Department cybersecurity vendors to standardize data collection across the Pentagon’s sprawling IT ecosystem, according to Lt. Gen. Jack Shanahan, who leads the JAIC.

By creating a consistent process for curating, describing, sharing and storing information, the JAIC intends to create a trove of cyber data that could ultimately be used to train AI to monitor military networks for potential threats, Shanahan said Wednesday at the Billington Cybersecurity Summit.

Tech leaders in government and industry have long touted AI’s ability to monitor networks and detect suspicious behavior. But building those tools requires a lot of consistent training data, Shanahan said, and at least in the Defense Department, that data is hard to come by.
https://www.defenseone.com/technolo...i-powered-cyber-defenses/159650/?oref=d-river
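Why standardised collection matters: once events from different vendors share one numeric schema, a generic anomaly detector can be trained on them. Here is a minimal sketch using scikit-learn's IsolationForest on made-up flow features; the feature names and values are invented for illustration and have nothing to do with the JAIC's actual schema.

# Why a consistent data schema matters: once network events share the same
# numeric features, a generic anomaly detector can be trained on them.
# Features and values below are made up for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# columns: [bytes_sent, bytes_received, duration_s, distinct_ports]
normal_traffic = np.random.default_rng(0).normal(
    loc=[5e4, 2e5, 30, 3], scale=[1e4, 5e4, 10, 1], size=(1000, 4)
)

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

suspicious = np.array([[5e6, 1e3, 2, 40]])   # huge upload, short flow, many ports
print(detector.predict(suspicious))          # -1 = flagged as anomalous
print(detector.predict(normal_traffic[:3]))  # mostly 1 = looks normal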
 
In artificial intelligence circles, we hear a lot about adversarial attacks, especially ones that attempt to “deceive” an AI into believing, or to be more accurate, classifying, something incorrectly. Self-driving cars being fooled into “thinking” stop signs are speed limit signs, pandas being identified as gibbons, or even having your favorite voice assistant be fooled by inaudible acoustic commands—these are examples that populate the narrative around AI deception. One can also point to using AI to manipulate the perceptions and beliefs of a person through “deepfakes” in video, audio, and images. Major AI conferences are more frequently addressing the subject of AI deception too. And yet, much of the literature and work around this topic is about how to fool AI and how we can defend against it through detection mechanisms.
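The "panda classified as a gibbon" example comes from gradient-based perturbation attacks such as the fast gradient sign method (FGSM): nudge every pixel a small step in the direction that increases the model's loss. Below is a minimal PyTorch sketch of the mechanics only, using an untrained toy model, so the prediction flip is not guaranteed here; it simply shows how the perturbation is built.

# Fast gradient sign method (FGSM), the classic way to craft the small
# perturbations that can turn "panda" into "gibbon" for a classifier.
# The model is an untrained toy network; only the mechanics matter.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)   # stand-in for a photo
true_label = torch.tensor([3])

loss = loss_fn(model(image), true_label)
loss.backward()

epsilon = 0.03                                          # perturbation budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("prediction before:", model(image).argmax(dim=1).item())
print("prediction after: ", model(adversarial).argmax(dim=1).item())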

I’d like to draw our attention to a different and more unique problem: Understanding the breadth of what “AI deception” looks like, and what happens when it is not a human’s intent behind a deceptive AI, but instead the AI agent’s own learned behavior. These may seem somewhat far-off concerns, as AI is still relatively narrow in scope and can be rather stupid in some ways. To have some analogue of an “intent” to deceive would be a large step for today’s systems. However, if we are to get ahead of the curve regarding AI deception, we need to have a robust understanding of all the ways AI could deceive. We require some conceptual framework or spectrum of the kinds of deception an AI agent may learn on its own before we can start proposing technological defenses.
 
With AI advancing, fighter procurement places even more weight on being able to destroy enemy aircraft from long range without getting drawn into close combat.

The enemy's AI can do nothing if it has no detections of the aircraft that destroys it.
 