Secret AI programs to find hidden nuclear missiles

The Rocketry Forum


Winston (Lorenzo von Matterhorn), joined Jan 31, 2009
Deep in the Pentagon, a secret AI program to find hidden nuclear missiles
5 Jun 2018

https://www.reuters.com/article/us-...to-find-hidden-nuclear-missiles-idUSKCN1J114J

The effort has gone largely unreported, and the few publicly available details about it are buried under a layer of near impenetrable jargon in the latest Pentagon budget. But U.S. officials familiar with the research told Reuters there are multiple classified programs now under way to explore how to develop AI-driven systems to better protect the United States against a potential nuclear missile strike.

If the research is successful, such computer systems would be able to think for themselves, scouring huge amounts of data, including satellite imagery, with a speed and accuracy beyond the capability of humans, to look for signs of preparations for a missile launch, according to more than half a dozen sources. The sources included U.S. officials, who spoke on condition of anonymity because the research is classified.

Forewarned, the U.S. government would be able to pursue diplomatic options or, in the case of an imminent attack, the military would have more time to try to destroy the missiles before they were launched, or try to intercept them.
 
The system goes online August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th....
 

Or going old school: Colossus' first action is a message warning: "THERE IS ANOTHER SYSTEM"

But then again, I'm not sure how worried we would have to be as Colossus was designed by the same guy who seemed to have the entire Afrika Korps at his disposal and was defeated week after week by 4 guys in 2 jeeps.
 

Now there's a movie I haven't seen for somewhere in the region of 40-45 years. Might be worth revisiting to see how the computers were depicted at the time.
 

I last watched it two or three years ago. It actually holds up pretty well -- but mostly because it is so beautiful to look at. The production design and photography are remarkable.

If I were inclined to allow a "smart speaker" device in my home, I would certainly dress it up to look like the Colossus voice synthesizer module.

my (current) favorite take on this trope

https://xkcd.com/1626/



 
Actually, according to many very bright people warning about it, AI is more likely to take the Colossus view of humanity. Since nukes have prevented major world wars since the last one, and since "basic human (the 5th great ape) nature" hasn't evolved much for many thousands of years, if and when nukes go away, just watch what happens.
 
Might be worth revisiting to see how the computers were depicted at the time.
As usual, with mechanical sound effects where there shouldn't be any - such as when the messages from the pre-vocal Colossus scrolled across its display. I liked the movie so much that I bought the DVD.

Two versions of someone else's work I've converted to 1920x1080 wallpaper from their original, odd size. One uses the Colossus CRT shape:


[attached images: two Colossus wallpapers]
 
Actually, according to many very bright people warning about it, AI is more likely to take the Colossus view of humanity.

I would like the names of those bright people. Opinions of brightness vary. <smile>

Speculating about how an artificial intelligence with unconstrained agency would interact with human intelligences is a little like speculating about the nature of extraterrestrial life.

The idea that the AI will be singular, or that multiple AIs will agree with each other (and, for whatever reason, unite against humanity) seems dubious. More likely (to me) seems a spontaneous AI that doesn't really care about us at all. An AI that does not recognize humans as peers, or threats, or raw material -- but treats us as part of a landscape that includes many other interesting things.
 
I would like the names of those bright people. Opinions of brightness vary. <smile>
Off the top of my head, two guys, last names Hawking and Musk. They had a large conference with other similarly concerned people.

Speculating about how an artificial intelligence with unconstrained agency would interact with human intelligences is a little like speculating about the nature of extraterrestrial life.
And it's foolish not to speculate about ways that highly advanced AI, as may also be possible with highly advanced extraterrestrial life, might come to look at us as mere insects. That's what Musk, Hawking, and others are concerned about and want to implement measures against, and why many, including me, think it's idiotic to intentionally broadcast our presence in an "unknown neighborhood," which is all of space.

More likely (to me) seems a spontaneous AI that doesn't really care about us at all.
And how would that not be dangerous to us if we were entirely reliant upon it, as we are increasingly becoming? It could come to see us as useless consumers of resources.
 
"Biological Intelligence is a Fleeting Phase in the Evolution of the Universe"

https://www.dailygalaxy.com/my_webl...g-phase-in-the-evolution-of-the-universe.html

"I think it very likely – in fact inevitable – that biological intelligence is only a transitory phenomenon, a fleeting phase in the evolution of the universe," Davies writes in The Eerie Silence. "If we ever encounter extraterrestrial intelligence, I believe it is overwhelmingly likely to be post-biological in nature."

In the current search for advanced extraterrestrial life SETI experts say the odds favor detecting alien AI rather than biological life because the time between aliens developing radio technology and artificial intelligence would be brief.

“If we build a machine with the intellectual capability of one human, then within 5 years, its successor is more intelligent than all humanity combined,” says Seth Shostak, SETI chief astronomer. “Once any society invents the technology that could put them in touch with the cosmos, they are at most only a few hundred years away from changing their own paradigm of sentience to artificial intelligence,” he says.

ET machines would be infinitely more intelligent and durable than the biological intelligence that created them. Intelligent machines would be immortal, and would not need to exist in the carbon-friendly “Goldilocks Zones” current SETI searches focus on. An AI could self-direct its own evolution, each "upgrade" would be created with the sum total of its predecessor’s knowledge preloaded.

The Eerie Silence: Renewing Our Search for Alien Intelligence

https://www.amazon.com/dp/B005OHT0WS/?tag=skimlinks_replacement-20
 
Off the top of my head, two guys, last names Hawking and Musk. They had a large conference with other similarly concerned people.

Well. One of those people is inarguably bright.

And it's foolish not to speculate about ways that highly advanced AI, as may also be possible with highly advanced extraterrestrial life, might come to look at us as mere insects.

"Mere insects" is a lovely turn of phrase. The kind of thing you say if you and your henchmen live in an extinct volcano. <g>

The worry, I think, is that AI would itself be insectile -- brutal, single-minded, unsympathetic.

We probably shouldn't let the STEM-lord super-villains currently investing in AI as a way to make money be in charge of the rules for how, when, where, and how many AIs get built. Nor should they be in charge of what the AI is given charge over. But the danger here is not AI. It is the people who want to build them.

So yeah, the people trying to build AI are dangerous and need strong oversight.

But, IIRC, Colossus, Skynet, WOPR... I am probably missing a bunch ... were not intentionally intelligent.

And how would that not be dangerous to us if we were entirely reliant upon it, as we are increasingly becoming? It could come to see us as useless consumers of resources.

I do not accept the premise that a spontaneously emergent AI will necessarily -- or even probably -- care about ANY of the things that we care about. Or, if those interests do intersect ours, that the AI will view this coincidence as competition. The ambitions, desires, intentions, of the AI may simply not coincide with any human concern. In fact, those words may not describe anything meaningful for an AI. Its internal world will be very different from ours. Its perception of time, location, simultaneity, identity, intentionality, etc. will all be different from ours.

In fact, next time your cell phone gets hot for no obvious reason, wonder if it might be some ghostlike AI -- self-assembled and self-propagating -- which is using processor cycles to do something that does not bear for good or ill upon your existence.
 
The worry, I think, is that AI would itself be insectile -- brutal, single-minded, unsympathetic.
i.e. entirely LOGICAL and EFFICIENT.

We probably shouldn't let the STEM-lord super-villains currently investing in AI as way to make money be in charge of the rules for how, when, where, and how many AI. Nor should they be in charge of what the AI is given charge over. But the danger here is not AI. It is the people who want to build them.
Hyperbolic. As those warning about the dangers have pointed out, super-intelligent sentient computers like those described in my post-biological ET post above could most likely find ways around precautions no matter how careful their, by comparison, stupid creators were.

So yeah, the people trying to build AI are dangerous and need strong oversight. But, IIRC, Colossus, Skynet, WOPR... I am probably missing a bunch ... were not intentionally intelligent.
Not intentionally SENTIENT or CONSCIOUS, but since we don't even know how to accurately define the origin of those or even to accurately test for them (the Turing Test is too crude) how can we know how to avoid them?

I do not accept the premise that a spontaneously emergent AI will necessarily -- or even probably -- care about ANY of the things that we care about.
Which takes us back to my previous point: what happens when the AI we depend upon for virtually everything doesn't give a damn about us and looks at us as more of an annoyance and waste of resources than anything else? It wouldn't even need to maliciously try to destroy us; just by ignoring us, the effect would be the equivalent of an EMP attack on a highly technological civilization. Nothing electronic that depended upon the AI would work.

In fact, next time your cell phone gets hot for no obvious reason, wonder if it might be some ghostlike AI -- self-assembled and self-propagating -- which is using processor cycles to do something that does not bear for good or ill upon your existence.
I'm not that paranoid, as the potential AI vulnerability isn't here yet. The overheating would much more likely be due to one of the closest things to biological life humans have created - a virus, worm, or other human-created, self-replicating software. A biological virus inserts its small payload of code (RNA) into a much larger string of code (DNA) to hijack its function. Malware inserts its small payload of code into the operation of a much larger string of code (the OS) to hijack its function... which is why they call them viruses.
 
Even though it could be done with cheap microcontroller hardware these days, no one I know of has built hardware that accurately reproduces this voice. I don't want some PC or Android software that can do it; I want hardware:

https://www.youtube.com/watch?v=5tGmDkk68k4
 
Sounds like an ordinary voice with a ring modulator added to the signal processing.

Some info:
https://www.homebrewaudio.com/doctor-dalek-voice/
https://webaudio.prototyping.bbc.co.uk/ring-modulator/

There are hardware versions that came out as kits back in the late '70s or early '80s, I think. Probably used an XR2206.
Have a look here:
https://www.google.com/search?q=xr2206+ring+modulator
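The effect described in those links is simple to sketch in software: ring modulation is just multiplying the voice signal by a sine carrier, which shifts the voice's energy to sum and difference frequencies. A minimal illustration (the function name and the 30 Hz carrier are illustrative choices, not taken from any specific product):

```python
import numpy as np

def ring_modulate(voice, sample_rate, carrier_hz=30.0):
    """Multiply the input by a sine carrier (the classic Dalek/Colossus trick).

    A tone at f comes out as components at f - carrier_hz and
    f + carrier_hz, which is what gives the metallic character.
    """
    t = np.arange(len(voice)) / sample_rate
    carrier = np.sin(2.0 * np.pi * carrier_hz * t)
    return voice * carrier

# Demo: modulate a 440 Hz test tone; energy moves to 410 and 470 Hz
sr = 48000
t = np.arange(sr) / sr
tone = np.sin(2.0 * np.pi * 440.0 * t)
out = ring_modulate(tone, sr, carrier_hz=30.0)
spectrum = np.abs(np.fft.rfft(out))  # 1 s window, so FFT bin k = k Hz
```

A low carrier frequency in the tens of hertz is the usual starting point for the Dalek-style effect; sweeping it changes the character considerably.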
You are EXACTLY right on the ring modulator. I did some research a while back on what was used for Colossus, found that it was later used for the original Cylons, thought I saved everything to a text file, but now I can't find it.

Every attempt I've heard using cheap hardware (microcontroller) such as in some cheap Chinese-sourced or DIY voice changers doesn't get it right. The extreme raspiness isn't there. It's a shame, because I'm sure it could be done cheaply using a modern microcontroller, considering their computational power and the ADC most have. The DAC could be done either with a filtered PWM output (most microcontrollers also have PWM outputs) or with a cheap external DAC if the PWM method doesn't provide an accurate enough replication of the distortion. (chuckle - accurate replication of distortion)
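The filtered-PWM DAC idea above amounts to mapping each signed audio sample to a duty-cycle value and letting an RC low-pass on the PWM pin recover the analog average. A hypothetical sketch of that mapping (the 8-bit resolution is an assumption, not tied to any particular microcontroller):

```python
def sample_to_duty(sample, resolution=255):
    """Map a signed sample in [-1.0, 1.0] to a PWM duty value in [0, resolution].

    After low-pass filtering, the PWM pin's average voltage is then
    proportional to the original sample.
    """
    s = max(-1.0, min(1.0, sample))  # clamp out-of-range samples
    return int(round((s + 1.0) / 2.0 * resolution))
```

The PWM frequency needs to sit well above the audio band so the filter can strip the switching carrier without dulling the voice.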
 
if the PWM method doesn't provide an accurate enough replication of the distortion.

There's bound to be a feature in a software library somewhere for that. Or perhaps just build a valve amplifier...

You could try a trapezoid waveform instead of a sine to get the raspiness. The clipping generates lots of higher harmonics.
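The trapezoid suggestion can be checked numerically: hard-limiting a sine (which approximates a trapezoid wave) adds odd harmonics that a pure sine lacks, while the waveform's symmetry keeps even harmonics near zero. A quick sketch, with arbitrary frequency and clipping choices:

```python
import numpy as np

sr = 48000
f = 220.0                        # carrier fundamental, arbitrary choice
t = np.arange(sr) / sr           # exactly 1 s, so FFT bin k = k Hz
sine = np.sin(2.0 * np.pi * f * t)
trapezoid = np.clip(3.0 * sine, -1.0, 1.0)  # clipped sine ~ trapezoid wave

def harmonic_level(x, n):
    """Magnitude of the n-th harmonic of f in signal x."""
    return np.abs(np.fft.rfft(x))[int(n * f)]
```

Comparing `harmonic_level` on the two waveforms shows strong 3rd and 5th harmonics in the clipped wave and essentially none in the sine, which is the extra buzz, or raspiness, being suggested here.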
 