Self driving vehicles and infrastructure vulnerability

The Rocketry Forum


Winston
New Self-Driving Shuttle in Las Vegas Crashes Just Hours After Launch
Nov 8, 2017

https://www.thedrive.com/news/15914...-in-las-vegas-crashes-just-hours-after-launch

Multiple humans at fault here - the car's programmers (it didn't back up when it OBVIOUSLY should have) and the truck driver. Imagine how many obvious bugs will be found in vehicle programming, and how many not-so-obvious avenues to hacking will never be found. Also, when the car's choice is between hitting two or more pedestrians or cyclists to avoid an oncoming semi, or plowing into the semi, what choice will it make for YOU?

Considering the already insane level of vulnerability of our national infrastructure to hack attacks, what will it be like if our driverless transportation systems are shut down when people's cars begin to run off the road or into each other or semi trucks due to just SUSPECTED hacks? How long can large cities survive without stuff like food and fuel constantly being trucked in?

Self-driving vehicles without the ability for human-controlled MANUAL driving aren't smart at all, IMO, but that's exactly the route being taken.

The outstanding documentary, "Zero Days" reveals amazing leaks from various NSA personnel (combined into one digitally created female) about OUR cyberwarfare capabilities:

[video=youtube;J50bUcf8gfc]https://www.youtube.com/watch?v=J50bUcf8gfc[/video]

Just the comments on Nitro Zeus begin at 1:45:25. WE are more vulnerable to this sort of thing than any of our adversaries which is undoubtedly why this capability is, like nuclear weapons, held in reserve as a last resort. You can just bet that Iran isn't our only target for such malware, some of it apparently already in place. Now, add the potential ability in the not so distant future for adversaries to shut down OUR entire transportation system.

Nitro Zeus

https://en.wikipedia.org/wiki/Nitro_Zeus

Nitro Zeus is a project name for a well-funded, comprehensive cyber-attack plan created as a mitigation strategy after the Stuxnet malware campaign and its aftermath.[1] Unlike Stuxnet, which was loaded onto a system after the design phase to affect its proper operation, Nitro Zeus's objectives are built into a system during the design phase, unbeknownst to the system's users. This built-in capability allows a more assured and effective cyber attack against the system's users.[2]

The information about its existence was raised during research and interviews carried out by Alex Gibney for his Zero Days documentary film. The proposed long-term, widespread infiltration of major Iranian systems would disrupt and degrade communications, the power grid, and other vital systems as desired by the cyber attackers. This was to be achieved by electronic implants in Iranian computer networks.[3] The project was seen as one pathway among alternatives to full-scale war. There was no requirement for this type of plan after the Iran nuclear deal was signed.


This is not at all new. Here's what we caused to happen in the USSR 35 years ago:

https://www.damninteresting.com/the-farewell-dossier/

Excerpts:

In 1982, operatives from the USSR’s Committee for State Security—known internationally as the KGB—celebrated the procurement of a very elusive bit of Western technology. The Soviets were developing a highly lucrative pipeline to carry natural gas across the expanse of Siberia, but they lacked the software to manage the complex array of pumps, valves, turbines, and storage facilities that the system would require. The United States possessed such software, but the US government had predictably turned down their Cold War opponent’s request to purchase the product.

Never ones to allow the limitations of the law to dictate their actions, the KGB officials inserted an agent to abduct the technology from a Canadian firm. Unbeknownst to the Soviet spies, the software they stole sported a little something extra: a few lines of computer code which had been inserted just for them.

After the US government denied the USSR’s request to buy the software to automate their new trans-Siberian pipeline, a KGB agent was covertly sent to a Canadian company to steal the software. A new batch of Farewell Dossier documents brought these efforts to the attention of the CIA, prompting US agents to tailor a special version of the software for the Soviets, and plant it at the company in question. Delighted at the ease of procuring the program, the Soviets tested their complete pipeline automation system and everything seemed to hum along smoothly. By about the middle of 1982, the pipeline was pumping massive amounts of natural gas across Kazakhstan and Russia to Eastern Europe, bringing in a tidy profit for the USSR government.

Some weeks after going online, in the summer of 1982, the clandestine code in the pipeline control program asserted itself. Disguised as an automated system test, the software instructed a series of valves, turbines, and pumps to increase the pipeline’s pressure far beyond its capacity, putting considerable strain on the line’s many joints and welds over a period of time. One day, somewhere in the cold loneliness of Siberia, the overexerted pipeline finally succumbed to the pressure.

As satellites for the North American Aerospace Defense Command (NORAD) watched from orbit, a massive explosion rocked the Siberian wilderness. The fireball had an estimated destructive power of three kilotons, or about 1/4 the strength of the Hiroshima bomb. Initially NORAD suspected a nuclear test, but there was only silence from the satellites which would have detected the telltale electromagnetic signature. US military officials who were not privy to the Farewell Dossier activities were understandably concerned about the event—one of the largest non-nuclear blasts ever recorded—but the CIA quietly assured them that there was nothing to worry about. It would be fourteen years before the real cause of the event would be revealed.
 
We kind of need to go all-in on self-driving if we're going to do it. Humans are extremely bad at paying attention and making split-second decisions after a long period of not having to do so. That's a big reason for the issues with the Tesla autopilot: they expect people to take over, but the people don't.

As for attacks, everything being connected means everything is vulnerable. Even the very best code will have bugs. How many, and how severe they are, depends on the amount of time and money spent on development. In theory at least, it's possible to write mathematically provable code. In practice, it's hideously expensive and time-consuming for anything more complex than "hello world". And this leaves out intentional backdoors and such. Something like the 1982 pipeline issue could happen, and if it does, it's game over. We've already seen attacks on connected cars. So far, only non-critical systems could be affected. But if you could convince, say, the transmission to switch to reverse at highway speeds, or drift the timing of the spark plugs....
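The transmission scenario can be made concrete. Here is a minimal, hedged sketch of the kind of sanity interlock that would reject a physically implausible drivetrain command regardless of its origin; the function name and the 5 km/h threshold are illustrative assumptions, not any real vehicle's specification:

```python
# Hypothetical sketch: a drivetrain command "interlock" that rejects
# physically implausible requests, whether they come from the driver,
# the autopilot, or a compromised ECU. Names and thresholds are invented
# for illustration only.

def validate_gear_request(current_speed_kmh: float, requested_gear: str) -> bool:
    """Return True only if the requested gear is plausible at this speed."""
    if requested_gear in ("REVERSE", "PARK") and current_speed_kmh > 5.0:
        return False  # never engage reverse or park at road speed
    return True

print(validate_gear_request(110.0, "REVERSE"))  # False: the highway-speed attack
print(validate_gear_request(2.0, "REVERSE"))    # True: normal parking maneuver
```

Such a check is no substitute for secure software, but it limits what even a successful intrusion can physically do.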

Much of our infrastructure, at the moment, is so poorly managed that it's unlikely even an intentional attack could knock out a large area. Even known incidents haven't been able to do much damage. For example, "the power grid" isn't one grid. It's actually a number of regional grids with various interconnects. A successful attack on one doesn't have to affect the others. In my opinion, such systems should be completely air-gapped from the public internet. It's more expensive to have dedicated lines, but you need physical access to connect to them.
 
Multiple humans at fault here - the car's programmers (it didn't back up when it OBVIOUSLY should have)

Really?
Some idiot tractor trailer driver clipped a car that stopped on the road. Could have happened to you, me, or anyone else in any other car, self-driving or not.
It's the responsibility of the driver initiating a turn to complete it safely, not for everyone else to run away and make room for a delivery truck...

Imagine how many obvious bugs are going to be found in vehicle programming and not so obvious avenues to hacking which will not be found.
[...]
Considering the already insane level of vulnerability of our national infrastructure to hack attacks, what will it be like if our driverless transportation systems are shut down when people's cars begin to run off the road or into each other or semi trucks due to just SUSPECTED hacks?

Sorry, but this is classic neo-Luddism.

There will always be bugs, and all sorts of error conditions. All can be caught and handled without too much drama (pull over to the side of the road in the worst case, revert control to the driver, etc.).
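A minimal sketch of that degrade-gracefully idea, with mode names and logic invented purely for illustration (not any vendor's actual design): on a fault, hand control to the driver; if the driver doesn't respond, pull over and stop.

```python
# Purely illustrative fallback chain for an automated vehicle's fault
# handling. All mode names are hypothetical.

def next_mode(mode: str, driver_responded: bool) -> str:
    """Step down to the next-safest mode after an unhandled fault."""
    if mode == "AUTONOMOUS":
        return "HANDOVER_TO_DRIVER"
    if mode == "HANDOVER_TO_DRIVER" and not driver_responded:
        return "PULL_OVER_AND_STOP"
    return mode  # already in the safest reachable state

mode = "AUTONOMOUS"
mode = next_mode(mode, driver_responded=False)  # request driver takeover
mode = next_mode(mode, driver_responded=False)  # no response: stop safely
print(mode)  # PULL_OVER_AND_STOP
```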
Will there be hacking attempts - sure. No more or fewer than those currently underway against your home router (everyone has an internet connection at home), or the servers at work.
All of them can be managed correctly, and are much more likely to be repelled when the routers are configured properly. Again - it comes down to the competence of those managing the in-vehicle and wireless networks. The exception being lazy admins working for 2nd-tier government agencies (GSA, sub-contractors) and 2nd-tier private companies (Equifax, Yahoo).

Wireless vehicle connectivity, various forms of remote control, over-the-air software updates, and vehicle tracking have been in the market for 10-15 years. Virtually every car sold over the past 5 years has it.
All of them could have been disabled/bricked if the security layers were not implemented or configured correctly. Starting ~15 years ago. The only case you know about is the one Sprint + Chrysler @#$%-ed up (they never should have allowed peer-to-peer vehicle IP traffic). The rest of the vehicles have been rolling down the roads safely for a decade plus.

Partial autonomous driving is here, now.
Level 2 autonomous driving features are a standard (usually no-cost) option on all modern cars.
Tesla has level 3.
Everyone is testing level 4-5 mules right now. Many will go on sale in a year or two.

30,000+ people die in car crashes in the US, every year.
Few notice, other than those directly affected.
1 person dies in a Tesla crash = freak out.

The human brain has a well-documented tendency to over-estimate the threat of low-probability events.
Some more info here:
https://www.schneier.com/blog/archives/2007/05/rare_risk_and_o_1.html
https://globalriskinsights.com/2015...ct-events-why-are-people-afraid-of-terrorism/
https://www.wired.com/2007/03/security-matters0322/




 
And they still put the lateral-acceleration sensors for dynamic stability and traction control systems, which manned vehicles mostly have no override for, in the bottom-most part of most vehicle frames, with no communication between the mechanical engineers and the electrical engineers on frame design about, say, putting drain holes for water near the most sensitive electronics. The wires weren't even waterproof on an '07 Volvo S40 T5...

And over decades you get faults in what was once a reliable, robust electronic system. I had to pull wet carpet out and dry an entire car interior. Some people reported higher-end Euro cars doing things like: the stability system got wet, and the car lurched into the median on the interstate. Uncommanded input from a traction system meant to prevent rollover ends up hurting people. And you want a human-designed electronic circuit to control your direction.

Airbus flight control laws on manned aircraft can't even tell the difference between a mechanical pitot tube blockage and an overspeed condition; the pressure reads the same in both situations, and the electronics dorks couldn't program around a physical, mechanical limitation. The system is programmed to pull up and cut throttle on overspeed, with zero manual override the pilots could use in case the computer goofed up, because of one arrogant designer who never had any real flight experience as a pilot. He thought pilots would overspeed and rip the wings off, so let the computer auto-correct. The bad idea: a clogged pitot tube reads as overspeed while the plane is climbing and the sensors are giving a false reading, when in reality it is slowing down. Meanwhile the computer stalls the aircraft with its hard-coded overspeed correction: elevator up, throttle to idle. Spin recovery tactics for light airplanes don't work for heavy airliners with low power reserves and high sweep angles. The NTSB loves to blame the dead pilots. How sad.

And numbskulls want an automated car. They can't yet design safe systems in manned vehicles.
A professor of automotive engineering had a story of dodging a tumbling ladder with a car as the load fell off a truck. He doesn't think any computer-controlled car could have reacted to that scenario. /Rant Automated Car.
 
I trust the engineering far more than I trust the average driver. Will there be issues, sure. But it doesn't have to be perfect to be better than most of the drivers most of the time.

What will be -very- interesting is when motorheads rolling coal realize they can be even more jerkish than usual due to the cautious safety margins. I expect that's mostly a self-solving problem on a long enough timescale.

I look forward to regaling my grandkids with horror stories from the Bad Old Days of 10-car pileups and not even being able to sleep on overnight interstate drives.
 
There will always be bugs, and all sorts of error conditions. All can be caught and handled without too much drama (pull over to the side of the road in the worst case, revert control to the driver, etc.).

What happens when the bug is neither known nor testable? Then a human being loses a life over it? This risk is unacceptable. That is why people freaked out about the Tesla crash. The car did not pull the f--- over like your sarcasm suggests. At its mounting height, the sensor did not detect that a SEMI truck was in the vehicle's path, due to the physics and reflection patterns of white shiny surfaces and real-world conditions that aren't testable in a lab. And a REAL person died. But you act as reckless and happy-doo-dah as the IEEE conference on automated cars I attended. All I remember was the AT&T CEO repeating that she made $700 billion a year. Those cold people don't care about your personal safety. They want to sell automated car systems at any cost to human life, as long as they get richer with Bosch.
 
Unmanned aerial drones are easier to design for, with miles of spacing. In a car it's literally six inches from crossing a double yellow line, in all sorts of weather and road conditions, with all sorts of traffic skill levels.
 
I trust the engineering far more than I trust the average driver.

What I don't trust is that the mechanical engineers sign off all of the mechanical systems with a licensed PE, while the electrical engineers rarely sign off their documents, due to "glitches" that can happen. And the PE is the very experienced dork who puts his or her name on the line, to be fired when the design has a fault if anyone gets seriously hurt by the bad design.
I really enjoyed circuits in college, and considered changing majors to EE, but... I had an EE prof laugh when the projector glitched to a blue screen once, randomly, and he said that's why he wasn't in "industry" with electronics, that he preferred teaching it. There's a chance an electronic system will glitch even with good design.
 
afadeev said:
There will always be bugs, and all sorts of error conditions. All can be caught and handled without too much drama (pull over to the side of the road in the worst case, revert control to the driver, etc.).

What happens when the bug is neither known nor testable? Then a human being loses a life over it? This risk is unacceptable.

~30+K Americans lose their lives behind the wheel every year.
That risk is known, and perfectly acceptable to everyone who drives, which is 90+% of the eligible population in the US.

Please spare us "lives are irreplaceable", and "one death is too many" fluff.
That's BS.

Life has value, but it's not infinite.
Here is a link to actuarial values, per country. In the US, it's between $5-9Mil, as per the "dialysis standard":
https://www.livescience.com/15855-dollar-human-life.html
https://en.wikipedia.org/wiki/Value_of_life
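A quick back-of-envelope check using those two figures from above (~30,000 deaths per year, $5-9 million per statistical life); the arithmetic is illustrative only:

```python
# Aggregate annual cost of US road deaths implied by the figures above.
deaths_per_year = 30_000
vsl_low, vsl_high = 5e6, 9e6   # value of statistical life, low/high estimates

cost_low = deaths_per_year * vsl_low    # $150 billion
cost_high = deaths_per_year * vsl_high  # $270 billion
print(f"${cost_low / 1e9:.0f}B - ${cost_high / 1e9:.0f}B per year")
```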

The car did not pull the f--- over like your sarcasm suggests. At its mounting height, the sensor did not detect that a SEMI truck was in the vehicle's path, due to the physics and reflection patterns of white shiny surfaces and real-world conditions not testable in a lab. And a REAL person died.

No sarcasm intended.

Yes, a real person who was not paying attention to the road, and may have been watching a DVD in his Model S, died.
There is a reason Tesla forces you to keep a hand on the wheel: to acknowledge ultimate responsibility for control of your vehicle. If you fail to exercise that control, you increase your risk of dying. And that's OK, as long as people are free to make that choice.

Similar to what many do these days by crossing the road with their eyes glued to their cell phone screens.
Are they going to get run over frequently? No.
Will it happen to some? Absolutely. Free country - their choice.

Those cold people don't care about your personal safety. They want to sell automated car systems at any cost to human life as long as they get richer with Bosch.

True/True/So what.

You are right, they don't give a flying @#$% about your, or my, personal safety. That's not what a business does, beyond a basic legal obligation. They do care about making more money/getting richer, for the benefit of their shareholders, present company included.

Actually no-one really truly cares about you, your safety, or your well-being, other than people who love you.
Hopefully, at least them.
Thinking otherwise is delusional.

Certainly not car companies.
Car companies' products kill 30+K Americans every year.
Does anyone here want to ban those killer bullets on wheels?
:eyeroll:

If not, let's please stop freaking out about it!


There's a chance an electronic system will glitch even with good design.

Absolutely. Nothing is fool-proof.
Alas, the same holds equally true for human-piloted cars that rely on electric steering, electronic throttle, brake-by-wire, etc. - all features of modern cars!


 
Not sure who really wants this. I enjoy driving. I can't imagine a society that willingly gives up every last bit of control they have over their daily lives.....oh wait.....never mind.
 
Actually one of the tougher problems will be integration between human and non-human drivers.

I wish I could find the sci-fi story where in the future ALL vehicles are robot-controlled by "brains" that are actually interactive and have personality. Traffic runs MUCH smoother since all traffic is integrated/coordinated, and thus can run safely at much higher speeds under computer control. Some nutcase human decides he wants to drive his car himself, rips out the car brain (given the cars have personality, this is considered close to "murder") and drives recklessly, causing multiple injuries. It was a well-thought-out story, probably written in the 60s or early 70s.

I think I would feel a lot more comfortable in a robotic system where ALL vehicles were robotically controlled (and coordinated) than a mish-mash of human and robotic drivers.
 
Not sure who really wants this. I enjoy driving. I can't imagine a society that willingly gives up every last bit of control they have over their daily lives.....oh wait.....never mind.
I don't particularly enjoy driving any more than I do mowing the lawn or taking inventory. I want freedom -from- these tasks, which is where my robot buddies hopefully come in.
 
I for one hope it fails, and fails big. There are far more important issues society faces that need attention, rather than a problem of convenience. I will never understand the rush to needless technology; I don't like change. Ultimately, if this mess works out, someone else will have the say over where, when, and if you can go somewhere, and that is deeply troubling.
 
I for one hope it fails, and fails big. There are far more important issues society faces that need attention, rather than a problem of convenience. I will never understand the rush to needless technology; I don't like change. Ultimately, if this mess works out, someone else will have the say over where, when, and if you can go somewhere, and that is deeply troubling.

X-1000...
 
I for one hope it fails, and fails big. There are far more important issues society faces that need attention, rather than a problem of convenience. I will never understand the rush to needless technology; I don't like change. Ultimately, if this mess works out, someone else will have the say over where, when, and if you can go somewhere, and that is deeply troubling.
X-1000...
I love my refrigerator and the Internet for the same reasons I dislike yardwork. To each their own, c'est la vie!
 
I love my refrigerator and the Internet for the same reasons I dislike yardwork. To each their own, c'est la vie!

It's not a matter of c'est la vie. If this crap happens, it changes everyone's lives. You can choose to have a fridge and internet, but you can't avoid this if it's pushed on society. Whatever happened to "follow the money"? Who benefits from this? The same people who benefit from government-funded green tech that's not economically viable on its own.
 
Really?
Some idiot tractor trailer driver clipped a car that stopped on the road. Could have happened to you, me, or anyone else in any other car, self-driving or not.
It's the responsibility of the driver initiating a turn to complete it safely, not for everyone else to run away and make room for a delivery truck...
Seems to me I included the truck driver but your quote of what I said did not include that.

Sorry, but this is the classical neo-Luddism.
Nope, I was simply predicting an easily possible future if current, obvious trends continue... and they will. You can read about them daily if you follow such things. Only a very small percentage ever reaches the mainstream news.

On self-driving cars being safer than the many idiots who currently kill themselves and others - absolutely! I wasn't commenting about that. I was commenting about the extreme vulnerability of such a system to catastrophic hacking if care isn't taken, and, as one can read about daily, care isn't being taken even now. NOT even close. As more things become interconnected, the vulnerability will only increase.

For instance, just a few of so many on this topic, these being from the Internet of Things Institute itself:

Could IoT Hacks Lead to a Planet of the Apes Scenario?
What’s the worst thing that could happen with IoT security? The U.S. Cyber Defense Advisor to NATO fears an IoT-induced Armageddon.
Brian Buntz | Mar 30, 2017

https://www.ioti.com/security/could-iot-hacks-lead-planet-apes-scenario

This one unfortunately requires registration to view:

Armageddon! How a Cyber Breach Can Disable a City
What happens when a breach shuts down a PSAP, disables traffic lights, and/or crashes a utility? Public safety agencies and city managers must be aware of vulnerabilities in their infrastructure. Learn about potential attack points, and how to be prepared.

https://education.ioti.com/courses/armageddon-how-a-cyber-breach-can-disable-a-city
 
Reads like a short spy novel, and is one example of many showing that for anything connected to the Internet, and even for many things that aren't, "security" is an illusion. It's an example of why, if great care isn't taken... and so far it ISN'T... we are setting ourselves up to be extremely vulnerable:

THE PIECE OF CODE THAT STOLE THE WORLD’S PASSWORDS
He Perfected a Password-Hacking Tool - Then the Russians Came Calling (along with everyone else)
9 Nov 2017

https://www.wired.com/story/how-mimikatz-became-go-to-hacker-tool/

An ANCIENT example of a small fraction of our current vulnerability extracted from James Burke's 1978 TV series "Connections":

[video=youtube;lKELMR6wACw]https://www.youtube.com/watch?v=lKELMR6wACw[/video]
 
Some thoughts on self-driving cars, and then about machine learning and neural nets. Spoiler: I'm "all in" with self-driving.

1. In general, people are not very good drivers. Some are awful, some are dangerous, and some are deadly. Sure, you're a good driver; the guy who plows into you is not.
2. We all lose our mental and visual faculties eventually, but we all want to keep our independence. I'm NOT going to have that moment when the kids take my keys away. You?
3. People have a limited set of sensors that they don't always point in the right direction, or use diligently. Machines don't get bored, or drunk. They can look in 4 directions at the same time.
4. Machines know where they are even in the dark and rain. They don't miss a turn and panic and cut across three lanes of traffic. I know you've seen that. Or done that.
5. Machines can react in ms. People react in 0.7-never seconds.
6. Machines can see the car in front of the car in front of you (bouncing radar)
7. Machines can be fed traffic and road condition updates in real-time; they can always know the speed limit (some regular cars know this already)
8. When I'm in my car, I'd rather be talking to family or catching up on email, or finding a restaurant, or watching the scenery, or a movie
9. I'll be candid: I enjoy socializing and drinking with friends and don't want to drive or have to rely on waiting for a taxi/Uber/Lyft to get home.
10. Neural nets are not "programming," in the traditional sense. So if it's set in your mind that all software has bugs and you'd never trust it, this is different.
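Point 5 can be put in concrete terms with a quick calculation. The 0.05 s machine latency below is an assumed figure for illustration, not measured data:

```python
# Distance covered during the reaction delay at 70 mph, before any
# braking begins. Machine latency is an illustrative assumption.
speed_ms = 70 * 0.44704      # 70 mph ~= 31.3 m/s
human_reaction_s = 0.7
machine_reaction_s = 0.05

print(round(speed_ms * human_reaction_s, 1))    # ~21.9 m for the human
print(round(speed_ms * machine_reaction_s, 1))  # ~1.6 m for the machine
```

At highway speed, that reaction gap alone is several car lengths.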

Machine learning and neural nets can already do spooky things amazingly, astoundingly better than people. Like beating a roomful of experts simultaneously at Go, a game that even 5 years ago no one thought a computer could EVER beat a champion at (there are, and I may get this slightly wrong, more Go game positions than atoms in the universe). Now it's doubtful that a human champion will EVER again beat the latest neural net, and the champion has admitted it. He commented that the neural net began doing fascinatingly creative things that blew his mind. You could never write a program in the traditional programming sense to do this. But a machine learned how to; you should understand the fundamental difference to appreciate this fully.

If you thought the internet disrupted things, wait until you see what neural nets will do in our lifetimes. This will be even more disruptive, because as they finally do what only educated humans have done in the past, they will be better by orders of magnitude. Silicon Valley board meetings are dominated right now by talk of neural nets and machine learning, and Facebook, Google, Tesla, Amazon, Apple and others are "all in." The company that assembles the circuit boards for my company has sales people focused on a myriad of companies who are focused on the first popular consumer area: self-driving.

It's fine to be dubious now about what a neural net can (ever) do. But those who study them are already way beyond doubting they will drive cars. That's one of the more straightforward things they'll do.

They will do things in seconds that take people years, if ever, to do. A neural net will watch every movie ever made in less than a month, as well as everything written about them. It will then make a movie, in partnership with a neural net that has been trained to critique movies amazingly well, that will be the best movie you see that year. There will be no paid actors; they will all be generated. None of them will be standing in Iceland waiting for the light to get right, and then interrupting filming with a stint in rehab. As a result, movies won't cost $200MM to make and take 10 years (I'm looking at you, James Cameron). In fact, those actors can act in 100 movies at the same time. For a while they will have mannerisms or accents or gestures that remind you of famous actors, because the neural net knows that people like them. Your favorite neural-net-designed actor will have a fascinating (fake) back-story. And he will never be involved in a public scandal. Unless that would be a good thing.

Each movie will come out in every language immediately, complete with local idioms, landmarks, and culturally appropriate actors and references. There will be partnerships between companies with movie-generating neural nets and local companies with very good movie-critiquing neural nets. If you live in Mozambique, you will know that if "Studio Spielberg NN" and "What Mozambique Likes" collaborate on a movie it will be perfect, because SSNN is known to make fascinating movies, and you trust WML to make sure they are perfect for your local tastes. The two neural nets will go back and forth thousands of times silently on a server, making it better and better until WML gives a thumbs up, and what results is awesome (for you in Mozambique, if nowhere else). One side effect is that if two people from different places in the world compare notes on a film, it is unlikely that any of the actors or even all of the plot points will match.
Instead, people will say things like, "I heard Space Story is a hit almost everywhere; you should try your version. We loved ours!"

If you have a favorite TV show, it will never get cancelled, and it will never end. Every time you tune back in, it continues where it left off. It can keep going forever, assuming you never lose interest. But why would you? You could always ask for changes. At first, this may be a little expensive if the show is only watched by you. Eventually, almost everyone can have their own version of any show for the same price. You can even ask for a redo, where a certain major plot element goes another way. That will be a fun activity in itself, because it's more interactive. "Take me back to Lost when the plane crashes, but have the Smoke Monster eat half the people immediately. Make it so that the Smoke Monster can never come within 5 feet of Jack for some mysterious reason. Go." Or you can (I'm not saying you or I would do this, wink) have two actors get it on whenever you want. With whatever detail you request. Hey, to each his own, I'm not judging.
 
The intersection of neural networks, quantum computing, and augmented reality is going to make the Internet look like the telegraph.

I, for one, cannot wait.
 
Recent headline, e.g.: "Uber's Driverless Cars Were Running Red Lights And Terrorizing Cyclists"

When a self-driving car causes a fatality, who is responsible ?

A. The owner of the car

B. The company who wrote the faulty software

C. The locality with the faulty RF link infrastructure

D. The testing facility that approved the released version of code that still had unknown SW bugs resident in the executable

E. The authority having jurisdiction which approved the vehicle to operate 'driver-less' on public roads

F.

G.


All motorized vehicles of any kind eventually cause fatalities. Any vehicle you can name has been involved or instrumental in human death(s).

Knowing our litigious culture (and lack of personal responsibility), someone or some organization is going to be held responsible. Just wondering who it is going to be. The tort attorneys (read: $$$$) must be very interested in these questions also.

It is just a matter of time.
 
Some thoughts on self-driving cars, and then about machine learning and neural nets. Spoiler: I'm "all in" with self-driving.

1. In general, people are not very good drivers. Some are awful, some are dangerous, and some are deadly. Sure, you're a good driver; the guy who plows into you is not.
2. We all lose our mental and visual faculties eventually, but we all want to keep our independence. I'm NOT going to have that moment when the kids take my keys away. You?
3. People have a limited set of sensors that they don't always point in the right direction, or use diligently. Machines don't get bored, or drunk. They can look in 4 directions at the same time.
4. Machines know where they are even in the dark and rain. They don't miss a turn and panic and cut across three lanes of traffic. I know you've seen that. Or done that.
5. Machines can react in ms. People react in 0.7-never seconds.
6. Machines can see the car in front of the car in front of you (bouncing radar)
7. Machines can be fed traffic and road condition updates in real-time; they can always know the speed limit (some regular cars know this already)
8. When I'm in my car, I'd rather be talking to family or catching up on email, or finding a restaurant, or watching the scenery, or a movie
9. I'll be candid: I enjoy socializing and drinking with friends and don't want to drive or have to rely on waiting for a taxi/Uber/Lyft to get home.
10. Neural nets are not "programming," in the traditional sense. So if it's set in your mind that all software has bugs and you'd never trust it, this is different.
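Point 5 above can be made concrete with some back-of-the-envelope arithmetic. A rough sketch (all figures are illustrative assumptions: roughly 1.5 s for a distracted human's reaction time, 0.05 s for a machine's, and a flat 7 m/s² braking deceleration):

```python
# Rough stopping-distance comparison: human vs. automated reaction time.
# All figures here are illustrative assumptions, not measured data.

def stopping_distance(speed_ms, reaction_s, decel_ms2=7.0):
    """Reaction distance (v * t) plus braking distance (v^2 / 2a), in meters."""
    reaction_dist = speed_ms * reaction_s
    braking_dist = speed_ms ** 2 / (2 * decel_ms2)
    return reaction_dist + braking_dist

speed = 30.0  # m/s, about 67 mph

human = stopping_distance(speed, reaction_s=1.5)    # distracted human
machine = stopping_distance(speed, reaction_s=0.05)  # automated system

print(f"human:      {human:.1f} m")      # -> human:      109.3 m
print(f"machine:    {machine:.1f} m")    # -> machine:    65.8 m
print(f"difference: {human - machine:.1f} m")  # -> difference: 43.5 m
```

Under these assumed numbers, the reaction-time gap alone is worth several car lengths of stopping distance at highway speed.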
Self-driving cars will be great... until they are hacked. That's my point. We are rushing headlong into total automation without, as usual, an organized effort related to secure design.

Just as Snowden described our current national intelligence infrastructure as a turnkey tyranny that depends entirely upon the "good intentions" of those who run it, we are setting up a potential physical tyranny that could be activated by governments or by hackers, foreign or domestic, acting alone or in groups: some brilliant kid in his parents' basement in Ukraine could significantly harm this country with a few keystrokes from thousands of miles away.

A cashless society, even one based upon "highly secure" biometric data? What happens when your biometric data (facial analysis, fingerprints, DNA) is stolen and you can't buy anything, hire a ride anywhere, or go anywhere without being tracked? We are setting ourselves up for both individual and national technology-trap dystopias, ones that can be enabled by governments or by hackers acting as individuals or groups. Every effort should be made to make that as difficult and unlikely as possible, but it isn't being made, not even slightly.

A recent study has shown that even the blockchain methods used for cryptocurrencies will be highly vulnerable to future quantum computers, as will every form of automated encryption currently in use. Imagine, then, what could happen when virtually everything is networked together.
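The quantum point can be illustrated with standard rules of thumb (a toy sketch, not a security analysis; the assumptions are that Grover's algorithm roughly halves a symmetric cipher's effective bit strength, while Shor's algorithm breaks RSA and elliptic-curve schemes outright):

```python
# Toy illustration of how a large-scale quantum computer is expected to
# affect today's common cryptographic primitives. Rules of thumb only.

# (name, approximate classical security bits, approximate post-quantum bits)
primitives = [
    ("AES-128 (symmetric)",      128, 64),   # Grover: strength ~halved
    ("AES-256 (symmetric)",      256, 128),  # Grover: strength ~halved
    ("RSA-2048 (public key)",    112, 0),    # Shor: broken
    ("ECDSA P-256 (signatures)", 128, 0),    # Shor: broken; this family
                                             # underpins Bitcoin-style blockchains
]

for name, classical, quantum in primitives:
    status = "broken" if quantum == 0 else f"~{quantum}-bit"
    print(f"{name:28s} classical ~{classical}-bit -> quantum {status}")
```

The signature schemes that blockchains depend on fall into the "broken" column, which is what makes the cryptocurrency claim plausible.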

We're being collectively stupid on this issue, the same sort of stupidity and bureaucratic screwups and inaction that allowed 9/11 to take place even though far more than enough clues were available to prevent it. The eventual result in this case could easily be far worse than 9/11.
 
What ethics do we program into self-driving cars? Mercedes has already stated that it will program its vehicles to prioritise the safety of the occupants of the vehicle, to the possible detriment of anyone else involved in the collision.
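The Mercedes position amounts to a hard-coded priority rule. A deliberately oversimplified sketch of what such a policy reduces to (every identifier and number here is hypothetical, invented for illustration; no vendor publishes logic like this):

```python
# Hypothetical, oversimplified "collision ethics" policy sketch.
# All names and harm scores are invented for illustration only.

def choose_maneuver(options):
    """Pick the option with the lowest estimated occupant harm,
    breaking ties by harm to others -- i.e. occupants come first."""
    return min(options, key=lambda o: (o["occupant_harm"], o["other_harm"]))

options = [
    {"name": "swerve into barrier", "occupant_harm": 0.9, "other_harm": 0.0},
    {"name": "brake straight",      "occupant_harm": 0.2, "other_harm": 0.7},
]

print(choose_maneuver(options)["name"])  # -> brake straight
```

The entire ethical debate is buried in that one `key=` line: reorder the tuple and the car sacrifices its occupants instead.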
 
Not sure who really wants this. I enjoy driving. I can't imagine a society that willingly gives up every last bit of control over its daily life.....oh wait.....never mind.

I love driving as well.
I've been hitting the track in my cars for 2-5 DE weekends/year, for the last 20+ years.

Having said that, I hate creeping forward in stop-and-go traffic. Or riding in the peloton at exactly the posted speed limit, with no way to pass the drones at the wheel who honk and flash their high-beams at you when you finally do get past them.
No fun in that at all. Not one bit.

THAT, I will gladly automate, as long as I can turn ALL nannies OFF whenever I want to!


It's not a matter of c'est la vie. If this crap happens, it changes everyone's lives. You can choose to have a fridge and internet, but you can't avoid this if it's pushed on society.

No-one talks about pushing anything on anyone.
Semi-autonomous driving is a (premium) feature.
As long as you can (a) afford it; (b) turn it on/off as desired, it's all good!

Free choice is good.
Don't assume that someone will "gift" it to you or "force" you to use it.
No-one is forcing you to use the internet, or electricity, or gasoline, or indoor plumbing, etc, etc.


Nope, I was simply predicting the easily possible future if current, obvious trends continue... and they will. You can read about them daily if you follow such things. Only a very small percentage ever reach the mainstream news.
[...]
Could IoT Hacks Lead to a Planet of the Apes Scenario?
Armageddon! How a Cyber Breach Can Disable a City

Full-on tinhat mode!

 
What ethics do we program into self-driving cars? Mercedes has already stated that it will program its vehicles to prioritise the safety of the occupants of the vehicle, to the possible detriment of anyone else involved in the collision.
I touched on that in my original post: "...when the car's choice is to hit two or more pedestrians or cyclists by avoiding an oncoming semi or plow you into the semi, what choice will it make for YOU?" Imagine the popularity of vehicles that make the latter choice: "NOTE: this vehicle will choose to sacrifice YOU without your consent." Of course, once everything is automated, the likelihood that that choice will ever need to be made is greatly reduced; the primary problem will arise during the transition to 100% automation, while there are still human drivers on the road. The threat of hacking will always be present, which is the main point of my posts. A totally networked and automated society is an incredibly vulnerable one. We're not even adequately preparing for the effects of a major solar-flare CME. Why? The same reason as always: it costs money and doesn't make money.
 