Internet Of Dangerous And Not-Ready-For-Prime-Time Things

I’m not really looking forward to the “Internet of Things”.

Partly it’s because of the nightmarish security risks – which we’re already seeing.

Partly I think it’s because it promotes a form of “connectedness” that isn’t very connected.

But mostly it’s because, working in the software industry as I do, I know that software doesn’t just work. The more complex it is, the longer it takes to debug, and the more byzantine the errors.

In fact, the recent Ethiopian Airlines and Lion Air crashes have reinforced my desire to fly only in planes controlled by hydraulics and, if possible, mechanical cables.

Because the problems aren’t even especially new.

13 thoughts on “Internet Of Dangerous And Not-Ready-For-Prime-Time Things”

  1. It’s not only big things like airliners crashing that make modern life annoying, it’s also the accumulation of little things.

    The office switched to cloud-based computing but I still need to log into my workstation with an at-least-10-character code that can’t repeat or contain words, must contain upper and lower case and special characters, must be changed every four weeks, and must be re-entered if I don’t keep touching the keyboard every 10 minutes. Why? To prevent people from stealing our data. Which is in the cloud. Hello, ever heard of “hackers?” They don’t really bypass the electronic key card at the door, sneak into bureaucrats’ offices with thumb drives, clatter a few strokes on the keyboard and announce “I’m in.” Not in real life. That’s Mission Impossible. Nobody would do that here. You people in IT are annoying me for no reason.

    Oh, and just so you know, my password is complicated enough that I can’t remember it and had to write it on a piece of paper under my keyboard: Fdsajkl;9*(

    Don’t worry, it’ll get changed again in two days. I’ll write the new one on a Post-It for you.

  2. You are correct, Mitch.

    There has been a lot of speculation that China has stolen enough of our technology to attack the onboard systems of our combat aircraft and other weapons systems with viruses or through bugs. In fact, there has been some internet chatter about that being a factor in recent fighter crashes.

  3. It doesn’t take much either; introduce an out-of-spec (and untested) value to the right variable and you can initiate unrecoverable cascading failures – it’s fairly common when you graft older (proven?) code into newer systems.
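    A minimal sketch of that failure mode (hypothetical function, gain, and limits – not any real control law): legacy code validated only for a narrow input range gets grafted onto a newer system that can feed it values the old system could never produce.

```python
def trim_command(aoa_degrees):
    """Legacy routine, validated only for AoA readings in [-30.0, 30.0].
    No range check, because the old sensor bus could never exceed spec."""
    return 0.27 * aoa_degrees  # hypothetical trim-units-per-degree gain

def trim_command_checked(aoa_degrees):
    """Defensive version: reject anything the code was never tested against."""
    if not -30.0 <= aoa_degrees <= 30.0:
        raise ValueError(f"AoA {aoa_degrees} outside validated range")
    return 0.27 * aoa_degrees

# Grafted onto newer hardware that CAN emit out-of-spec readings:
bad_reading = 74.5                 # physically implausible angle of attack
print(trim_command(bad_reading))   # silently produces a huge trim command
```

    The unchecked version hands the bad value to everything downstream; the checked version fails loudly at the boundary, which is the easiest place to stop a cascade.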

  4. Mitch, the least you can do is invoke Weinberg’s Law! (“If builders built buildings the way computer programmers write programs, the first woodpecker that came along would have destroyed all civilization.”) Although I believe Borenstein is closer to the truth: “The most likely way for the world to be destroyed, most experts agree, is by accident. That’s where we come in; we’re computer professionals. We cause accidents.”

    People think hardware is hard. It is, but good software is essentially impossible. You don’t believe me? Look at Boeing in this case. They had every incentive possible: people would die with bad software, and a software failure could bankrupt or destroy the company and the programmers’ careers. But still, Boeing let the bean counters tell them to do the software on the cheap and to make marketing happy. ONE sensor that even Boeing knew failed often; software that could drive the plane into the ground and wouldn’t give up control, no matter how many times it was reset, unless it was manually disconnected; and a disconnect that wasn’t documented for the pilots … seriously?! And all so they could claim that no retraining would be required?! Boeing is f*cked if half of what has been reported is true, and almost everyone in this decision chain should be sacked and charged with reckless homicide at a minimum.

    Fundamentally, there is a HUGE difference between the incentives in hardware vs. software. Hardware goes out after huge, capital-intensive investment, and in most cases it can’t be replaced economically after it’s out there, so it goes out as close to 100% as it can be made. Software doesn’t take anywhere near the capital, tends to employ mercenary programmers who shift companies often and tend not to understand the underlying hardware as well, and is released “when it’s good enough” at 80%. We once had an IBM manager who came from the software group tell us to release a chip when test coverage hit 80%, saying we’d fix it later. We had to explain that you can’t release a hardware “patch”: you’re stuck with the hardware you ship, and recalling products is punitively expensive.

  5. This is one reason I love my 1997 GMC with manual transmission. Less stuff for the software guys to mess with. Telling, by the way, that the guys least amenable to letting software and firmware run things are the guys who develop it for a living.

  6. Mac, you have pointed out why one of the biggest dangers to the U.S. is the vulnerability of our power grid: aging SCADA systems and outdated software. Several nations, hostile or potentially hostile to the U.S., have penetrated our grid enough times to scare any IT security person.

  7. I got a client phone call while driving my 2017 Hyundai Sonata. It was a complex issue so I pulled over to complete the call. When finished, I turned the key to start the car – nothing. It was already running but the starter didn’t grind gears, didn’t engage at all. Because the key is not connected to the starter, it’s connected to the computer, which is smart enough to know the car’s already running so it doesn’t need starting.

    What else isn’t connected anymore? My brakes pump even though I hold the pedal down – computer. My transmission won’t let me jam it into Reverse when I’m rolling forward – computer. The turning resistance of my steering wheel varies depending on whether I select “Economy” or “Sport” driving mode – computer.

    Those people who claim their car suddenly took off and they couldn’t stop it? I’m beginning to suspect they might be right. If my car’s computer suddenly decided to take me for a ride, my inputs would not force it to power down, steer, disengage or brake. I’d simply be along for the ride.

    I know the seat belt lock is mechanical. What deploys the air bags?

  8. I’ve often heard that, with the level of automation in today’s cockpits, the pilot doesn’t fly the plane – the computer flies the plane, and the pilot flies the computer. That said, as details emerge, what stands out is the reliance on software as a band-aid for a myriad of issues. This Twitter thread does an excellent job of mapping out how we got to two 737 Max 8 crashes and the loss of life. I also work in software, on the testing side of things. As such, I’m reluctant to defend developers, who can be some of the biggest prima donnas I’ve ever encountered – tell them there’s an issue with their software and they sometimes react like you poisoned their firstborn and cackled maniacally about it. But in this case, the software appears to have performed per the inputs given to it, though the input was likely faulty. There was a manual override, and the pilots were flying in VFR.

    It’d be understandable if the pilots were in IFR and had no outside references to tell them not to trust their instrumentation: JFK Jr. put his airplane into a graveyard spiral because he trusted his inner ear over the instruments. Mid-air collisions and near-misses have occurred because one set of pilots followed the advice of an air traffic controller staring at a 2D scope instead of the TCAS. I don’t envy the pilot who, after being told to “trust your instruments,” must now make a life-and-death choice as to whether to trust what they’re telling him/her.

    There’s another accident that wasn’t listed in the Popular Mechanics article: an Airbus A320 that crashed into the Mediterranean after its own brand of stall-prevention software ignored one AOA sensor with the right value in favor of two AOA sensors showing bad values. The AOA sensors were malfunctioning because the fuselage had been washed without protecting the sensors, and the water that got in froze at altitude. The flight was a test flight and was mercifully carrying only 7 people at the time of the crash.
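    A toy illustration of why that 2-against-1 logic fails (made-up tolerance and readings, not Airbus’s actual voter): when two channels fail the same way, simple agreement-based voting discards the one healthy channel.

```python
def select_aoa(s1, s2, s3, tolerance=2.0):
    """Toy 2-of-3 voter: if two channels agree within tolerance,
    the third is declared faulty and its reading is discarded."""
    readings = [s1, s2, s3]
    for i in range(3):
        others = [readings[j] for j in range(3) if j != i]
        if abs(others[0] - others[1]) <= tolerance:
            return sum(others) / 2.0   # average of the "agreeing" pair
    return sorted(readings)[1]         # no pair agrees: fall back to median

# Two sensors frozen at the same wrong value outvote the healthy one:
frozen, healthy = 4.0, 19.5            # degrees, illustrative numbers only
print(select_aoa(frozen, frozen, healthy))   # 4.0 -- the wrong value wins
```

    The voter is behaving exactly as designed; the design just assumes failures are independent, and frozen-by-the-same-cause sensors aren’t.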

  9. A few years ago you had a couple of Teslas burst into flames on the highway. The company investigated and discovered the fires were caused by pieces of metal on the highway striking the full-body battery under the car, largely because of how close the car rode to the highway at high speed, using ground effects to reduce drag and increase range. Rather than a physical recall, Tesla sent a software update via the Cloud that raised the ride height by an inch. No more fires.

    That was fast, safe and comparatively inexpensive. Of course, some smart guy might figure out a way someday to send an update that makes your Tesla do the macarena when you hit 80 mph, but you have to admit that was pretty cool.

  10. This is one reason I love my 1997 GMC with manual transmission.

    Right there with you, bb. I’ve got an ’04 CR-V with a manual. Starting with 2005, Honda CR-Vs employed drive-by-wire. Such technological advancements are a double-edged sword: better fuel economy, better safety, etc. But no malicious software will ever cause my cable-throttle vehicle to suddenly accelerate and override the operator’s control. I had a throttle cable stick on my ’86 Prelude when I was in college – I simply moved the stick into neutral, let the engine rev harmlessly (well, harmlessly to me), and pulled over. Years later, I remember Toyotas/Lexuses having reports of uncommanded acceleration, and the theory that made the most sense to me was that the cars were passing under high-voltage lines running perpendicular to the highway, and as the car’s ECU passed through the magnetic field, a current was induced in the ECU that was interpreted as commanded acceleration.

  11. NW is correct that you can fix things quickly with software, but that’s really the hazard – you think you can fix everything in software, so you write all those functions into the code, and then it’s hard to (a) get them all working right and (b) keep them from interfering with one another. The power of the “KISS” principle can hardly be overemphasized.

    Also worth noting: the first thing I noticed about a coworker’s Tesla was how low it was to the ground. With a normal car, you put a robust shield below the gas tank, but apparently that wasn’t an option for the Tesla – it would mess with the driver’s position.

  12. Until this 737 failure, one of the single most expensive software failures in the aerospace industry was the Ariane 5 maiden flight (Flight 501), which cost $370+ million on June 4, 1996. In simple terms, they took the SRI (inertial reference system) software from the Ariane 4 and installed it on the Ariane 5. About 40 seconds into the launch, things went wrong. I suspect this failure will cost Boeing more than $370M.
