Be Careful with Your Dropdowns (And Other Lessons from the 2018 Missile Crisis)


Have you ever needed to evacuate a building? If so, then you know just how much work it involves. You’ve got to find a pull station. You’ve got to break the glass—potentially cutting yourself quite severely in the process. And you’ve got to press the fire alarm button. It’s a multi-step process that’s impossible to do by accident; if you sound that alarm, it’s because you intended to—and if your intent is deemed to be malicious, you could face felony charges.

Meanwhile, if you’d like to evacuate the entire State of Hawaii and alert the world to its imminent destruction, it’s apparently as easy as selecting a certain option on a dropdown menu.

Of course, I’m referring to the great missile crisis of early 2018, in which an entire state spent 38 minutes panicking over its impending destruction before the alert was retracted. Would that it were an isolated incident: Days later, a similar false alarm was sounded over in Japan.

These stories, which made international headlines, surely represent modern technology at the peak of its ridiculousness. The capriciousness of this system—the capriciousness of its coding and its deployment—is simply mind-boggling. That an individual could select the wrong item on a dropdown menu and thereby cause widespread panic is absurd; that said individual was merely “reassigned” is indefensible. (Remember: Pulling that fire alarm prematurely can land you with felony charges.)

God forbid we ever actually need an alarm to evacuate the State of Hawaii—because by this point, we’re squarely in boy-who-cried-wolf territory. I’m reminded of the subway emergency alarms you hear all the time in New York City; if you live in the city, you know these “alerts” are routine, and roundly ignored.

Then again, part of me wonders if the shoddy design of this missile alert system is a tacit admission that, if ever there is a warhead coming our way, there’s not much that can be done about it. That’s the great irony of this particular technology: If it truly needs to be used, then it likely doesn’t matter anymore.

On some level, maybe the missile alert system is designed as a security blanket. Sure, there’s little we can do to stop a missile once it’s launched, but it makes us feel better to have the alert system in place. Here again, though, the system has utterly failed: It may be intended to make us feel better, but thus far all it’s done is induce panic.

We live in an era where we can quickly and easily track our UPS delivery to anywhere in the world. We have smart refrigerators that tell us when it’s time to buy milk. Surely there is a way we can use technology to test itself, and to safeguard against something as silly as a dropdown snafu.
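One such safeguard is a pattern many web services already use for destructive actions: before the system will act, the operator must retype the name of the action, so a single misclick can never complete it. Below is a minimal sketch of that idea in Python—hypothetical code, not drawn from any real alert system, with invented names like `send_alert` used purely for illustration:

```python
# Hypothetical sketch of a "retype to confirm" gate for destructive actions.
# A drill goes out with one click; a live alert requires the operator to
# retype the action's exact name, so a stray dropdown selection is blocked.

DRILLS = {"TEST missile alert"}  # assumed set of non-destructive drill options


def confirm_destructive(action: str, typed: str) -> bool:
    """Return True only if the operator retyped the action name exactly."""
    return typed.strip() == action


def send_alert(action: str, typed_confirmation: str = "") -> str:
    if action in DRILLS:
        return f"sent: {action}"  # drills need no extra confirmation
    if not confirm_destructive(action, typed_confirmation):
        return "blocked: confirmation did not match"
    return f"sent: {action}"


# A misclick on the live option, with no typed confirmation, is blocked:
print(send_alert("Ballistic missile threat"))
# A deliberate send, with the name retyped, goes through:
print(send_alert("Ballistic missile threat", "Ballistic missile threat"))
```

The point of the pattern is not the few lines of code but the friction it adds: like breaking the glass on a fire alarm, it makes the destructive path impossible to complete by accident.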

Some will say the answer lies in artificial intelligence—but I’m starting to think we’re the artificial intelligence. When it comes to the way we use technology, maybe real intelligence is the ingredient we’re missing.