“I fear that machines are ahead of morals by some centuries and when morals catch up perhaps there’ll be no reason for any of it.”
– Harry Truman
This was penned by President Harry Truman in his diary on the morning of the first atomic bomb test, Trinity, in 1945.
The President was overseas at the time near Berlin for the Potsdam Conference, negotiating the German restoration plan with Churchill and Stalin. Another agenda item was demanding the “unconditional surrender” of Japan. Meanwhile, Truman was secretly awaiting news from Los Alamos. He soon received a coded message:
"Dr. Groves is pleased."
I first read this quote in a graphic novel and it has stuck with me since. I think it embodies my feelings around the relationship between technology and morality. Technology pushes forward without fully evaluating the ramifications.
A few shocking things happened in tech this week. Were this a normal timeline and not this awful bizarro paradox, I’m sure these stories would be something of note. But, alas.
Last week a woman was killed by an Uber self-driving car in Tempe, Arizona. The first accident of its kind. Surely not the last. While some may say self-driving cars have a better driving record than humans, and this is true, the failure here is that software engineers didn’t anticipate the possibility of someone jaywalking.
I actually own the same Volvo XC90 involved in the crash. What’s interesting here, and what I haven’t seen mentioned, is that this model of car has an IntelliSafe feature which will abruptly halt the car if a human passes in front of it. I know from experience it works quite well. It would appear from the video that this safety feature was disabled, probably because it confused the self-driving lidar software. It’s probable that a developer did this.
As long as cars and humans share the same physical space, it seems at a minimum redundant safety systems should be required.
It was reported this week that Cambridge Analytica harvested private information from 50 million Facebook users without their consent. I have followed this Project Alamo story for a while because it scares me how companies are willing to steal and use big data to sway people’s opinions. With AI on the cusp of becoming commonplace, surely we need to be pragmatic about this potential threat model.
It angers me Facebook repeatedly downplays their responsibility in the 2016 election.
It angers me Facebook’s own research shows they can affect our emotions.
It angers me Facebook doesn’t seem to understand their role in this.
It angers me Facebook has no way to prevent abuse, only suspending accounts after the fact when someone leaks a story to the Guardian.
It angers me Facebook (and YouTube and Twitter) willingly lead you down a path towards more extremist ideas.
It angers me Facebook wants us to believe they are a “neutral platform”, our buddy, but they have no incentive to stop the destruction of privacy.
“We stole 50 million users’ data.”
“Oh no, that’s bad.”
“But we’ll use that data to buy targeted ads.”
“Oh. If that’s the case… 🤑🤑🤑”
“Excellent. We’ll have to pay in rubles.”
“Excellent. Hope no one tells the newspaper.”
This machine which we feed with baby photos, 👍 likes, and our web browsing history profits from selling our personal information and privacy.
Truman would go to his deathbed saying the atomic bombs dropped on Hiroshima and Nagasaki were a “necessary evil”. If I were responsible for the instant eradication and horrific radiation poisoning of hundreds of thousands of people, I would probably conjure up some self-justification as well.
Truman’s diary, though, seems to show some admission that atomic warfare would be immoral. He knew he built an immoral machine, a “Destroyer of Worlds”, yet still made the decision to drop the bomb. That phrase rattles around in my brain like a ghost covered in shackles…
I think about people, friends, who program self-driving cars, algorithms, AI, or went to Facebook to make small fortunes and garner stock options. They are great people, not malicious, and yet they helped build this immoral machine. Perhaps not intentionally immoral, but easily exploitable by immoral beings. I worry too that my involvement in tech makes me complicit in these actions.
My sentiments feel similar to what Kenneth Bainbridge told Robert Oppenheimer that morning after the Trinity test…
“Now we’re all sons of bitches.”