If you watch one video about AI/LLM responsibility, I recommend The AI Dilemma by Aza Raskin and Tristan Harris, creators of the Netflix documentary The Social Dilemma. I found it a sober, non-doomerist look at the problems AI/LLMs pose when paired with the capitalistic race to roll out this technology into the hands of every consumer.
Three rules of technology frame their nearly hour-long talk:
- When you invent a new technology, you uncover a new class of responsibilities
- If that tech confers power, it starts a race
- If you do not coordinate, the race ends in tragedy
It’s those last two rules that concern me. If we fail to respond to #1, we end up with #3. They soberly draw comparisons to the nuclear era and how collaboration – in the form of nuclear disarmament treaties – helped avert a global disaster at a time when five people could decide the fate of the world. With that framing, the polling among AI researchers doesn’t bode well:
> 50% of AI researchers believe there’s a 10% or greater chance that humans go extinct from our inability to control AI.
Those aren’t the odds we want. I’m not going to go full doomer here, but we already see the erosion of trust and distortion of reality from misinformation, disinformation, deepfakes, and TikTok filters; it’s becoming impossible to tell what’s real anymore. That has personal (e.g. body image) and social (e.g. geopolitical) ramifications. It makes me wonder whether the last 30 years of the internet will be undone. Will we reject online worlds and go back to peer-to-peer LAN networks, in-person notaries, and dropping by someone’s house instead of calling or texting? Or are we comfortable with a future where robots and AI chatbots are our most intimate friends?
It’s a lot to consider. The last bit from the talk rattling around in my head is the “GLLM” (pronounced “golem”) pun. It’s a fitting analogy as we animate these metal machines with electric rock brains… is it too late to pull the shem from their mouths?