AI. It’s the talk of the town (this year, at least). I’ve been in dozens of conversations about AI in recent days. The likelihood that the next big feature or product I build involves AI seems to be going up. No idea where it’s all headed – and I reserve the right to change my mind – but a dozen conversations leave me with a dozen or so disparate thoughts about this new frontier of technology.


Can you spot the difference between these two images?

Two images next to each other: the first is a small team with the caption 'A small team who wants to stay small but accomplish a bit more' and the second is the same small team with the caption 'A small team tasked with maximizing value for shareholders.'

AI was in the news this past month…

  1. OpenAI needs $7 trillion (more than the US federal budget) to build new chips for AI
  2. OpenAI needs cheap energy from fission reactors to be sustainable

I mean, if I had $7 trillion and access to free energy I could probably do some neat things too.


Is non-deterministic search really what we want?

In a world where our social fabric is being torn by “alternative facts” and online bubbles, do we want to hand finding answers over to a machine that makes things up and is often wrong? Do your customers want this?


People creating projects in the AI space look like they’re having fun. I cynically wonder if it’s fun because there’s freedom in low expectations when you yield result quality to the outputs of a black box. Oh, to be a novel new technology!


When AI autocompletes code I’m familiar with but don’t have loaded in my brain RAM, I love it. When AI writes boilerplate-laden tests for me and I can get back to the more fun building part but with more test coverage, I love it. When AI enables me to maintain flow, it’s magical.

But when I’m fighting the machine, guessing which mystery word combo unlocks the functionality I want… Ughck.


Am I bad at prompt engineering? Is that why I don’t “get it”? I can never quite craft the right query to match what I see in my imagination. Am I asking the wrong questions or getting the wrong answers? Am I setting too high a standard? Weird I’d even consider putting the blame on myself.


A friend of mine told me how he used ChatGPT to answer questions in real time during a job interview. As a hiring manager, I’d be appalled. As a friend, I thought “Do what you gotta do.”

I think he got the job.


AI seems genuinely helpful for automating tedious tasks. For example, image background removal. That was painstaking and I don’t think anyone misses that part of their job anymore.

Another – and quite possibly my favorite – example of task automation is a story from Spider-Man: Into the Spider-Verse. In a video by Wired, Josh Beveridge, the Head of Character Animation, explains how they used machine learning to automate some of the linework so the characters could come to life. It’s important to note this was artist-informed and artist-corrected…

All of that technology was in the service of allowing artists more freedom to do things by hand and it all became part of this hand-crafted feel.

Eliminating tedious work seems worthwhile. Humans getting to do more of the fun stuff. That famous “augmented work” everyone talks about.

But what is considered tedious? Email? Writing documentation? Blogging? Design? Making music? Hiring? Working with junior developers? How come management never comes up in these conversations? Oof.


Alex Riviere recently wrote “Why I find LLMs Frustrating”, where he highlights the duality of LLMs: how their bullshitting can be harmful but also useful for self-expression. Alex is finding success with tools like Brainstory to help craft a narrative and “format things in a way others will understand.”

As someone who hates being misunderstood and often smashes a dozen disparate ideas together (e.g., this post), I could benefit from a tool like that. As a tool for busy brains where the brain “goes faster than my fingers can write”, AI seems useful.


The Verge had a wonderfully art-directed post about indie writers using AI to write fiction.

  • Author was writing 6 books a year for Amazon 🚩
  • Customers drop off if books take more than 2 months 😬
  • Author wanted to get unblocked, so she used Sudowrite 👍
  • Author slowly starts yielding more to AI 😬
  • Author can’t remember plot, story beats, dialog ⚠️
  • AI generated content is creating more competition on Amazon 😬
  • Author is now writing 10 books a year 🚩🚩
  • Author is spending more of her revenue on Amazon ads to sell her books in a crowded market 😱

A recent whitepaper on Copilot suggested downward pressure on code quality, citing a 39.2% increase in code churn – most of it happening within two weeks of the code being written. There are some questions and good write-ups about this, but the correlation is probably worth watching. Bad generated code may have more serious consequences.

Anyways, I often wonder if the future of my job is just cleaning up all the robo-barf on the ship.


Aaron Gustafson wrote about AI for Accessibility. I know Aaron has spent a lot of time thinking and working in this space, so I appreciate, trust, and value his perspective.

Attempting to predict the traffic jam ahead, I worry that automating accessibility gives way to a libertarian mindset where accessible technology is an individual’s concern, not everyone’s responsibility, and whatever expectations we do have around making accessible websites go away. We already see this with overlay companies. Rather than fix issues at the source, we put the onus on the disabled person to figure out how to use a dongle so they can buy underwear on the internet. That’s inequity.

At the same time, I want people to have tools to be able to fix their issues. People should be able to increase their own access if the designers and developers did a bad job.

“Hey Siri, I’m red-green colorblind, can you automatically fix every website for me”
“Hey Siri, label this form because someone did a bad job”
“Hey Siri, describe this chart”
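
That middle ask is worth sitting with, because it doesn’t take much imagination. Here’s a hypothetical sketch – plain DOM APIs, no actual Siri hook, and every heuristic in it is my assumption rather than any shipping product – of what a “label this form” fix running on the user’s side might do:

```ts
// Hypothetical sketch: patch unlabeled form fields for this user's
// session using plain DOM APIs. The label-guessing heuristics below
// are assumptions for illustration, not any real assistive product.
function labelForm(form: HTMLFormElement): void {
  form.querySelectorAll("input").forEach((input) => {
    const hasLabel =
      (input.labels && input.labels.length > 0) ||
      input.hasAttribute("aria-label") ||
      input.hasAttribute("aria-labelledby");
    if (hasLabel) return;

    // Guess a label from whatever context exists nearby.
    const guess =
      input.placeholder ||
      input.name ||
      input.previousElementSibling?.textContent?.trim() ||
      "Unlabeled field";

    input.setAttribute("aria-label", guess);
  });
}

// Run it across every form on the page.
document.querySelectorAll("form").forEach(labelForm);
```

An LLM could make that guessing step much smarter than these heuristics, but notice where the fix lives: on the user’s machine. The form at the source stays broken, which is exactly the tension above.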


I often think about my dyslexic daughter. My hope for her is that she learns to love learning, because the US education system, built on standardized tests and long-form reading and writing, is not conducive to how her brain naturally functions. Why shouldn’t she be able to use AI to summarize and expand text to level that playing field?

As a parent, this is tough to navigate. There’s a line, a tipping point, between tailored learning (using ChatGPT to help with an essay) and not learning anything at all (having ChatGPT write the essay). Does learning require a certain amount of struggle? What’s the long-term impact of copy-pasting non-deterministic generated text?

Maybe we need a new fairy tale about a kid who lets a witch’s robot do all their chores and then the kid gets eaten up by the witch.


About AI making you more productive… here’s my grand conclusion after years of productivity chasing: Productivity is for you, not them.

If applying productivity hacks to your job benefits you, then do it. But if you are more productive and don’t see monetary or “timeback” rewards, or (more insidiously) you need to be more productive just to keep your current job… we need to pause and take a macro perspective.

According to the Economic Policy Institute, productivity has gone up 64.7% over the last 40 years, yet wages have gone up only 14.8%. Over the same-ish timespan, according to Pew Research:

From 1970 to 2018, the share of aggregate income going to middle-class households fell from 62% to 43%. Over the same period, the share held by upper-income households increased from 29% to 48%.

Weird, huh? Money magically rolls uphill. Consider not contributing to this gap. But if you need to feel productive, I want you to have it.


The nuance I’m weaving is this…

AI as a tool to empower humans is admirable, but it doesn’t fix broken systems. If anything, it exacerbates the inequalities built into existing systems. We should be careful about what we build and where we deploy any technology, lest we cause actual harm or build new high-tech for-profit prisons around ourselves.