Hank Green posted a video about four lies he believed. It’s a great video because it’s embarrassing to be wrong on the internet and here is smart person™️ Hank Green admitting he believed some bullshit. As you travel through the internet you’re constantly working against a lot of personal biases, like confirmation bias or overconfidence bias, and it’s easy to slip up. It’s (biologically) difficult to correct your understanding as new facts invalidate what you believed to be true.

One simple question I’ve started asking a lot is: “How do you verify that?”

I’ve seen some compelling demos where LLMs take a jumble of text and spit out compelling artifacts like an image, some b-roll, a spreadsheet, a website, or an entire daily news podcast. It’s incredible… until my buzzkill brain starts asking questions. Images are easy to verify: does the image look correct, or does it contain phantom limbs and five rows of teeth? A text summary or expansion is also within my ability to verify. A website? I mean… I can tell you if it’s an okay website code-wise, but I’m probably an outlier here. Validating the accuracy of a larger corpus of data that got summarized, keyworded, and categorized? Oof. This is where the scales tip for me because verification seems improbable. Or at least time consuming. Perhaps as time consuming as doing the work yourself?

To be fair to the robots (why would I ever say this?) I think it’s worth pointing the question at myself. I’ve been thinking about my “old world” method of verifying facts and determined I have the following routine.

  1. I consume a lot
  2. I experiment
  3. I ask experts

When I say “consume a lot”, I mean I read hundreds – sometimes thousands – of articles and watch hundreds of videos each week (cf. novelty seeking, information addiction), on top of my book habit. To make that theoretical knowledge more practical, I prototype and experiment to answer questions. With a basic understanding established, I often ask experts for their perspective. For web tech, that’s lots of open questions or DMs to people I know care about a specific technology. I’ll then sometimes have those people on my podcast to prod them some more.1

Like loose change and shells rattling inside a coffee can, I collect all those informational tidbits and filter them through my years of experience and how my brain works. That produces an outcome I’m generally satisfied with, informed yet potentially wrong. It’s not too different from an LLM where the outputs are only as good as their inputs.

My inputs are sometimes flawed. If Books Could Kill… is a podcast where the hosts do takedowns of my favorite genre: self-help pop-psychology airport books. They do an incredible job of on-the-fly fact-checking – reading criticisms, scrutinizing every assertion, asking “Hey wait, is that true?” on every page – and they’ve nearly ruined the entire genre for me. I assume you learn this brand of skepticism in journalism school, and the trade-off is that you’ll never enjoy a book again in your life. But I need to remind myself that the goal of a book isn’t to get to the last page; it’s to expand your thinking.

I wish I did better at this (so I don’t repeat falsehoods), but collecting factoids makes me feel smart, and I like feeling smart, so now they’re insulated inside a little blanket of my personal biases. Fact-checking in real time doesn’t tickle the ol’ grey matter; it doesn’t have the same dopamine response.

One more story and I’ll let you go. In an effort to not be a total fucking grandpa all the time I’m trying to use AI more while keeping an open mind. It’s been… challenging. But the other day I needed a fact checked – something one of my investors mentioned during our YCombinator application process that was noodling around in my head – and a regular search didn’t return any results, so I tried a trending AI search tool that provides citations.

  • First it told me “Yes, this was true” but cited the wrong reasons.
  • So I hit it with an “Are you sure?” and it changed its answer to “No, it’s not true” but cited a random Hacker News comment that asserted the opposite.
  • So I said “Your source says the opposite” and it KFC doubledown’d and said “No, it’s not true.”
  • So I copy-pasted the text from their own source into the prompt and it said “Yes, this was true.”

That’s a… complex user journey. And it happens to me a lot. I wonder if this tech falls victim to its own compelling demos. The “Time to First Demo” is so short and often so profound, we fill in the possibilities with our own projections and science fiction. It’s harder to question the quality or veracity when our imaginations and our biases get wrapped up in the idea because it might mean questioning ourselves… which is not something humans are biologically engineered to do well.

Okay, this is the last bit I promise. There’s one line from that Hank Green video that stands out to me in this whole self-fact-checking debacle…

“I was fine with having the [mis]information put into my head, but when I put it into someone else’s head, I felt a little weird about it so I went to check just in case.”

That seems like a humane approach to sharing information. Be liberal in what you accept, fact-check on the way out. I hope I can get better at that.

  1. Asking experts isn’t limited to tech either. If I share an area of interest with someone and they recommend an album, movie, coffee, book, or menu item, I’ll take them up on it. I’ll implicitly trust them to guide my experience. In Japan this is omakase and it’s transcendent.