I’ve been using GitHub Copilot full time over the summer and have collected my thoughts on this new industry-redefining tool. It’s a product with the potential to impact my life greatly, and one that’s a mix of helpful, clumsy, and potentially dangerous. Let’s dive into some of the good and bad parts of Copilot.
The $10 price point is perfect because it makes me mad
The pricing model for Copilot is my favorite part. $10/month doubles my GitHub costs. That’s almost one Netflix… to have tab completion!? $10/month for copy-pasta from StackOverflow? No way, José.
On the other hand, if you asked me, “For ~$100/year, I can make coding a little easier. Do you want that?” I am an unequivocal “Hell yes.” For 27¢/day I can have a robot buddy help me write code? Even if it’s only a little helpful, that’s worth it relative to what I earn… but the critical question is, does it work as advertised?
I’ve never hit the `esc` key more in my life
Copilot is noisy and attempts to insert itself into the conversation whenever there’s a pause in coding. Sometimes Copilot is magical, effortlessly finishing an entire block of code on my behalf. And sometimes, Copilot is like one of those “Garth and Kat” SNL sketches where Fred Armisen and Kristen Wiig attempt to sing totally improvised songs in unison…
I find Copilot has low accuracy specifically around closing brackets and parentheses, so I hit escape to cancel suggestions a lot. This seems like an area where Copilot should succeed, but it routinely adds extra brackets and prematurely closes objects, functions, and arrays. Those situations are frustrating.
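A contrived illustration of the kind of suggestion I mean (not an actual capture, just the shape of the problem):

```js
// With one object literal open, Copilot will often volunteer a
// closing that doesn’t balance, something like:
//
//   const alert = { type: 'error' }});
//
// …when the brackets on screen only call for:
const alert = { type: 'error' }
```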
This accuracy problem might get solved over time. Other people on my team have had similar experiences, so it doesn’t seem isolated to me, but maybe it’s related to our setup. Muscle memory plays a part too; Copilot could be like an instrument that you get better at over time. Maybe I could find a situation where Copilot does succeed…
Can Copilot automate boring tasks?
I thought writing tests would be a good task for Copilot. I don’t like writing tests; maybe Copilot does? After typing `test(` for an Alert component, Copilot suggested…
```js
test('renders the correct markup with props', () => {
  const wrapper = shallowMount(Alert, {
    propsData: {
      type: 'error',
      message: 'This is an error message'
    }
  })
  expect(wrapper.element).toMatchSnapshot()
})
```
Great job, Copilot… but watch out! The use of `toMatchSnapshot()` introduced more overhead than I was going for at the time. I also didn’t have a `message` prop in my Alert component, so that was wrong and confused me for a bit. And the test could be a bit simpler. After some editing, the final test for rendering an error Alert looked like this:
```js
test('renders the correct markup with type=error', async () => {
  await wrapper.setProps({ type: 'error' })
  expect(wrapper.classes()).toContain('error')
})
```
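For context, `wrapper` comes from a shared setup block. A minimal sketch of what that might look like, assuming Vue Test Utils and a hypothetical component path:

```js
import { shallowMount } from '@vue/test-utils'
// Hypothetical path — wherever Alert.vue lives in the project
import Alert from '@/components/Alert.vue'

let wrapper

// Mount a fresh Alert before each test so setProps() starts from a clean slate
beforeEach(() => {
  wrapper = shallowMount(Alert)
})
```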
I don’t expect a robot to know that the only change in `<Alert type="error">` is a `class="error"` on the element, or that we had already set up a `wrapper` in a `beforeEach` function, but…
To Copilot’s credit, it did write a passing test; it just wasn’t the best test. I’ll still call this a win because a lot of testing is about getting started, and Copilot seems good at summoning tests out of its butt much faster than I can.
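For reference, a simplified (and partly hypothetical) sketch of the kind of Alert component these tests exercise, where the `type` prop maps straight onto a class:

```vue
<!-- Hypothetical minimal Alert.vue — the real component has more going on -->
<template>
  <div class="alert" :class="type" role="alert">
    <slot />
  </div>
</template>

<script>
export default {
  name: 'Alert',
  props: {
    type: { type: String, default: 'info' }
  }
}
</script>
```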
Is Copilot bad for the industry?
There’s a lot of FUD (fear, uncertainty, and doubt) about Copilot being bad for the industry. I don’t want to dismiss genuine concerns, but here are my thoughts on the most common ones raised.
- Copilot creates lazy programmers. Sure, I guess, but I sort of got into programming because I am lazy. If a robot can make me more lazy, that seems like a win. The nuance here is that you still need to be a good programmer to know if what the machine generates is (both stylistically and morally) good code.
- This will spread bad code. I agree with this point. Copilot has the potential to efficiently produce bad code at scale, to the extent that I wonder if the next twenty years of programming might be spent undoing AI-generated code.
- Licensing is a big problem. Theoretically, if Copilot was trained on GPL code and injects a single line of GPL code into my software, then technically my entire application becomes GPL and I need to open source it all. Yikes. How do we quality control this? Does each line of machine-generated code need to come with a license? That’s unsustainable at best. A few months in, I have no idea which lines of code were authored by me or by the machine. IANAL, but what I hope happens is that we’re catapulted into a new era of software licensing where code becomes an open commons.
- AI is coming for my job. I went through my John Henry crisis years ago and (spoiler!) seven years since I wrote that post, machines haven’t taken my job¹… but they have gotten frighteningly good! With a single text prompt, Midjourney can create fantastic compositions in a specific style; incredible! Where does that leave us? Universal Basic Income, hopefully, but beyond that I started looking for non-Doomsday scenarios where AI augments work for the better. My favorite example was this behind-the-scenes look at how Into the Spider-Verse used AI to automate the creation of wrinkle lines to help express emotion.
It’ll be interesting to watch what happens on these fronts. What strikes me most is that we’re no longer talking about AI programming in a theoretical sense; it’s here now. It’s getting better. We’re at the singularity. That’s fascinating to think about, but let me share the biggest shift I experienced while using Copilot.
The Writing Code → Reviewing Code Shift
My biggest adjustment with Copilot was that instead of writing code, my posture shifted to reviewing code. Rather than a free-form solo coding session, I was now in a pair-programming session with myself (ironically) in the copilot seat, reviewing. I kept having a verbal conversation with myself…
“Okay…”
“Do I like that?”
“Is that right?”
“It’s probably good enough.”
“That will have to be fixed.”
That shift in posture was enough to make me not like Copilot at first. I want to write code, not read code, dammit! Blast, you infernal machine! This activity is about me writing code, not me approving changes… but then I started to get over my ego a bit.
Identifying the shift in posture allowed me to start enjoying Copilot. The end goal of programming is working software and the robot can suggest code faster than I can write code. Yielding to that dynamic creates a fundamental shift in programming…
Programming is now a game
Now when I sit down to write a block of code, I imagine that block of code in my brain. Then I start typing and Copilot starts guessing. Programming is now a game to see if Copilot matches that block of code in my head.
Sometimes Copilot gets this comically wrong. But…
When Copilot is in the ballpark of what I imagined in my brain, it feels wonderful. A mind-reading robot, how incredible! Zoltar, the magnificent! The robot’s suggestion reinforces my intuition and makes me feel I’m on the right path, because this robot (trained on billions of lines of code from hundreds of thousands of developers) arrived at nearly the same answer as me. That is a reassuring feeling.
For now, I’m in on Copilot. They have my $100. I look forward to seeing where this future is headed.
¹ Notably, the big inspiration for my John Henry post in 2015 was The Grid, which imploded in 2016 after failing to deliver.