SQYD.studio

Independent game development that follows the fun.

Bad Vibes Code

Hello anyone! I want to ramble about how I’ve been using Large Language Models (LLMs) in my workflow. Smarter people than me have a lot to say about AI, so I’m going to keep this rooted in my personal experience with the technology. Spoiler warning: my experiences are incredibly mixed.

At work I’ve been using Obsidian* to index our corporate files. Recently, I started building forms (using the community plugin Meta Bind and some custom JavaScript) to get structured reports from management to the boss. To do this I’ve been leaning heavily on GitHub Copilot (running GPT-4). I don’t really know JavaScript/TypeScript syntax and I’ve never written a JavaScript app. That said, I do understand how code is structured. I know my ifs and loops, I understand the garbage collector, and I’m increasingly comfortable using package managers to import and maintain dependencies.

With that caveat, I have to say Copilot is very good at JavaScript/TypeScript. It knows the syntax, it can usually recommend libraries with the right utility functions, and it is quite good at managing JavaScript objects. When I prompt it for one function at a time, it does a very good job producing functional code**. Copilot has become an indispensable part of my workflow, and with all the “vibe coding” memes going around right now I figured I’d give that a shot too.
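To give a sense of the scale Copilot handles well, here’s the kind of single, self-contained function I’d prompt for. Everything in it (the function name, the field names, the report shape) is a made-up illustration, not our actual code:

```javascript
// Hypothetical example of a "one function at a time" prompt result:
// turn the fields collected from a form into a Markdown report table.
function buildReportTable(fields) {
  // fields: an object of { label: value } pairs from a filled-in form
  const rows = Object.entries(fields).map(
    ([label, value]) => `| ${label} | ${value} |`
  );
  return ["| Field | Value |", "| --- | --- |", ...rows].join("\n");
}

// Example usage:
console.log(buildReportTable({ Department: "Ops", Status: "On track" }));
```

Small, self-contained, no app-specific context required: exactly the shape of task where, in my experience, the model rarely stumbles.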

That brings us to this week. Boss wants the Obsidian report forms to send automated emails when they are filled (he used to get the reports through email). This means using OAuth2 to access the Gmail API. This isn’t something I’ve done before, but it is something thousands of web apps have done before, so I figured AI would kick this task’s butt. I found a chunk of code from another repo that does the same thing, passed it to Copilot as context, then told my robo-buddy to write me an authentication solution.

It wrote something that, on paper, looked like it would do what I needed it to do. When I launched it, it correctly opened the web browser and prompted a Google sign-in, but it never returned the credentials to my app. It was wrong, but it looked right and almost worked, so I spent way more time than I care to admit trying to debug it. It was stupid. I didn’t properly understand the technology I was trying to interact with, but I was so tantalizingly close to an answer that I was unable to resist the inertia of sunk costs.

The experience was very frustrating. I was focused on the ends and not the means. I wanted to get a complex task done as fast as possible to please the boss. I wasn’t building my skills, I was babysitting a robot; the experience was the opposite of everything I’ve grown to love about writing code.

Eventually, I got smart. I found a tutorial for OAuth online, scrapped my vibe code, and took the time to learn something new. Call it a cautionary tale. These LLMs are very impressive. Impressive enough that they can convince you that they know what they are doing. But they really don’t. The really impressive thing is taking the time to learn something new.

Copilot saves me a lot of time when I know enough to know when it is wrong. I’ve learned my lesson though. Whenever I encounter a new topic or technology, I’m going to take the time to learn the fundamentals myself. Learning is the fun part; the end result is just a bonus.


* We are a small, chaotic team; I recommend Obsidian for flexibility, but not for structure or security. Your mileage may vary.

** It’s less comfortable with the Obsidian API, and when I have to interact with the app it’s better to spend my time going through the documentation than madly prompting and praying. Takeaway: LLMs are good at the stuff they have the most training data for. Who knew?
