Our very own Stephen Shaw was on an episode of Web Dev Challenge on CodeTV: Build the Future of AI-Native UX in 4 Hours. I started watching this on my computer, but then moved to my living room couch to put it on the big screen. Because it deserves it! It honestly feels like “real” TV, as good as any episode of a home renovation show or the like. Only obviously better as it’s straight down the niche of web maker nerds like us.
All three teams in the episode were building something that incorporated AI usage directly for the user. In all three cases, using the app started with the user typing what they wanted into a textbox. That’s the kind of input LLMs thrive on. I’m sure in all three cases it was also augmented with additional prompting and whatnot, invisible to the user, but ultimately, you ask something in your own words.
LLMs were interacted with via API, and the teams then dealt with the responses they got back. We didn’t get to see much of how they handled those responses, but you get the sense that 1) they can be a bit slow, so you have to account for that, and 2) they are non-deterministic, so you need to be prepared for highly unknown responses.
The episode was sponsored by Algolia, which provides search functionality at its core. Algolia’s APIs are, in stark contrast to the LLM APIs, 1) very fast and 2) largely deterministic, meaning you essentially know and can control what you get back. I found this style of application development interesting: using two very different types of APIs, leaning into what each is good at doing. That’s not a new concept, I suppose, but it feels like a fresh new era of specifically this. It’s not AI everywhere all the time for everything! It’s more like use AI sparingly, because it’s expensive and slow but extremely good at certain things.
I admit I’m using AI more and more these days, but 95% just for coding help. I wouldn’t call it “vibe coding” because I’m very critical of what I get back and tend to work on a codebase where I already essentially know what I’m doing; I just want advice on doing things faster and help with all the rote work. What started as AI helping with line completion has expanded into much more general prompting and “agents” roaming a whole codebase, performing various tasks. I’m not sure when it flipped for me, but this whole agent approach is actually the most comfortable way of working with AI and code for me now.
I haven’t tried Claude Code yet, mostly because it’s command-line only (right??) and I just don’t live on the command line like that. So I’ve been mostly using Cursor. I tried Windsurf a while back and was impressed by that, but they are going through quite a bit of turmoil lately so I think I’ll stay away from that unless I hear it’s great again or whatever.
The agentic tools that you use outside of your code editor itself kind of weird me out. I used Jules the other day for a decently rote task, and it did a fine job for me, but it was weird to be looking at diffs in a place where I couldn’t manually edit them. It almost forces you to vibe code, asking for changes in text rather than making them yourself. There must be some market for this, as Cursor has these now, too.
It really is the “simple but ughgkghkgh” tasks for me that AI excels at. Just the other day I was working on an update to this very CodePen blog/podcast/docs site, which we have on WordPress. I recently switched hosting companies, and with that came a loss in how I was doing cache-busting CSS. Basically, I needed to edit the `header.php` file with a cache-busting `?v=xxx` string where I `<link>`ed up the CSS; otherwise, shipping updated CSS wouldn’t apply when I changed it. Blech. CodePen deployed sites will not have this problem. So, anyway, I needed a simple build process to do this. I was thinking Gulp, but I asked an AI agent to suggest something. It gave me a variety of decent options, including Gulp. So I picked Gulp, and it happily added a build process to handle this. It required maybe 3-4 rounds of discussion to get it perfectly dialed in, but all in all, maybe a 10-minute job. I’d say it would have easily been a 2-3 hour job if I had to hand-code it all out, and much more if I hadn’t already done exactly this sort of thing many times in my career. I’m definitely starting to think that the more you know what you’re doing, the more value you get out of AI.
While we’re at it, I’ll leave you with some AI-ish bookmarks I’ve had sitting around:
- humanify: “Deobfuscate Javascript code using ChatGPT”
- Derick Ruiz: LLMs.txt Explained (Basically dump your docs into one big `.txt` file for LLMs to slurp up on purpose. Weird/funny to me, but I get it. Seems like npm modules should start doing this.) Ryan Law also has What Is llms.txt, and Should You Care About It?
- Steve Klabnik: I am disappointed in the AI discourse. (If you’re going to argue about something, at least be informed.)
- Video: Transformers.js: State-of-the-art Machine Learning for the web. AI APIs baked into browsers will be a big deal. More privacy, no network round-trip, offline support, etc.