#3 Furry Circuits
The power of "what if?" questions, LLM debugging tools, reinventing wheels, the VR world, rubber duck LLMs and remote "ramblings"
There is such power in "what if?" questions. So many brilliant ideas start with this simple phrase. What if we allowed users to interact with screens through touch? (smartphones) What if we took the string output of an LLM and piped it into a function, using that to call tools? (AI agents) What if we simulated the psychological effect of a slot machine through video content consumed on screen? (TikTok's endless scroll)
The magic happens when someone dares to ask the question—and then actually tries to answer it.
Here's a free idea I haven't fully fleshed out, so bear with me: What if we could drop an LLM into our code that acts like an intelligent debugger? It would capture object values, parameters, and other data as we develop and run new code, summarize the execution flow, and dump everything to a flat file. This output could then help progress development with your coding agent, giving it context for its next steps. If you steal this idea, make sure the website is called makenomistakes.ai.
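To make the idea a little more concrete, here's a minimal Python sketch of what the capture side could look like, built on `sys.settrace`. Everything here is my own invention, not an existing tool: the trace hook records each function call with its arguments and each return value, then dumps the log to a flat JSON file a coding agent could read as context.

```python
import json
import sys

# Hypothetical sketch of the "LLM-ready debugger" idea: record calls,
# arguments, and return values during a run, then dump a flat summary.
trace_log = []

def capture(frame, event, arg):
    if event == "call":
        trace_log.append({
            "function": frame.f_code.co_name,
            # repr() keeps the dump flat and serializable
            "args": {k: repr(v) for k, v in frame.f_locals.items()},
        })
    elif event == "return":
        trace_log.append({
            "function": frame.f_code.co_name,
            "returned": repr(arg),
        })
    return capture  # keep tracing inside the new frame

def add(a, b):
    return a + b

sys.settrace(capture)
result = add(2, 3)
sys.settrace(None)

# The flat file a coding agent would ingest as context
with open("trace.json", "w") as f:
    json.dump(trace_log, f, indent=2)
```

A real version would need sampling, filtering of noisy frames, and summarization before the dump gets anywhere near an LLM's context window, but the plumbing is genuinely this simple.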
"This is a motherfucking website. And it's perfect. Seriously, what the fuck else do you want?" People, including myself at times, have forgotten that the internet is a place for sharing ideas. You don't need fancy animations and a great color scheme to get your ideas across (although, I'll admit, in some cases it does help). As the site puts it: "Shit's legible and gets your fucking point across (if you had one instead of just 5mb pics of hipsters drinking coffee)." Lol.
"One of the most harmful pieces of advice is to not reinvent the wheel. It usually comes from a good place, but is typically given by two groups of people: 1. those who tried to invent a wheel themselves and know how hard it is, and 2. those who never tried to invent a wheel and blindly follow the advice." Damn right. If you look at the best wheels from 4500–3300 BCE, they don't compare to the wheels we have today. Many creators become paralyzed by the pressure to produce something entirely novel—whether in art, software, or academic research. But here's the truth: "Inventing wheels is learning." Through the reinvention process, you often discover something new or find what you believe is an optimal approach. Sometimes you go down the wrong path and understand why things are done a certain way. But occasionally, you stumble upon a genuine optimization that can be considered truly novel.
As a corollary to the above points: "Good design is simple" and "Good design is redesign." This is about optimizing the degree to which you wrap design around something useful, be it an API, formula, or garden patio. The quality of the output is, I would argue, 3/4 parts functionality and 1/4 part design. Both maxims come from Paul Graham, the Y Combinator founder, writing on good design.
Make a slogan, pin it to your wall. Take the stairs all the way to the rooftop and scream it out to the world. Write it on your hand if you have to—shit, get it tattooed if you must. Another reminder, this time for when you're writing: it doesn't have to be perfect. "This was something that came to mind recently while I was writing my book. It's one thing to say 'your first draft will be bad' or 'the quality of the first draft is not important.' It's another thing when you actually see a first draft firsthand and realize how bad it is. That's the revelation that came to me while I was following my favorite author. She released a first draft of an upcoming book, warning that it wasn't even edited or revised. Hoo boy, was it something. The point is, the bar for a first draft is so low, you can't possibly fail."
"Software is deterministic. Given some input, if you run the program again, you will get the same output. A developer has explicitly written code to handle each case. Most AI models¹ are not this way—they are probabilistic. Developers don't have to explicitly program the instructions." This is a good primer for Lee Robinson's breakdown on Understanding AI. I think this is what makes LLMs both brilliant and deadly. I can see how some applications could be replaced with a simple chat interface versus drag-and-drop, dropdown, and click interactions. But the non-deterministic nature of the generated output is a drawback for users who rely on deterministic results to do their job. Setting the temperature of the model to a lower value helps, but it doesn't solve the problem when a user says "generate a plot analyzing revenue over the past 3 months" vs "generate a plot analysing revenue over the past 3 months"—note how the difference between British and American spelling makes the LLM generate different output, even with the temperature set to zero, because the prompt is different. Food for thought.
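A toy model makes the point concrete. This is not a real LLM—the "forward pass" below is just a hash of the context—but it captures the relevant property: greedy (temperature-zero) decoding is perfectly repeatable for the same prompt, yet the prompt is part of the computation, so a one-character spelling change cascades through every subsequent token.

```python
import hashlib

def toy_greedy_completion(prompt: str, n_tokens: int = 4) -> str:
    """Toy stand-in for greedy decoding: each 'token' is a deterministic
    function of everything generated so far, seeded by the prompt."""
    tokens = []
    context = prompt
    for _ in range(n_tokens):
        # Stand-in for a forward pass + argmax: derive the "most likely"
        # next token deterministically from the current context.
        token = hashlib.sha256(context.encode()).hexdigest()[:4]
        tokens.append(token)
        context += token
    return " ".join(tokens)

us = toy_greedy_completion("generate a plot analyzing revenue")
uk = toy_greedy_completion("generate a plot analysing revenue")

# Same prompt -> same output, every time (determinism at temperature zero)
assert us == toy_greedy_completion("generate a plot analyzing revenue")
# One character changed -> entirely different output
assert us != uk
```

Real models add sampling noise, batching effects, and floating-point nondeterminism on top of this, but even in the idealized case, "deterministic given the prompt" is not the same thing as "robust to how the user phrases the prompt."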
"I've recently noticed a fundamental shift in how I've been programming personal projects [...] The energy I have to put my brain power into a problem has decreased, given how easy it is to use these tools. [...] Fundamentally, programmers can be put into two categories: destination programmers and journey programmers. The destination programmer [...] only cares about the end result [...] For these types of people, LLMs are a great tool, since they allow you to almost abstract away the code. Then there are journey programmers, where the actual destination isn't all that important [...] The majority of the joy was in the building, learning, and problem solving." This makes me want to dive back into coding. I want to unplug from the LLM motherboard and code the old-fashioned way. If I trigger a segmentation fault, I might even smile. There's beauty in fumbling with code. You read the same block over and over. Then you check the standard libraries' documentation. Finally, you realize you used a semicolon instead of a colon. It didn't feel like it at the time, but you learned something. I Want to be a Journey Programmer Again.
I recently tried the Apple Vision Pro for the first time while in SF. While there were definitely "wow" and "you can do that?" moments, several factors convinced me that VR/AR adoption will take much longer than expected. First, the headset is surprisingly heavy—after about 40 minutes, I felt significant neck strain, not to mention having to manage the external battery pack. Second, there's a serious lack of haptic feedback: no vibrations, sounds, or tactile cues to confirm that gestures registered successfully. Finally, and most importantly, I see the world far better through my own eyes (despite being vision-impaired). The 360-degree vistas are impressive, but they're not the real thing. Focus on the details and you'll notice what's missing: no insects crawling, no leaves rustling in the wind, no dust particles dancing in desert air. Just pixels.
All the above considered, there is a feeling of inevitability about MR/AR/VR. The Hyper-Reality short film by Keiichi Matsuda captures this experience brilliantly. It's also a little scary when the headset the protagonist is wearing in the video turns off. No more flashy lights, fun music, or cool popups. Just your typical, bland, boring grocery store.
Are LLMs just a glorified rubber duck? That's what antirez discusses in his post Human Coders are still better than LLMs. There's no real surprise to me in that statement. I agree that "the creativity of humans still has an edge; we are capable of really thinking out of the box, envisioning strange and imprecise solutions that can work better than others. This is something that is extremely hard for LLMs." But I, like antirez, still use LLMs every day to code. The way I see it, "vibe coding" is a way of thinking fast—offloading your intuition on what a program should do in plain text for the LLM to code it up. Whereas coding yourself is thinking slow (borrowing Kahneman's framework from Thinking, Fast and Slow). This involves tapping into the "flow state" and the part of your mind that works at a near subconscious level to look at a system, envision edge cases, future features, and optimize the code for all of those things at once. But you can still use LLMs for thinking slow: "Still, to verify all my ideas, Gemini was very useful, and maybe I started to think about the problem in such terms because I had a 'smart duck' to talk with."
A fascinating essay and opinion from Wendell Berry published in the '80s on "Why I am Not Going to Buy a Computer." His logic shifts from practical concerns about the technology's necessity to deeper environmental and philosophical objections: "I would hate to think that my work as a writer could not be done without a direct dependence on strip-mined coal. How could I write conscientiously against the rape of nature if I were, in the act of writing, implicated in the rape?" Some people today are hesitant to overly rely on the use of LLMs for creative tasks or exploring ideas. But like with computers, complete adoption again feels inevitable.
"When you're placed in a high-stakes, time-pressured situation, like live coding, your brain reacts exactly like it would to any other threat. The amygdala gets activated. Cortisol levels spike. Your prefrontal cortex, the part of the brain responsible for complex reasoning and working memory, gets impaired. Either mildly, moderately, or severely, depending on the individual and their baseline stress resilience. [...] For some people, especially those with even mild performance anxiety, it becomes nearly impossible to think clearly. Your attention narrows. You can't hold multiple steps in your head. You forget what you just typed a few seconds ago. It feels like your IQ dropped by 30 points. In fact, it feels like you're a completely different version of yourself—a much dumber one." More evidence that the software engineering interview process is broken, even more so when you consider the use of LLMs to assist with coding these days. In my opinion, you should provide take-home assignments, or if it must be live: ask only for pseudocode (removing language-specific idiosyncrasies) and your approach to solving the problem. If you're hiring for a specialist in a particular language or technology, give them a quick 10-20 question quiz and/or review their portfolio.
Wouldn't it be so cool to be part of the team responsible for putting the Curiosity rover on Mars? Imagine the technicalities. The latency in controlling the rover remotely from Earth. The dust storms. The radioisotope power source keeping it alive (Curiosity carries no solar panels to dust off). So cool. I was disappointed to see the Mars Curiosity Rover account on X be archived in June this year, but I'll still be following its discoveries whatever way I can.
All the points above are my ramblings. If you're working on a remote team, you should ramble (remotely) together. It brings people together. It helps share ideas. It illustrates, through the content you consume, who you are. After all, you are what you eat.