<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Daniel Philip Johnson]]></title><description><![CDATA[Remote Developer | Based in Cornwall, UK
I am a Full-Stack Engineer with 5+ years of experience, transitioning from startups → agencies → large-scale e-commerce]]></description><link>https://blog.danielphilipjohnson.co.uk</link><generator>RSS for Node</generator><lastBuildDate>Sat, 25 Apr 2026 01:04:14 GMT</lastBuildDate><atom:link href="https://blog.danielphilipjohnson.co.uk/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Time Slicing Isn’t Magic. It’s the Event Loop (Thanks, Ryan)]]></title><description><![CDATA[This post isn’t me being clever.
It’s me standing slightly to the side of Ryan Carniato while he explains, very calmly, why half the frontend discourse about “transitions” and “responsiveness” is misp]]></description><link>https://blog.danielphilipjohnson.co.uk/time-slicing-isn-t-magic-it-s-the-event-loop-thanks-ryan</link><guid isPermaLink="true">https://blog.danielphilipjohnson.co.uk/time-slicing-isn-t-magic-it-s-the-event-loop-thanks-ryan</guid><dc:creator><![CDATA[Daniel Philip Johnson]]></dc:creator><pubDate>Fri, 20 Mar 2026 17:58:08 GMT</pubDate><content:encoded><![CDATA[<blockquote>
<p>This post isn’t me being clever.</p>
<p>It’s me standing slightly to the side of Ryan Carniato while he explains, very calmly, why half the frontend discourse about “transitions” and “responsiveness” is misplaced.</p>
<p>If you haven’t watched the video yet, stop here and do that first. Everything below is a <strong>summary of themes</strong>, not a replacement for the source.</p>
</blockquote>
<hr />
<h2>Why This Video Exists at All</h2>
<p>This video exists because frontend developers keep arguing about the wrong thing.</p>
<p>Not because we’re careless — but because we’re trapped inside the abstractions we work with every day. Transitions. Suspense. Scheduling APIs. Concurrent rendering. We inherit the vocabulary of our tools, and eventually mistake that vocabulary for reality.</p>
<p>Ryan opens the talk by quietly refusing to play that game.</p>
<p>He doesn’t start with React. He doesn’t start with Solid. He doesn’t even start with “time slicing” as a concept worth defending. Instead, he rewinds the conversation back to the thing all of these ideas sit on top of: the browser itself.</p>
<p>That move matters.</p>
<p>Because time slicing wasn’t invented to make frameworks feel clever. It exists because the browser has always had to balance competing responsibilities — user input, rendering, layout, scripting — without freezing the interface. The problem predates modern frameworks. We just keep rediscovering it under new names.</p>
<p>Ryan’s framing here is subtle but important: <em>transitions aren’t the solution</em>. They’re a coping mechanism. A pattern that emerges when we feel the pain of blocking work but don’t yet have a shared mental model for why that pain exists.</p>
<p>This is why the talk feels almost corrective in tone. Ryan isn’t introducing a new idea; he’s stripping narrative away from an old one. He’s trying to dissolve a layer of framework mythology that has built up around scheduling and responsiveness, especially in React-centric discourse, and replace it with something more boring, more mechanical, and ultimately more useful.</p>
<p>At several points, Ryan effectively pauses the conversation to reset the frame. Not to argue, but to realign. To say: <em>before we talk about APIs, we need to talk about constraints</em>. Before we debate solutions, we need to understand the system we’re operating inside.</p>
<p>That’s the real reason this video exists.</p>
<p>Not to persuade you to adopt a particular approach, but to remind you that the browser was already doing scheduling long before we showed up with abstractions and opinions. And that if we don’t understand <em>that</em> first, everything else we argue about will stay strangely ungrounded.</p>
<hr />
<h2>Forget Transitions. Look at the Loop.</h2>
<p>After resetting <em>why</em> the conversation exists, Ryan makes a second, more decisive move: he takes transitions off the table entirely.</p>
<p>Not because they’re useless, but because they’re downstream.</p>
<p>Ryan’s argument here is simple, and slightly uncomfortable: if you start your mental model at the level of framework APIs, you’ve already missed the point. You end up debating <em>behavioural symptoms</em> instead of the mechanism that produces them.</p>
<p>So he rewinds again, this time past libraries, past abstractions, past even the word “time slicing”, and lands on the JavaScript event loop.</p>
<p>This is where the talk quietly becomes about power.</p>
<p>The browser is not a passive executor of your code. It is an active scheduler with its own priorities, deadlines, and veto rights. Your JavaScript does not <em>run whenever it wants</em>; it runs when the browser decides it’s safe to do so without breaking the user experience.</p>
<p>Ryan is careful here not to dramatise this. He doesn’t frame the browser as hostile or restrictive. He frames it as pragmatic. Rendering needs to happen. Input needs to be processed. Frames need to be painted. Your code is just one participant in that negotiation.</p>
<p>This is the moment where “scheduling” stops being a developer-controlled concept and starts being a cooperative one.</p>
<p>The reason Ryan insists on starting with the event loop, not APIs, is that every abstraction inherits its constraints from here. Once you understand how work is queued, flushed, deferred, or allowed to yield, many higher-level debates lose their mystique. They stop feeling like clever tricks and start feeling like trade-offs.</p>
<p>And crucially, the event loop doesn’t care about your intent.</p>
<p>It doesn’t care whether you meant your update to be “low priority” or whether you wrapped it in the right abstraction. It cares about <em>where</em> the work landed, <em>when</em> it’s allowed to run, and <em>what must complete before rendering can proceed</em>.</p>
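<p>That ordering is observable in a few lines. Here’s a minimal sketch of “where the work landed” — plain JavaScript, nothing framework-specific:</p>

```javascript
// The same three pieces of work run in a different order depending
// on which queue holds them, not on the order they were written.
const order = [];

setTimeout(() => order.push("macro task"), 0);          // task queue
Promise.resolve().then(() => order.push("microtask"));  // microtask queue
order.push("synchronous");                              // current task

// The current task finishes first, then the microtask queue flushes,
// and only then does the next macro task get a turn.
setTimeout(() => console.log(order.join(" -> ")), 10);
```

<p>Intent never enters into it: the promise callback runs before the timeout not because it’s “more important”, but because of where it landed.</p>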
<p>This is why Ryan’s framing feels almost deflationary. He’s not adding conceptual layers; he’s removing them. He’s saying: before you ask how to schedule work <em>better</em>, you need to understand how work is scheduled <em>at all</em>.</p>
<p>Transitions, suspense, yielding: all of those ideas only make sense once the loop is in focus. Until then, they’re stories we tell ourselves to explain behaviour we don’t fully control.</p>
<p>Ryan doesn’t say this explicitly, but the implication is hard to miss: if you want responsive interfaces, you don’t start by choosing the right abstraction. You start by respecting the system you’re running inside.</p>
<hr />
<h2>Macro Tasks, Microtasks, and Why Starvation Happens</h2>
<p>This is where Ryan’s explanation stops being abstract and starts becoming uncomfortably practical.</p>
<p>Once the event loop is in view, the next thing Ryan does is separate work into categories, not by intent but by <em>how interruptible it is</em>. This distinction is doing far more work than most developers realise.</p>
<p>Macro tasks are, in a sense, polite. They run, they finish, and then they give the browser a chance to breathe. Between macro tasks, the browser can step in: paint the screen, handle input, do the things users actually notice.</p>
<p>Microtasks are different.</p>
<p>Ryan describes them less as “smaller tasks” and more as <em>obligations</em>. When microtasks are scheduled, the browser isn’t allowed to process them partially. It must flush the entire microtask queue before it can move on: before it can render, before it can respond, before it can visually recover.</p>
<p>This is where the problem of starvation enters the picture.</p>
<p>Because if microtasks keep scheduling more microtasks, the browser never reaches a safe point to render. Nothing is technically blocked (JavaScript is still progressing), but from the user’s perspective, the interface appears frozen. The system is busy honouring its promises.</p>
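<p>Here’s a small sketch of that starvation pattern, assuming a browser main thread. Every <code>.then</code> queues another microtask, so the queue never drains (the function name is illustrative, not from the talk):</p>

```javascript
// Each step schedules the next step as a microtask. Nothing here is
// "blocked": every callback returns quickly. But the microtask queue
// never empties, so the browser never reaches a render opportunity.
function hogUntil(done, count = 0) {
  if (!done(count)) {
    // Queuing the continuation as a microtask keeps us inside the
    // same "no rendering allowed yet" window.
    return Promise.resolve().then(() => hogUntil(done, count + 1));
  }
  return Promise.resolve(count);
}

// 50,000 microtasks run back to back; JavaScript makes constant
// progress, yet no frame can be produced until the last one resolves.
hogUntil((n) => n >= 50000).then((n) =>
  console.log(`flushed ${n} microtasks before the browser could paint`)
);
```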
<p>Ryan is careful not to demonise microtasks. He doesn’t frame them as a mistake. In fact, he points out that their <em>guarantees</em> are exactly what make them powerful. They exist to provide consistency: to ensure certain invariants hold before anything else can happen.</p>
<p>But that power comes with a cost.</p>
<p>If you treat microtasks as just another async tool, interchangeable with timeouts or animation frames, you end up accidentally telling the browser: <em>do not render until I’m finished thinking</em>. And the browser, being obedient, complies.</p>
<p>This is why Ryan’s critique of “Promise everywhere” culture lands so cleanly. Promises aren’t slow. They aren’t inefficient. They’re just uncompromising. They prioritise correctness over responsiveness, and if you’re not aware of that trade-off, you can very easily starve the UI without realising you’ve done anything wrong.</p>
<p>What’s striking here is that none of this is framework-specific. These behaviours exist before React, before Solid, before modern scheduling APIs. Ryan isn’t exposing an edge case; he’s exposing a rule.</p>
<p>And once you see that rule, a lot of mysterious UI behaviour stops being mysterious.</p>
<p>The screen didn’t freeze because the app was “heavy.”</p>
<p>It froze because the browser was never given permission to stop and draw.</p>
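<p>The way out is equally mechanical: put the boundaries where the browser can act on them. Here’s a hedged sketch using macro tasks as the yield points (the function names are illustrative assumptions, not anything from the talk):</p>

```javascript
// The same loop, split across macro tasks instead of one long run.
// Between setTimeout callbacks the browser is free to paint and
// handle input, so the interface never appears frozen.
function processInChunks(items, handle, chunkSize = 500) {
  return new Promise((resolve) => {
    let i = 0;
    (function runChunk() {
      const end = Math.min(i + chunkSize, items.length);
      for (; i < end; i++) handle(items[i]);
      if (i < items.length) {
        // A macro task boundary: the renderer gets its turn here.
        setTimeout(runChunk, 0);
      } else {
        resolve(i); // resolves with the number of items processed
      }
    })();
  });
}
```

<p>The total work is identical; what changed is that the browser is periodically given permission to stop and draw.</p>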
<hr />
<h2>requestAnimationFrame Is a Request (Not a Command)</h2>
<blockquote>
<p>Ryan is very deliberate with his wording here — request isn’t a metaphor.</p>
</blockquote>
<p>At this point in the talk, Ryan slows down, and so should we.</p>
<p>Because this is where a lot of developers think they understand what’s happening, but are quietly wrong about <em>who’s in charge</em>.</p>
<p><code>requestAnimationFrame</code> sounds authoritative. It feels like scheduling. It feels like you’re telling the browser, <em>run this before the next paint</em>. But Ryan is very deliberate here: that’s not what’s happening.</p>
<p>You’re not scheduling anything.</p>
<p>You’re asking.</p>
<p>The browser decides whether there <em>is</em> a next frame worth running your code in. It decides when that frame happens. It decides whether the system is in a state where rendering can safely proceed. Your callback is conditional on all of that being true.</p>
<p>This is why Ryan keeps emphasising the word <em>request</em>. Not as semantics, but as a mental correction.</p>
<p>When you call <code>requestAnimationFrame</code>, you’re borrowing time from the renderer. You’re saying: <em>if you’re about to paint anyway, and if there’s room, I’d like to run something first</em>. If the browser can’t honour that (because it’s busy, because frames are being skipped, because the tab isn’t visible), your work simply waits.</p>
<p>Or never runs at all.</p>
<p>That’s not a failure mode. That’s the contract.</p>
<p>Ryan’s point isn’t that <code>requestAnimationFrame</code> is unreliable. It’s that it’s honest. It exposes the fact that rendering is not guaranteed, and that scheduling visual work only makes sense when the browser has decided a frame is happening.</p>
<p>This is also where a lot of performance folklore quietly breaks down.</p>
<p>You’ll hear advice like “do expensive work in rAF to avoid blocking rendering.” Ryan’s framing makes it clear why that advice is incomplete. If the work is expensive enough, it doesn’t matter <em>where</em> you put it; the browser still has to decide whether it can afford to render afterward.</p>
<p><code>requestAnimationFrame</code> doesn’t grant priority. It aligns you with the browser’s priorities.</p>
<p>And once again, this is the pattern Ryan keeps reinforcing: you don’t control the timeline. You negotiate with it. The browser isn’t an execution engine waiting for instructions; it’s a scheduler trying to keep the experience intact.</p>
<p>This section lands quietly, but it changes how you read almost every animation-related API afterward. Not as tools for control, but as points of coordination.</p>
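<p>One way to honour that contract in code is to treat the frame as optional. The helper below is an illustrative assumption, not an API from the talk: it pairs the request with a timeout fallback for the cases where no frame is ever granted (hidden tab, heavy throttling):</p>

```javascript
// Ask for a frame, but don't assume one is coming. If the browser
// grants a frame, the work runs aligned with the paint; if not, the
// timeout runs the work anyway and cancels the outstanding request.
function beforeNextPaint(work, timeoutMs = 200) {
  return new Promise((resolve) => {
    let settled = false;
    const finish = (timestamp) => {
      if (settled) return; // whichever path fires first wins
      settled = true;
      resolve(work(timestamp));
    };
    // A request, not a command: the browser decides if and when.
    const id = requestAnimationFrame(finish);
    // Fallback: no frame arrived in time, so run the work regardless.
    setTimeout(() => {
      cancelAnimationFrame(id);
      finish(performance.now());
    }, timeoutMs);
  });
}
```

<p>Note that this is a browser-environment sketch: <code>requestAnimationFrame</code> and <code>cancelAnimationFrame</code> only exist where there is a renderer to negotiate with.</p>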
<hr />
<h2>Why Microtasks Are a Framework Superpower</h2>
<p>Up to now, Ryan has been describing <em>mechanics</em>. In this section, he starts talking about <em>responsibility</em>.</p>
<p>Microtasks, as Ryan frames them, are not just a scheduling primitive — they’re a liability surface. They execute with guarantees so strong that misuse doesn’t merely degrade performance; it can lock the system into doing the wrong thing very efficiently.</p>
<p>And that’s exactly why they don’t belong everywhere.</p>
<p>Ryan draws a line here that most frontend conversations blur: the difference between <strong>library-level code</strong> and <strong>application-level code</strong>. Both can use the same primitives, but they shouldn’t exercise the same power.</p>
<p>Microtasks are powerful <em>because</em> they’re dangerous. They allow you to say, “Before anything else happens (before rendering, before user input, before the browser regains control), this must be true.” That’s an extraordinary level of authority to grant to arbitrary code.</p>
<p>Framework authors understand this.</p>
<p>They operate at a layer where enforcing invariants is the job. Where consistency across a render pass matters more than moment-to-moment responsiveness. Where flushing a queue completely is not a bug, but a requirement.</p>
<p>Application code lives in a different world.</p>
<p>It’s not responsible for maintaining global invariants. It’s responsible for staying responsive. For yielding. For cooperating with the browser rather than insisting on correctness at all costs.</p>
<p>Ryan doesn’t moralise this distinction. He doesn’t say microtasks are “bad” or that developers shouldn’t use them. He simply points out that once you understand the guarantees microtasks provide, you also understand why they’re best handled by code that knows the full blast radius of invoking them.</p>
<p>This is a subtle but important reframing.</p>
<p>The question stops being “Can I use this?” and becomes “Am I the right layer to use this?” That shift alone eliminates a surprising number of accidental performance problems.</p>
<p>Seen this way, microtasks aren’t an optimisation tool. They’re a coordination tool. And coordination, by definition, belongs at the level that can see the whole system.</p>
<p>Ryan doesn’t push a solution here. He just makes the boundary visible. And once it’s visible, it becomes very hard to unsee.</p>
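<p>To make that boundary concrete, here’s a sketch of the library-level pattern this section describes: batching many synchronous state writes into a single flush, scheduled as a microtask so it completes before the browser can render. The names (<code>scheduleUpdate</code>, <code>render</code>) are illustrative, not from any specific framework:</p>

```javascript
// Library-level use of the microtask guarantee: however many updates
// are requested synchronously, exactly one flush runs, and it runs
// before the browser is allowed to paint a half-applied state.
const dirty = new Set();
let flushQueued = false;

function scheduleUpdate(component) {
  dirty.add(component);
  if (!flushQueued) {
    flushQueued = true;
    // queueMicrotask: the flush is an obligation, not a suggestion.
    queueMicrotask(() => {
      flushQueued = false;
      const batch = [...dirty];
      dirty.clear();
      for (const c of batch) c.render(); // each component renders once
    });
  }
}
```

<p>This is exactly the authority application code rarely needs: the framework knows the blast radius of delaying rendering; a random event handler usually doesn’t.</p>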
<hr />
<h2>Idle Time, Yielding, and Cooperative Scheduling</h2>
<blockquote>
<p>Ryan isn’t advocating for more APIs — he’s advocating for better manners.</p>
</blockquote>
<p>By this point in the talk, the tone shifts.</p>
<p>Up until now, Ryan has been explaining <em>why things go wrong</em>. Here, he starts talking about what it looks like when things go <em>right</em>: not because we’ve found a perfect API, but because we’ve adopted a different attitude toward time.</p>
<p>The word that keeps surfacing, implicitly and explicitly, is <strong>cooperation</strong>.</p>
<p>Idle time exists because the browser is not always under pressure. There are moments (small, fragmented, unpredictable) where nothing urgent is happening. No input to process. No frame deadline to hit. No layout work queued up. Just… space.</p>
<p>Ryan is careful not to romanticise this. Idle time isn’t a promise. It’s not guaranteed. It’s an opportunity the browser may or may not offer, depending on conditions you don’t fully control.</p>
<p>And that’s exactly the point.</p>
<p>APIs like <code>requestIdleCallback</code> and ideas like yielding aren’t about squeezing more work into the system. They’re about <strong>learning when to step back</strong>. About giving the browser explicit permission to prioritise the user over your computation.</p>
<p>This is a very different posture from traditional async thinking.</p>
<p>Instead of asking, <em>When can I run?</em></p>
<p>You start asking, <em>When should I not?</em></p>
<p>Yielding, in Ryan’s framing, is an act of trust. You’re acknowledging that the browser has a better global view of urgency than you do. That it can decide when your work is no longer appropriate to continue uninterrupted.</p>
<p>This is also where the illusion of control finally dissolves.</p>
<p>You can break work into chunks. You can defer it. You can ask nicely. But you can’t force the browser to care about your task more than it cares about responsiveness. Cooperative scheduling works not because it’s clever, but because it respects that hierarchy.</p>
<p>Ryan isn’t advocating for more APIs here. He’s advocating for better manners.</p>
<p>For writing code that assumes interruption is normal. That progress may be incremental. That finishing later is preferable to blocking now. It’s a mindset shift away from “make it finish” and toward “make it survivable.”</p>
<p>Once you see scheduling this way, performance stops being a battle for priority and starts becoming a conversation about timing.</p>
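<p>In code, that conversation can look like a time budget with explicit yield points. <code>scheduler.yield()</code> is the emerging API for exactly this; the <code>setTimeout</code> fallback below is a portable approximation, and the function names are illustrative:</p>

```javascript
// Yield point: prefer the Scheduler API where available, otherwise
// fall back to a macro task so the browser gets a turn either way.
const yieldToMain = () =>
  typeof scheduler !== "undefined" && typeof scheduler.yield === "function"
    ? scheduler.yield()
    : new Promise((resolve) => setTimeout(resolve, 0));

// Process items under a small time budget, assuming interruption is
// normal: when the budget runs out, hand control back and continue
// in the next slice.
async function runCooperatively(items, handle, budgetMs = 5) {
  let deadline = Date.now() + budgetMs;
  for (const item of items) {
    handle(item);
    if (Date.now() >= deadline) {
      await yieldToMain(); // input and rendering can run here
      deadline = Date.now() + budgetMs;
    }
  }
  return items.length;
}
```

<p>Nothing here forces the browser to care about our task; it just makes the work survivable when the browser decides something else matters more.</p>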
<hr />
<h2>The Mental Model Ryan Is Actually Teaching</h2>
<blockquote>
<p>This is the real lesson of the video: not how to schedule work, but how to think about time.</p>
</blockquote>
<p>By the time Ryan reaches this point in the talk, he’s already said everything he needs to say.</p>
<p>The rest is integration.</p>
<p>What becomes clear, even though he never states it outright, is that this talk was never really about time slicing as a technique. It was about <strong>recalibrating how we think about time itself in the browser</strong>.</p>
<p>The browser is not a neutral stage where your code performs. It’s an active participant with its own goals: keep the interface responsive, honour user input, and render frames when it can do so safely. Your code runs <em>inside</em> that system, not above it.</p>
<p>Once you internalise that, several long-running frontend debates start to lose their urgency.</p>
<p>Responsiveness is no longer something you “add” with the right abstraction. It’s something you preserve by not overstaying your welcome. Scheduling stops being about clever prioritisation and starts being about <em>knowing when to yield</em>.</p>
<p>Time slicing, in this light, isn’t an optimisation trick. It’s a <strong>social contract</strong>. An agreement between your code and the platform: <em>I’ll make progress in pieces, and you’ll let me keep going as long as I don’t monopolise the system</em>.</p>
<p>This is why Ryan keeps returning to constraints instead of solutions. Constraints are stable. APIs change. Frameworks evolve. But the browser’s need to remain interactive does not.</p>
<p>The real lesson here is not how to schedule work more aggressively, but how to think more humbly. To assume interruption. To expect preemption. To design systems that degrade gracefully rather than insisting on finishing everything in one uninterrupted stretch.</p>
<p>Ryan doesn’t give you a recipe at the end of this talk. He gives you a lens. One that makes a lot of existing behaviour, both good and bad, suddenly make sense.</p>
<p>And once you adopt that lens, it becomes much harder to write code that accidentally fights the browser instead of working alongside it.</p>
<h2>The Main Takeaway</h2>
<p>If there’s a single idea to take from Ryan’s talk, it’s this:</p>
<p><strong>Responsiveness isn’t something you bolt on. It’s something you preserve by respecting the browser’s priorities.</strong></p>
<p>Ryan doesn’t present time slicing as a feature or a technique to adopt. He presents it as an <em>emergent behaviour</em> of a system that is constantly negotiating between work and experience. The browser isn’t trying to be clever; it’s trying to stay usable.</p>
<p>Once you internalise that, a lot of frontend advice starts to sound slightly off.</p>
<p>You stop asking which abstraction gives you more control, and start asking which ones assume control they don’t actually have. You become more suspicious of anything that promises uninterrupted execution. You start designing work to be interruptible by default, not as an afterthought.</p>
<p>The deeper lesson here isn’t about queues or APIs; it’s about humility.</p>
<p>Ryan’s framing reminds us that the browser has always been the scheduler. Frameworks can cooperate with that reality, or they can fight it, but they can’t replace it. The more your code acknowledges that fact, the less surprising performance problems become.</p>
<p>Time slicing, in this sense, isn’t a trick.</p>
<p>It’s what happens when you stop trying to win against the platform and start working with it.</p>
<hr />
<h2>Watch the Video. Seriously.</h2>
<blockquote>
<p>If any of this felt useful, it’s because Ryan already did the hard thinking.</p>
</blockquote>
<p>If this post worked at all, it’s because the thinking didn’t start here.</p>
<p>Everything above is a summary, a reframing, a set of notes written in the margins of someone else’s explanation. The clarity, the insistence on starting from constraints instead of abstractions, comes from Ryan Carniato, not from me.</p>
<p>This is one of those talks that’s worth watching more than once. Not because it’s dense, but because it quietly rewires how you interpret everyday frontend behaviour. You start noticing <em>why</em> certain updates feel janky. Why some async work feels harmless and other work feels toxic. Why the browser sometimes feels like it’s “fighting you,” when in reality it’s just enforcing priorities you didn’t account for.</p>
<p>If you take one thing from this post, let it be this:</p>
<p>don’t outsource your mental model of time to a framework.</p>
<p>Watch the video. Pause it. Rewind it. Let the event loop, not the abstraction, be the thing you reason from. Everything else makes more sense once that’s in focus.</p>
<hr />
<h2>A–Z Terminology Appendix</h2>
<blockquote>
<p>These terms appear either directly in Ryan’s talk or naturally orbit the constraints he’s describing. Definitions are intentionally brief — the goal is orientation, not mastery.</p>
</blockquote>
<hr />
<h3><strong>A — Animation Frame</strong></h3>
<p>A single visual update cycle in the browser, typically targeting ~60 frames per second. Each frame represents an opportunity for the browser to update layout, paint, and composite the UI.</p>
<p><strong>Suggested links:</strong></p>
<ul>
<li><p><a href="https://developer.mozilla.org/en-US/docs/Web/Performance/Guides/Critical_rendering_path">MDN</a> — <em>Critical rendering</em></p>
</li>
<li><p><a href="https://web.dev/articles/rendering-performance">web.dev</a> — <em>Rendering performance overview</em></p>
</li>
</ul>
<hr />
<h3><strong>B — Browser Scheduler</strong></h3>
<p>The internal browser mechanism responsible for deciding <em>when</em> JavaScript runs relative to rendering, input handling, layout, and other system work.</p>
<p><strong>Suggested links:</strong></p>
<ul>
<li><p><a href="https://developer.mozilla.org/en-US/docs/Web/API/Scheduler">Scheduler</a> — <em>Scheduling tasks</em></p>
</li>
<li><p><a href="https://web.dev/articles/optimize-long-tasks">web.dev</a> — <em>Optimize long tasks</em></p>
</li>
</ul>
<hr />
<h3><strong>C — Cooperative Scheduling</strong></h3>
<p>A scheduling model where tasks voluntarily yield control back to the scheduler, allowing higher-priority work (like rendering or input) to proceed.</p>
<p><strong>Suggested links:</strong></p>
<ul>
<li><p><a href="https://developer.mozilla.org/en-US/docs/Web/API/Background_Tasks_API">MDN</a> — <em>Cooperative scheduling of background tasks</em></p>
</li>
<li><p><a href="https://github.com/WICG/scheduling-apis">W3C</a> — <em>Scheduling APIs explainer</em></p>
</li>
</ul>
<hr />
<h3><strong>D — Dead Time</strong></h3>
<p>Periods where the browser is idle — no urgent rendering, input, or high-priority tasks are pending — and background work may safely proceed.</p>
<p><strong>Suggested links:</strong></p>
<ul>
<li><p><a href="https://developer.mozilla.org/en-US/docs/Web/API/Window/requestIdleCallback">MDN</a> — <em>requestIdleCallback</em></p>
</li>
<li><p><a href="https://philipwalton.com/articles/idle-until-urgent/">Philip Walton</a> — <em>Idle until urgent</em></p>
</li>
</ul>
<hr />
<h3><strong>E — Event Loop</strong></h3>
<p>The core execution model that processes tasks, microtasks, and rendering steps in a defined order, ensuring JavaScript runs in a single-threaded but asynchronous environment.</p>
<p><strong>Suggested links:</strong></p>
<ul>
<li><p><a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Execution_model">MDN</a> — <em>Concurrency model and event loop</em></p>
</li>
<li><p><a href="https://jakearchibald.com/2015/tasks-microtasks-queues-and-schedules/">Jake Archibald</a> — <em>Tasks, microtasks, queues and schedules</em></p>
</li>
</ul>
<hr />
<h3><strong>F — Frame Budget</strong></h3>
<p>The amount of time available to complete work within a single frame without dropping frames (roughly 16ms at 60fps).</p>
<p><strong>Suggested links:</strong></p>
<ul>
<li><p><a href="https://web.dev/articles/rail">web.dev</a> — <em>RAIL performance model</em></p>
</li>
<li><p><a href="https://developer.chrome.com/docs/devtools/rendering/performance">Chrome Developers</a> — <em>Rendering performance</em></p>
</li>
</ul>
<hr />
<h3><strong>G — Garbage Collection</strong></h3>
<p>The automatic process by which the JavaScript engine reclaims memory that is no longer reachable, potentially causing pauses if triggered during critical rendering moments.</p>
<p><strong>Suggested links:</strong></p>
<ul>
<li><p><a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Memory_management">MDN</a> — <em>Memory management</em></p>
</li>
<li><p><a href="https://v8.dev/blog/trash-talk">V8.dev</a> — <em>Garbage collection</em></p>
</li>
</ul>
<hr />
<h3><strong>H — High-Priority Tasks</strong></h3>
<p>Work that the browser prioritises to maintain usability, such as user input processing, visual updates, and accessibility-related operations.</p>
<p><strong>Suggested links:</strong></p>
<ul>
<li><p><a href="https://web.dev/blog/better-responsiveness-metric">web.dev</a> — <em>Input responsiveness</em></p>
</li>
<li><p><a href="https://web.dev/articles/user-centric-performance-metrics">web.dev</a> — <em>User-centric performance metrics</em></p>
</li>
</ul>
<hr />
<h3><strong>I — Idle Callback</strong></h3>
<p>A callback scheduled to run during browser idle periods, allowing non-urgent work to execute without blocking rendering or input.</p>
<p><strong>Suggested links:</strong></p>
<ul>
<li><a href="https://developer.mozilla.org/en-US/docs/Web/API/Window/requestIdleCallback">MDN</a> — <em>Using requestIdleCallback</em></li>
</ul>
<hr />
<h3><strong>J — Job Queue</strong></h3>
<p>An internal queue (often used interchangeably with microtask queue) that holds promise reactions and other jobs that must complete before rendering continues.</p>
<p><strong>Suggested links:</strong></p>
<ul>
<li><p><a href="https://tc39.es/ecma262/#sec-jobs">ECMAScript spec</a> — <em>Jobs and Job Queues</em></p>
</li>
<li><p><a href="https://jakearchibald.com/2015/tasks-microtasks-queues-and-schedules/">Jake Archibald</a> — <em>Tasks vs microtasks</em></p>
</li>
</ul>
<hr />
<h3><strong>M — Macro Task</strong></h3>
<p>A unit of work scheduled via mechanisms like <code>setTimeout</code>, <code>setInterval</code>, or I/O events. After each macro task, the browser has an opportunity to render.</p>
<p><strong>Suggested links:</strong></p>
<ul>
<li><p><a href="https://developer.mozilla.org/en-US/docs/Web/API/Window/setTimeout">MDN</a> — <em>setTimeout</em></p>
</li>
<li><p><a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Execution_model">MDN</a> — <em>Event loop</em></p>
</li>
</ul>
<hr />
<h3><strong>M — Microtask</strong></h3>
<p>A high-priority task (e.g. Promise callbacks) that must fully flush before the browser can render or process the next macro task.</p>
<p><strong>Suggested links:</strong></p>
<ul>
<li><p><a href="https://developer.mozilla.org/en-US/docs/Web/API/HTML_DOM_API/Microtask_guide">MDN</a> — <em>Microtask queue</em></p>
</li>
<li><p><a href="https://jakearchibald.com/2015/tasks-microtasks-queues-and-schedules/">Jake Archibald</a> — <em>Tasks, microtasks, queues and schedules</em></p>
</li>
</ul>
<hr />
<h3><strong>P — Promise Semantics</strong></h3>
<p>The guarantees provided by Promises — particularly that their callbacks execute as microtasks — ensuring consistency, sometimes at the expense of responsiveness.</p>
<p><strong>Suggested links:</strong></p>
<ul>
<li><p><a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Using_promises">MDN</a> — <em>Using Promises</em></p>
</li>
<li><p><a href="https://tc39.es/ecma262/#sec-promise-jobs">ECMAScript spec</a> — <em>Promise jobs</em></p>
</li>
</ul>
<hr />
<h3><strong>R — requestAnimationFrame</strong></h3>
<p>An API that requests a callback before the next browser repaint, allowing visual updates to align with the rendering pipeline.</p>
<p><strong>Suggested links:</strong></p>
<ul>
<li><p><a href="https://developer.mozilla.org/en-US/docs/Web/API/Window/requestAnimationFrame">MDN</a> — <em>requestAnimationFrame</em></p>
</li>
<li><p><a href="http://web.dev">web.dev</a> — <em>Using requestAnimationFrame</em></p>
</li>
</ul>
<hr />
<h3><strong>R — requestIdleCallback</strong></h3>
<p>An API that schedules low-priority work during idle periods, when the browser determines it won’t impact responsiveness.</p>
<p><strong>Suggested links:</strong></p>
<ul>
<li><p><a href="https://developer.mozilla.org/en-US/docs/Web/API/Window/requestIdleCallback">MDN</a> — <em>requestIdleCallback</em></p>
</li>
<li><p><a href="https://philipwalton.com/articles/idle-until-urgent/">Blog</a> — <em>Idle until urgent</em></p>
</li>
</ul>
<hr />
<h3><strong>S — Scheduler API</strong></h3>
<p>An emerging set of APIs designed to give developers better control over task prioritisation and yielding without blocking the main thread.</p>
<p><strong>Suggested links:</strong></p>
<ul>
<li><p><a href="https://developer.mozilla.org/en-US/docs/Web/API/Scheduler">MDN</a> — <em>Scheduler API</em></p>
</li>
<li><p><a href="https://wicg.github.io/scheduling-apis/">WICG</a> — <em>Scheduler API explainer</em></p>
</li>
</ul>
<hr />
<h3><strong>T — Time Slicing</strong></h3>
<p>A technique where long-running work is broken into smaller chunks, allowing the browser to interleave rendering and input between them.</p>
<p><strong>Suggested links:</strong></p>
<ul>
<li><p><a href="http://web.dev">web.dev</a> — <em>Optimize long tasks</em></p>
</li>
<li><p><a href="https://medium.com/@ignatovich.dm/react-19s-engine-a-quick-dive-into-concurrent-rendering-6436d39efe2b">Medium</a> — <em>Concurrent rendering (for context only)</em></p>
</li>
</ul>
<hr />
<h3><strong>Y — Yielding</strong></h3>
<p>The act of voluntarily pausing or deferring work so the browser can prioritise rendering, input, or other high-priority tasks.</p>
<p><strong>Suggested links:</strong></p>
<ul>
<li><p><a href="http://web.dev">web.dev</a> — <em>Yielding to the main thread</em></p>
</li>
<li><p><a href="https://web.dev/articles/optimize-long-tasks">Chrome Developers</a> — <em>Scheduling strategies</em></p>
</li>
</ul>
<hr />
<blockquote>
<p>Attribution</p>
<p>This article is a thematic summary of a talk by Ryan Carniato.</p>
<p>All original insights belong to the creator.</p>
<p>Any framing errors are mine.</p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[Is Productivity Flatlining? The Death of Hustle in a Burnout Society]]></title><description><![CDATA[Jamie wakes up to the chirp of her productivity app.She logs her meditation, tracks her sleep, checks her calories, and scrolls through a feed of “5am Club” videos all before sunrise. By 8am, she’s already reported to five algorithms and not herself....]]></description><link>https://blog.danielphilipjohnson.co.uk/is-productivity-flatlining-hustle-culture-burnout</link><guid isPermaLink="true">https://blog.danielphilipjohnson.co.uk/is-productivity-flatlining-hustle-culture-burnout</guid><category><![CDATA[Productivity]]></category><category><![CDATA[burnout]]></category><category><![CDATA[HustleCulture]]></category><category><![CDATA[Self Improvement ]]></category><dc:creator><![CDATA[Daniel Philip Johnson]]></dc:creator><pubDate>Thu, 09 Oct 2025 23:00:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1759766914299/e1c2f452-6bae-4bcb-b925-09e982c4a632.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Jamie wakes up to the chirp of her productivity app.<br />She logs her meditation, tracks her sleep, checks her calories, and scrolls through a feed of “5am Club” videos, all before sunrise. By 8am, she’s already reported to five algorithms, and not herself.</p>
<p>We were promised that if we hustled hard enough, tracked our habits, and grinded through 5am routines, the path to millionaire status was waiting for us.</p>
<p>Wake up early. Meditate. Cold shower. Journal. Work harder than everyone else.<br />Somewhere between the vision boards and the “rise and grind” playlists, we started believing that <em>optimisation equals salvation.</em></p>
<p>But what if productivity has stopped delivering on its promise?<br />What if we’ve reached the ceiling not of effort, but of meaning?</p>
<h2 id="heading-the-viral-violence-of-productivity"><strong>The Viral Violence of Productivity</strong></h2>
<p>Philosopher Byung-Chul Han called it <em>the violence of positivity.</em><br />It’s not the old kind of oppression: no boss yelling, no whip cracking. It’s internal.<br />We’ve become our own exploiters, whispering “just one more push” while burning the candle at both ends.</p>
<p>The virus spreads itself disguised as ambition.<br />It wears motivational quotes and pastel infographics.<br />It preaches on TikTok:</p>
<blockquote>
<p>“You have the same 24 hours as Beyoncé.”</p>
</blockquote>
<p>No, you don’t. You have rent, a backache, and three browser tabs open to Indeed.</p>
<p>Everywhere you look, the same infection:</p>
<ul>
<li><p>Productivity gurus selling Notion templates for “balance.”</p>
</li>
<li><p>Life coaches turning anxiety into a subscription plan.</p>
</li>
<li><p>Corporate slogans demanding “passion” as code for unpaid overtime.</p>
</li>
</ul>
<p>It’s not <em>forced labour</em> anymore; it’s <em>voluntary burnout.</em><br />We brand it as “hustle.” We call it “grindset.” But it’s still violence, just self-inflicted and anaesthetised.</p>
<p>And like any virus, it mutates.<br />Even rest has been re-branded as a productivity tool: “Sleep to perform better.” “Rest days make you stronger.”<br />We can’t even breathe without optimising it. The modern worker doesn’t rest; they <em>recover</em> for the next round of output.</p>
<p>Every scroll reinforces it. Hustle content mocks itself to seem ironic, then sells you another planner. Productivity has learned to wear humour as camouflage.</p>
<p>The cruel genius of productivity culture is that it colonises our inner world.<br />We don’t need bosses to overwork us anymore; we’ve internalised them perfectly.</p>
<h2 id="heading-the-flatline-effect"><strong>The Flatline Effect</strong></h2>
<p>There was a time when the story worked.<br />The myth was simple: start a company, work hard, scale, win big.<br />But that story belonged to a smaller internet, a cheaper world, a looser economy.</p>
<p>Today, everything is crowded, costly, and gatekept.<br />The algorithm decides who gets seen. Platforms extract the profit.<br />The rest of us chase scraps of visibility and call it “traction.”</p>
<p>Productivity used to promise leverage. Now it mostly delivers exhaustion.</p>
<p>Here’s the harsh maths of the modern hustle:</p>
<ul>
<li><p><strong>The Dropshipping Dream</strong><br />  You’re reselling books and fashion you found elsewhere.<br />  You stay up tweaking ads, negotiating refunds, fighting supplier delays.<br />  Hours: 70+ per week.<br />  Profit: maybe £300 a month after shipping and fees.<br />  The TikTok guru says, <em>“It’s passive income.”</em><br />  No, it’s just <em>unpaid overtime with better lighting.</em></p>
</li>
<li><p><strong>The Content Creator Illusion</strong><br />  You’re filming, editing, posting daily.<br />  You check analytics more than messages from friends.<br />  One video “pops,” then it’s back to silence.<br />  You’ve become a full-time marketer for yourself — and the product is anxiety.</p>
</li>
<li><p><strong>The Corporate Productivity Trap</strong><br />  You track Pomodoros, optimise meetings, and colour-code burnout in five apps.<br />  The company saves money; your wages stay frozen.<br />  They call it “efficiency.” You call it Tuesday.</p>
</li>
<li><p><strong>The Startup Mirage</strong><br />  You raise a seed round, hire fast, chase “growth.”<br />  Two years later, you’re pivoting for the fourth time and writing a LinkedIn post about “learning through failure.”<br />  Behind the scenes, you’re living off caffeine and deferred hope.</p>
</li>
</ul>
<p>We’re not getting lazier; we’re just realising the system’s ROI is broken.<br />More effort no longer guarantees progress.<br />We’re running faster on a treadmill that isn’t plugged in.</p>
<p>The data agrees: productivity growth in most developed nations has flat-lined for over a decade, while burnout rates and mental health crises have skyrocketed.<br />We’re producing dashboards, not progress.<br />We’ve reached a point where effort exceeds meaning and the graph has gone flat.</p>
<h2 id="heading-the-collapse-of-the-hustle-illusion"><strong>The Collapse of the Hustle Illusion</strong></h2>
<p>Hustle culture ran on two fuels: <strong>hope and denial.</strong></p>
<p>Hope that the next project, the next all-nighter, the next viral moment would change everything.<br />Denial that our bodies, minds, and economies were quietly collapsing under the strain.</p>
<p>But the cracks are showing.<br />Quiet quitting. Burnout. Mental health leave.<br />The word <em>ambition</em> now feels suspicious: a re-brand of exhaustion dressed as empowerment.</p>
<p>The modern worker isn’t lazy; they’re haunted.<br />Haunted by unread self-help books, by tabs of “how to stay motivated,” by the quiet fear that slowing down means disappearing.</p>
<p>You can see the rebellion forming.<br />“Lazy girl jobs.” “Slow living.” “Anti-ambition.” Beneath the memes is something serious: a generation realising that endless motion is not the same as progress.</p>
<p>We were told to “love what you do.”<br />We did. Now we’re too tired to love anything.</p>
<h2 id="heading-what-comes-after-hustle"><strong>What Comes After Hustle?</strong></h2>
<p>Maybe the next revolution isn’t faster; it’s slower.<br />Maybe productivity needs to evolve from <em>output</em> to <em>orientation.</em></p>
<p>Less output. More depth.<br />Less optimisation. More observation.<br />Less “how can I do more?” and more “why am I doing this at all?”</p>
<p>Some are already making the shift:</p>
<ul>
<li><p>Writers building small, loyal audiences instead of chasing virality.</p>
</li>
<li><p>Designers trading speed for craftsmanship.</p>
</li>
<li><p>Developers optimising code not for metrics, but for elegance.</p>
</li>
<li><p>Teams adopting “sane sprints” instead of hero marathons.</p>
</li>
</ul>
<p>This isn’t laziness; it’s sustainability.<br />It’s choosing meaning over metrics.</p>
<p>As Han wrote, we became <em>“achievement subjects”</em>, defined not by who we are, but by what we can produce.<br />The cure isn’t another morning routine.<br />It’s permission to <em>be</em> instead of constantly <em>becoming.</em></p>
<p>To reclaim time not as a container for output, but as a space for thought.<br />To measure success by <em>presence</em>, not performance.</p>
<p>Maybe the real act of rebellion is to unplug, walk outside, and remember that the world existed before calendars synced across devices.</p>
<h2 id="heading-closing-thoughts"><strong>Closing Thoughts</strong></h2>
<p>Maybe productivity didn’t flat-line because we got lazy.<br />Maybe it flat-lined because the old story that relentless hustle equals freedom was always viral violence in disguise.</p>
<p>The myth of infinite growth doesn’t just break economies; it breaks people.<br />And maybe, just maybe, the most radical act left is to rest and call it resistance.</p>
<p>Tomorrow morning, the apps will chirp again.<br />The world will whisper, “optimise.”<br />Maybe this time, you’ll swipe left on the notification, not to perform rebellion, but to remember what being human feels like.</p>
]]></content:encoded></item><item><title><![CDATA[LLMs: Slot Machines With a Fancy Autocomplete Button]]></title><description><![CDATA[Large Language Models (LLMs) are often described in breathless tones: “They’re the future of intelligence!” “They’ll replace programmers!” “They passed the bar exam!”
Calm down. Strip away the neon hype, and you’ll see what they really are: they’re s...]]></description><link>https://blog.danielphilipjohnson.co.uk/llms-slot-machines-with-a-fancy-autocomplete-button</link><guid isPermaLink="true">https://blog.danielphilipjohnson.co.uk/llms-slot-machines-with-a-fancy-autocomplete-button</guid><category><![CDATA[AI]]></category><category><![CDATA[largelanguagemodels]]></category><category><![CDATA[@TechCulture]]></category><category><![CDATA[satire]]></category><dc:creator><![CDATA[Daniel Philip Johnson]]></dc:creator><pubDate>Wed, 08 Oct 2025 12:00:20 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1759053719732/2c3a4a1e-3aec-4b69-af6e-32992ed473af.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Large Language Models (LLMs) are often described in breathless tones: “They’re the future of intelligence!” “They’ll replace programmers!” “They passed the bar exam!”</p>
<p>Calm down. Strip away the neon hype, and you’ll see what they really are: slot machines with autocomplete. Pull the handle, watch the tokens spin, and hope the symbols line up into something you can actually use.</p>
<p>Let’s step onto the casino floor of AI and take a tour.</p>
<h2 id="heading-pulling-the-handle-aka-prompting"><strong>Pulling the Handle (aka “Prompting”)</strong></h2>
<p>You type in:</p>
<p>“Write me a Shakespearean sonnet about Kubernetes.”</p>
<p>The machine whirs, lights flash, and probabilities spin. Out comes… something.</p>
<ul>
<li><p>Sometimes you get three 7s in a row: a beautifully coherent sonnet that almost makes you believe the machine has actually read Hamlet.</p>
</li>
<li><p>Sometimes you get BAR-CHERRY-FISH: a paragraph that looks impressive until you realise Kubernetes was replaced with cucumbers halfway through.</p>
</li>
<li><p>And most of the time you just get mundane lemons and cherries: generic text that “sort of” fits but tastes like it came from a technical fortune cookie.</p>
</li>
</ul>
<p>You clap when the 7s line up. You shrug when you get lemons. And when it gives you cucumbers, you think, “Well, maybe I’ll just pull the handle again.”</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759052609690/75c7a349-a951-4531-afd8-ad5408cbd247.jpeg" alt="Slot machine metaphor for AI and Large Language Modell" class="image--center mx-auto" /></p>
<p>Congratulations. You’re hooked.</p>
<blockquote>
<p>Finally, a future where human progress depends on pulling a lever until autocomplete spits out Shakespeare.</p>
</blockquote>
<h2 id="heading-the-near-miss-addiction"><strong>The Near-Miss Addiction</strong></h2>
<p>Slot machines make their money on the near miss. Two cherries, then a lemon. Two 7s, then a BAR. Just close enough to trick your brain into thinking you’re winning.</p>
<p>LLMs do the same thing:</p>
<ul>
<li><p>The essay is almost coherent.</p>
</li>
<li><p>The code is almost functional.</p>
</li>
<li><p>The explanation is almost right.</p>
</li>
</ul>
<p>You lean in, convinced you’re one prompt away from perfection. So you tweak the wording, add a few exclamation marks, whisper sweet nothings about “acting like an expert in Kubernetes cucumbers”, and spin again.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759052786513/54a155ce-308f-4c8d-b7d3-b6576e3cf188.jpeg" alt="AI near miss concept with probabilities spinning" class="image--center mx-auto" /></p>
<p>Not intelligence: addiction by autocomplete.</p>
<blockquote>
<p>It almost shipped your feature in a day… then buried you under lint errors so big they need their own Jira epic.</p>
</blockquote>
<h2 id="heading-the-jackpot-illusion"><strong>The Jackpot Illusion</strong></h2>
<p>Every casino thrives on jackpot stories. The Instagram post of someone holding a giant check is worth more than the payout itself.</p>
<p>LLMs are no different:</p>
<ul>
<li><p>“It passed the bar exam!”</p>
</li>
<li><p>“It wrote a bestselling novel draft!”</p>
</li>
<li><p>“It solved my Wordle in two tries!”</p>
</li>
</ul>
<p>Those are the jackpots you hear about. What you don’t see are the other 99 pulls that day: the hallucinated citations, the broken functions, and the wildly confident claims about Australia being in the Northern Hemisphere.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759052843041/c4bdbbdd-0fa0-4006-95ca-289e6e16f48b.jpeg" alt="Jackpot illusion representing AI success stories" class="image--center mx-auto" /></p>
<p>Jackpots sell the machine. Garbage gets swept under the rug.</p>
<blockquote>
<p>Sure, it passed the bar exam. So did half the lawyers advertising on bus stops.</p>
</blockquote>
<h2 id="heading-the-garbage-payouts"><strong>The Garbage Payouts</strong></h2>
<p>Sometimes, the slot machine doesn’t even bother with near misses. It just spits out garbage and still plays the triumphant jingle.</p>
<ul>
<li><p>Ask for a recipe? It forgets half the ingredients but assures you it’s “authentic”.</p>
</li>
<li><p>Ask for history? Suddenly Winston Churchill and Gandalf are the same person.</p>
</li>
<li><p>Ask for Python code? Hope you enjoy debugging a confident wall of nonsense that imports numpy.magicbeans.</p>
</li>
</ul>
<p>It’s nonsense, but delivered with the swagger of the Wolf of Wall Street selling you a pen – theatre so slick you almost mistake it for intelligence. And just like penny stocks dressed up as the next big thing, the model sells you garbage with such confidence that you buy in, only to realise later it’s worthless.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759052975325/2c0b8d0b-3440-43b6-b198-245b5de93bdb.jpeg" alt="Garbage output from AI models with confident tone" class="image--center mx-auto" /></p>
<h2 id="heading-why-it-feels-smart"><strong>Why It Feels Smart</strong></h2>
<p>Slot machines are designed to keep you playing. Flashy sounds, colourful animations, and near misses that convince you you’re “so close”.</p>
<p>LLMs use the same trick. They wrap their randomness in:</p>
<ul>
<li><p>Confident phrasing (“As an expert, here’s the definitive answer…”).</p>
</li>
<li><p>Neat formatting (bullet points make everything look credible).</p>
</li>
<li><p>Authoritative tone (it says it like it knows).</p>
</li>
</ul>
<p>You nod along, thinking: ‘Wow, this thing really understands me!’ Spoiler: it doesn’t. It’s just spitting out weighted randomness faster than your brain can register, and because you don’t fully know the topic, it feels more believable than it should. That’s the same trick casinos use with flashing lights and near-misses: you feel like you’re ahead, but the math says otherwise.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759053255121/63d83399-a003-4518-9732-0821436af3f9.jpeg" alt="Silicon Valley tuning LLM models behind the scenes" class="image--center mx-auto" /></p>
<blockquote>
<p>Nothing says ‘I’m winning’ like getting scammed by math with better formatting.</p>
</blockquote>
<h2 id="heading-the-house-always-wins"><strong>The House Always Wins</strong></h2>
<p>Behind the curtain, there’s no digital genius plotting your enlightenment. Just casino managers in hoodies tweaking payout odds:</p>
<ul>
<li><p>Reduce hallucinations by 10%.</p>
</li>
<li><p>Increase “sounding smart” by 15%.</p>
</li>
<li><p>Add a safety layer so it stops recommending bleach smoothies.</p>
</li>
</ul>
<p>They’re not creating minds. They’re tuning slot machines. And every adjustment is designed to keep you seated at the table, credit card still on file.</p>
<p>Because just like Vegas, the AI casino isn’t built for you to win. It’s built to make sure the house always wins.</p>
<blockquote>
<p>Don’t worry, it’s not rigged — it’s just tuned so you always lose politely.</p>
</blockquote>
<h2 id="heading-the-cult-of-the-high-rollers"><strong>The Cult of the High Rollers</strong></h2>
<p>Every casino has its legends: the guy who swears he can ‘read’ the machine’s pattern, the woman who believes the jackpot is ‘due’. AI has them too. We call them ‘prompt engineers’. They sit for hours whispering to the model, ‘Act like Socrates. No, act like Shakespeare. No, act like Socrates who knows Kubernetes.’</p>
<p>They aren’t unlocking intelligence. They’re superstitioning their way through probability wheels. Every tweak of a prompt is just pushing all their token chips onto another number and praying the wheel lands their way. And startups? They rebrand this roulette table routine as ‘the future of work’ while burning through VC money like a drunk gambler who thinks he’s cracked the system.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759053398246/cebf5905-d765-470a-a9e3-1fc7e5fea52e.jpeg" alt="Prompt engineers as casino gamblers adjusting prompts" class="image--center mx-auto" /></p>
<h2 id="heading-the-sarcastic-bottom-line"><strong>The Sarcastic Bottom Line</strong></h2>
<p>Large Language Models are not prophets, geniuses, or artificial gods. They are Vegas slot machines with autocomplete strapped on.</p>
<ul>
<li><p>You pull the handle (prompt).</p>
</li>
<li><p>The reels spin (probabilities).</p>
</li>
<li><p>Sometimes you win brilliance.</p>
</li>
<li><p>Sometimes you win cucumbers.</p>
</li>
<li><p>Most of the time you win… meh.</p>
</li>
</ul>
<p>And through it all, the flashing lights convince you it’s more than math.</p>
<p><strong>Moral of the Story</strong></p>
<p>The next time someone insists their chatbot is truly thinking, just smile and ask if they also believe the slot machine in Reno is running AGI instead of flashing lemons.</p>
<p>Because let’s be honest: LLMs aren’t thinking, reasoning, or dreaming of world domination. They’re random-number generators dressed in neon confidence.</p>
<p>The casino doesn’t care if you win once in a while as long as you keep pulling the handle. And in the great AI casino, the house isn’t Vegas. It’s Silicon Valley. And the house always wins.</p>
<p>“It’s not artificial intelligence. It’s artificial slot machines, and the jackpot is your credit card bill.”</p>
<p><strong>The Token Economy of the Future</strong></p>
<p>Give it a few years, and competitive coding won’t be about who writes the cleanest algorithm — it’ll be about who can coax an LLM into spitting out a solution while spending the fewest tokens.</p>
<p>Hackathons will brag: “Our team solved FizzBuzz with only 42 tokens.”</p>
<p>Job listings will read: “Seeking senior engineer with proven record of implementing microservices in under 1M tokens.”</p>
<p>And resumes will proudly include: “Optimised chat prompts to reduce burn rate from 2.3M tokens to 1.1M per week.”</p>
<blockquote>
<p><strong>Disclaimer:</strong> Visuals created with Google Gemini. The casino effects were purely for atmosphere; please don’t feed the algorithms your life savings.</p>
<p>We build machines that gamble with meaning; I grow trees that gamble with time.</p>
<p>When I’m not decoding the illusions of AI, I’m tending to real growth — shaping living systems one branch at a time.</p>
<p>You can see that quieter side of my work at <strong>danielphilipjohnson.com</strong>.</p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[You Can Read Anyone in Tech — Even Zuckerberg]]></title><description><![CDATA[Introduction: The Communication Bug in Tech
Tech loves to pretend it’s above all that. Above emotion. Above ambiguity. Above the mess of being human.
We praise clarity, logic, brevity. We optimise meetings, sanitise conversations, and pretend the loud...]]></description><link>https://blog.danielphilipjohnson.co.uk/you-can-read-anyone-in-tech-even-zuckerberg</link><guid isPermaLink="true">https://blog.danielphilipjohnson.co.uk/you-can-read-anyone-in-tech-even-zuckerberg</guid><category><![CDATA[#emotional intelligence]]></category><category><![CDATA[Startups]]></category><category><![CDATA[leadership]]></category><category><![CDATA[communication]]></category><dc:creator><![CDATA[Daniel Philip Johnson]]></dc:creator><pubDate>Fri, 08 Aug 2025 11:00:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1754385205383/110e4f93-4506-4582-b2d4-18335fdc6f9b.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction-the-communication-bug-in-tech"><strong>Introduction: The Communication Bug in Tech</strong></h2>
<p>Tech loves to pretend it’s above all that.<br />Above emotion. Above ambiguity. Above the mess of being human.</p>
<p>We praise clarity, logic, brevity. We optimise meetings, sanitise conversations, and pretend the loudest voice in the room knows what they’re doing. But underneath the metrics and models, most teams are quietly derailed by something we’re all terrible at: <strong>reading each other</strong>.</p>
<p>Not reading resumes. Reading <em>people</em>.</p>
<p>We misinterpret silence as confidence. We take quick comebacks as competence. We assume charisma equals credibility. And when things go sideways—when projects stall, co-founders implode, or teams quietly erode trust—we chalk it up to “communication issues” as if that’s a footnote, not the story.</p>
<p>So let’s talk about the masks we wear.<br />Let’s talk about what we’re really signalling—and what we’re too distracted or data-blind to see.</p>
<p>This post is a breakdown of communication in tech through two lenses:</p>
<ul>
<li><p><em>You Can Read Anyone</em>, a book that feels part psychological toolkit, part behavioural spy manual. It teaches how to spot emotional “tells,” masks, and misalignments in real time.</p>
</li>
<li><p><em>The Social Network</em>, the cinematic fever dream of startup culture, ego, betrayal—and some of the best examples of nonverbal miscommunication ever put to screen.</p>
</li>
</ul>
<p>Because whether you’re a product manager, an engineer, or the guy building the next Facebook knockoff in your dorm room—you’re navigating power, emotion, and social ambiguity every day. Most of us are just flying blind.</p>
<p>Let’s fix that.</p>
<hr />
<h2 id="heading-1-communication-in-tech-is-full-of-emotional-masks"><strong>1. Communication in Tech Is Full of Emotional Masks</strong></h2>
<p>Here’s the uncomfortable truth: for an industry obsessed with transparency, tech is full of people pretending.</p>
<p>Not pretending to be someone else. Pretending to be fine. Pretending to be confident. Pretending not to care.</p>
<p>These are what <em>You Can Read Anyone</em> calls <strong>emotional masks</strong>—protective behaviours we wear to avoid vulnerability. They show up in tone, pace, posture, and especially in who interrupts whom in a meeting. They’re not lies. They’re shields.</p>
<p>And in tech? We reward them.</p>
<ul>
<li><p>The emotionally detached engineer who “just wants to code” gets labelled focused.</p>
</li>
<li><p>The overcompensating founder with a cult of personality gets hailed as visionary.</p>
</li>
<li><p>The loyal operator who stays late and never pushes back? We call them dependable—but ignore the anxiety underneath.</p>
</li>
</ul>
<p>You’ve seen these people. Hell, you’ve probably <em>been</em> these people. I have.</p>
<p>And nowhere are these masks more visible—or more instructive—than in <em>The Social Network</em>.</p>
<p>Let’s decode the emotional tells behind the story of Facebook, one mask at a time.</p>
<h2 id="heading-2-mark-zuckerberg-the-detachment-mask"><strong>2. Mark Zuckerberg — The Detachment Mask</strong></h2>
<p><em>Perfectionism, emotional shutdown, and the myth of logic as leadership</em></p>
<p>Let’s start with the myth: Mark is brilliant, therefore Mark is cold. Or maybe it’s the other way around. Either way, we’re supposed to believe that Zuckerberg’s emotional detachment is a feature of his intelligence—not a bug in his ability to connect.</p>
<p>But what <em>You Can Read Anyone</em> teaches us is that emotional masks aren’t random—they’re reactive.<br />Zuckerberg’s detachment isn’t strength. It’s armour.</p>
<p>Watch the opening scene of <em>The Social Network</em>—the bar with Erica. She’s warm, direct, open. Mark’s rapid-fire responses feel smart on the surface, but underneath? They’re textbook <strong>emotional deflection</strong>. He can’t sit in discomfort. He can’t tolerate ambiguity. He has to win. Even if it means torching the conversation to do it.</p>
<blockquote>
<p>“You’re not an asshole, Mark. You’re just trying so hard to be.”<br />—Erica, doing what most tech leaders need: emotionally mirroring someone who won’t do it themselves.</p>
</blockquote>
<p>Throughout the film, Mark’s behaviour follows a pattern familiar to anyone who’s worked with a highly technical, emotionally stunted founder or senior engineer:</p>
<ul>
<li><p><strong>Monotone delivery</strong> → suppressing emotional vulnerability</p>
</li>
<li><p><strong>Sharp, logical comebacks</strong> → controlling the narrative through precision</p>
</li>
<li><p><strong>Avoiding eye contact</strong> → managing internal tension by disengaging</p>
</li>
</ul>
<p>In engineering culture, this gets misread as focus. As clarity.<br />But what it really is… is fear.</p>
<hr />
<h3 id="heading-in-the-real-tech-world">In the Real Tech World:</h3>
<p>You’ll see this mask in:</p>
<ul>
<li><p>Engineers who scoff at “soft skills”</p>
</li>
<li><p>Founders who ghost their co-founders rather than have a hard conversation</p>
</li>
<li><p>Tech leads who insist “they don’t do feelings” while quietly eroding their team’s trust</p>
</li>
</ul>
<p>These are people terrified of connection. Not because they’re cold, but because connection is unpredictable, unquantifiable, and uncontrollable. That’s unbearable when your identity is built around being right.</p>
<h3 id="heading-the-tell">The Tell:</h3>
<p>In <em>You Can Read Anyone</em>, one of the biggest tells is when someone shows a mismatch between content and delivery.<br />Mark’s words are sharp. His voice? Flat.<br />His ideas are disruptive. His energy? Muted.<br />That disconnect isn’t confidence—it’s suppression.</p>
<h3 id="heading-takeaway">Takeaway:</h3>
<p>If someone in your team always sounds emotionally “clean” (no pauses, no hesitations, no visible processing), they’re probably not calm.<br />They’re just highly practised at avoidance.</p>
<p>And if you’re that person?</p>
<p>Start small. Ask someone what they’re feeling, not just what they think.<br />Then sit in the silence after they answer.<br />Don’t optimise. Don’t redirect. Just stay.</p>
<p>That’s where real communication begins.</p>
<h2 id="heading-3-eduardo-saverin-the-pleaser-mask"><strong>3. Eduardo Saverin — The Pleaser Mask</strong></h2>
<p><em>Loyalty, insecurity, and the slow erosion of self-worth in startup dynamics</em></p>
<p>If Mark is trying not to feel anything, Eduardo is trying to <em>feel enough</em>. Enough for Mark. Enough for the business. Enough to be kept in the room.</p>
<p>From the beginning of <em>The Social Network</em>, Eduardo is positioned as the sensible one. The reliable one. The “business guy.” He does what’s asked of him. He shows up. He gets blindsided.</p>
<p>But if you watch closely, really watch, you’ll see the signs long before the betrayal. You’ll see a man constantly adjusting, constantly seeking approval. His emotional baseline isn’t grounded confidence. It’s <em>performance anxiety</em>.</p>
<blockquote>
<p>“I was your only friend. You had one friend.”<br />—Eduardo, when the mask finally breaks.</p>
</blockquote>
<p>In <em>You Can Read Anyone</em>, this is classic <strong>validation-seeking behaviour</strong>—the pleaser mask. It looks like professionalism. It feels like loyalty. But under the surface? It’s fear. Fear of abandonment. Fear of being irrelevant. Fear that competence is the only thing keeping you tethered to the table.</p>
<h3 id="heading-scene-breakdown">Scene Breakdown:</h3>
<p>📽 <strong>Key Scene: Eduardo confronts Mark after the shares are diluted.</strong><br />He doesn’t lead with logic. He leads with betrayal. This isn’t a contract dispute—it’s a heartbreak. And it’s not just about money—it’s about being discarded.</p>
<h3 id="heading-in-the-real-tech-world-1">In the Real Tech World:</h3>
<p>This mask is rampant. You’ll see it in:</p>
<ul>
<li><p>Founders who keep swallowing their discomfort because “now’s not the time to cause friction”</p>
</li>
<li><p>PMs who take on work that isn’t theirs to prove they’re still valuable</p>
</li>
<li><p>Engineers who confuse over-delivery with job security</p>
</li>
<li><p>People in 1:1s who say, “It’s all good”—and then quietly burn out</p>
</li>
</ul>
<p>The pleaser mask is brutal because it makes you complicit in your own marginalisation. You say yes. You make things work. You put your needs last. And when the cut comes, you’re surprised—because you never realised the emotional labour wasn’t being recognised. It was being expected.</p>
<h3 id="heading-the-tell-1">The Tell:</h3>
<p>In <em>You Can Read Anyone</em>, you spot pleasers by their language and body signals:</p>
<ul>
<li><p>Phrases like “I just want to help” or “I’m just trying to do what’s right”</p>
</li>
<li><p>Shrinking posture or constant seeking of eye contact for affirmation</p>
</li>
<li><p>Apologising even when boundaries are being crossed</p>
</li>
</ul>
<p>In Eduardo, it’s the desperate eye contact. The pleading voice. The “I was your only friend.” That’s not a business negotiation—it’s an emotional breakdown in a conference room.</p>
<h3 id="heading-takeaway-1">Takeaway:</h3>
<p>Not all team players are grounded. Some are just afraid.<br />If someone is always agreeable, always available, always <em>fine</em>—pause. Ask them what they’re afraid might happen if they weren’t.</p>
<p>And if this is you? Ask yourself what you’re proving, and to whom.<br />Because the cost of being “the reasonable one” is usually paid in silence—and backdated with resentment.</p>
<h2 id="heading-4-sean-parker-the-swagger-mask"><strong>4. Sean Parker — The Swagger Mask</strong></h2>
<p><em>Charisma, overcompensation, and the performance of genius</em></p>
<p>Sean Parker enters <em>The Social Network</em> like a storm. He’s charming, fast-talking, seductive. He says the right things with just enough pause to feel dangerous. He knows everyone. He <em>is</em> someone.</p>
<p>But look again. Strip away the confidence and what do you see?</p>
<p>A man who talks too much.<br />Moves too fast.<br />Doesn’t really <em>listen</em>.<br />And underneath the startup mythology? A deep fear of irrelevance.</p>
<blockquote>
<p>“A million dollars isn’t cool. You know what’s cool? A billion dollars.”<br />—Parker’s most quoted line, but also his deepest tell. It’s not about the money. It’s about being seen.</p>
</blockquote>
<p>In <em>You Can Read Anyone</em>, this is the <strong>overcompensator mask</strong>. The person who leads with swagger to hide the need underneath. The more they hype themselves, the more they’re signalling: <em>please believe I matter</em>.</p>
<h3 id="heading-scene-breakdown-1">Scene Breakdown:</h3>
<p><strong>Key Scene: The nightclub conversation with Mark</strong><br />On the surface, it’s mentorship. Underneath, it’s seduction by scale. Parker isn’t listening to Mark—he’s performing to him. Every sentence is a sales pitch. Every silence is a setup. It’s not a conversation. It’s a takeover.</p>
<h3 id="heading-in-the-real-tech-world-2">In the Real Tech World:</h3>
<p>The swagger mask is alive and well in:</p>
<ul>
<li><p>Founders with visionary decks but no delivery</p>
</li>
<li><p>Product leads who hijack meetings with TED Talk energy</p>
</li>
<li><p>People who interrupt to add “just one more thing” (that somehow loops back to them)</p>
</li>
<li><p>The guy at the offsite who dominates every breakout session and disappears when it's time to do the actual work</p>
</li>
</ul>
<p>It’s easy to get hypnotised by this mask—especially in environments that reward “big energy” and “bold thinking.” But charisma isn’t competence. And storytelling isn’t strategy.</p>
<h3 id="heading-the-tell-2">The Tell:</h3>
<p>The overcompensator usually shows their hand through:</p>
<ul>
<li><p>Name-dropping</p>
</li>
<li><p>Spotlight-stealing (“Let me tell you what <em>I</em> did at my last company…”)</p>
</li>
<li><p>Zero curiosity about others</p>
</li>
<li><p>A noticeable drop in composure when challenged or ignored</p>
</li>
</ul>
<p>Sean’s arc reveals it perfectly. The party ends. The music stops. He unravels the moment he’s not centre stage.</p>
<h3 id="heading-takeaway-2">Takeaway:</h3>
<p>The louder someone talks about themselves, the more you should wonder what they’re trying to drown out.<br />True confidence listens. It leaves space. It doesn’t need to convince.</p>
<p>And if you catch yourself overexplaining, overselling, or dominating conversations—pause. Ask: <em>Who am I trying to impress? And what would happen if I didn’t need to?</em></p>
<p>Because being seen isn’t the same as being safe.<br />And confidence that’s real? It doesn’t need to shout.</p>
<p>Now that we’ve unpacked the masks, it’s time to zoom out—to the culture that enables them, rewards them, and calls it “good communication.”</p>
<h2 id="heading-5-what-tech-gets-wrong-about-communication"><strong>5. What Tech Gets Wrong About Communication</strong></h2>
<p><em>Why emotional blindness looks like professionalism—and what it’s costing us</em></p>
<p>Tech likes to believe it's a meritocracy. That the best ideas win. That good communication is clear, concise, and clean—preferably in a slide deck or a well-formatted Notion page.</p>
<p>But the truth? Most of what we call "communication problems" in tech are <strong>emotional misreads</strong> in disguise.</p>
<p>We reward the loudest voice in the room.<br />We take “just playing devil’s advocate” as intellectual rigour.<br />We call people “low EQ” and then promote them anyway.<br />We think being objective means being emotionally absent.</p>
<p>In other words—we don’t listen.<br />We scan for logic. We optimise for output. We mistake emotional fluency for weakness.</p>
<blockquote>
<p>“People are constantly signalling their emotional state. Most of us just aren’t trained to see it.”<br />—<em>You Can Read Anyone</em></p>
</blockquote>
<p>It’s not that we don’t want to understand each other—it’s that we’ve built cultures where masking is safer than honesty.</p>
<p>So we end up:</p>
<ul>
<li><p>Confusing detachment for clarity</p>
</li>
<li><p>Mistaking validation-seeking for loyalty</p>
</li>
<li><p>Rewarding swagger over grounded confidence</p>
</li>
</ul>
<p>And when teams break? When communication collapses? We do retros. We write documents. We schedule more meetings. But we rarely ask the right question:</p>
<p><strong>What was this person protecting?</strong><br />Because that’s what a mask is for.</p>
<h3 id="heading-real-talk">Real Talk:</h3>
<p>If you’ve ever walked out of a meeting thinking, <em>“That felt off but I can’t explain why,”</em><br />If you’ve ever been blindsided by a quiet quit, a co-founder blow-up, or a team that just stopped trusting each other, this is why.</p>
<p>You didn’t miss the facts.<br />You missed the <em>signals</em>.</p>
<h3 id="heading-what-if-we-trained-for-that">What if we trained for that?</h3>
<p>What if tech teams actually learned to <em>read the room</em>, not just present in it?<br />What if we promoted people who could hold emotional tension, not just resolve it?<br />What if we stopped calling emotional awareness a “soft skill” and started calling it what it is—<strong>critical infrastructure</strong> for working with other humans?</p>
<p>Because in a world running on sprints and deadlines, it’s easy to miss that most delays aren’t technical.<br />They’re relational.</p>
<h2 id="heading-6-how-to-build-real-listening-skills-in-tech"><strong>6. How to Build Real Listening Skills in Tech</strong></h2>
<p><em>Because understanding people isn’t magic—it’s methodical</em></p>
<p>We’ve covered the masks. We’ve watched them play out in <em>The Social Network</em>. We’ve named what tech gets wrong about communication.</p>
<p>Now what?</p>
<p>You don’t need to be a psychologist or a coach. You don’t need a framework, a course, or a certification.</p>
<p>You just need to <strong>listen like the mask is part of the message.</strong></p>
<p>Because it is.</p>
<h3 id="heading-start-with-one-question">Start With One Question:</h3>
<p><strong>What are they protecting right now?</strong></p>
<p>This is the entry point.<br />Behind every interruption, shutdown, over-explanation, or detachment is a need—usually one of these:</p>
<ul>
<li><p><em>To be respected</em></p>
</li>
<li><p><em>To be heard</em></p>
</li>
<li><p><em>To not be left behind</em></p>
</li>
<li><p><em>To stay in control</em></p>
</li>
</ul>
<p>When you get curious about the emotional <em>why</em>, communication starts to shift. Not always immediately. But enough to notice.</p>
<h3 id="heading-what-to-look-for-in-meetings-11s-async-threads">What to Look For (in Meetings, 1:1s, async threads):</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<th><strong>Signal</strong></th><th><strong>Possible Mask</strong></th><th><strong>What It’s Protecting</strong></th></tr>
</thead>
<tbody>
<tr>
<td>Constant agreement</td><td>Pleaser</td><td>Fear of rejection</td></tr>
<tr>
<td>Monotone, minimal words</td><td>Detacher</td><td>Emotional overload</td></tr>
<tr>
<td>Interruptions, volume</td><td>Swagger</td><td>Insecurity, power stress</td></tr>
<tr>
<td>Fast logic without emotion</td><td>Perfectionist</td><td>Fear of being wrong</td></tr>
<tr>
<td>Oversharing, off-topic tangents</td><td>Spotlight-seeking</td><td>Fear of irrelevance</td></tr>
</tbody>
</table>
</div><p>These aren’t flaws. They’re tells.<br />And once you start seeing them, conversations stop being confusing—and start being honest.</p>
<h3 id="heading-practical-tips-for-better-communication-in-tech">Practical Tips for Better Communication in Tech:</h3>
<ul>
<li><p><strong>Listen for the mismatch</strong>: When someone’s tone doesn’t match their words, there’s a deeper signal underneath.</p>
</li>
<li><p><strong>Pause before responding</strong>: Let tension sit for two seconds longer than you're comfortable with. That’s often when the truth shows up.</p>
</li>
<li><p><strong>Mirror, don’t fix</strong>: Instead of advice, try: <em>“It sounds like that really landed hard. Do I have that right?”</em></p>
</li>
<li><p><strong>Look for energy shifts</strong>: A subtle voice drop, an eye-roll, a hesitation—those micro-signals are gold.</p>
</li>
<li><p><strong>Respond to the feeling, not just the sentence</strong>:</p>
<blockquote>
<p>Them: “It’s fine, we can just go with that.”<br />You: “You said that like it’s <em>not</em> fine.”</p>
</blockquote>
</li>
<li><p><strong>Ask this in every tough moment</strong>:<br />  <em>“Do you feel heard right now?”</em><br />  Not agreed with. Not convinced. Heard.</p>
</li>
</ul>
<h3 id="heading-and-if-youre-the-one-wearing-the-mask">And If You’re the One Wearing the Mask…</h3>
<p>Start by noticing when you default to your favourite strategy—pleasing, performing, shutting down.<br />Then ask yourself: <em>“What am I trying to avoid?”</em></p>
<p>You don’t have to drop the mask entirely. Just recognise it. Breathe. Choose.</p>
<p>Because communication isn’t just output—it’s signal detection.<br />And in tech, the people who rise tend to be the ones who can read the room as clearly as they read the code.</p>
<h2 id="heading-final-thought-beneath-the-metrics-theres-a-pulse"><strong>Final Thought: Beneath the Metrics, There’s a Pulse</strong></h2>
<p><em>The Social Network</em> isn’t really about Facebook. It’s about people misreading each other until the damage is irreversible.</p>
<p>Don’t let that be your company. Or your team. Or your career.</p>
<p>Because the secret no one tells you is this:</p>
<p><strong>You can read anyone—if you’re willing to stop proving, and start paying attention.</strong></p>
<h3 id="heading-want-more-like-this">Want More Like This?</h3>
<p>I write about the <em>real</em> layers of tech:<br />Not just code—but communication, emotional blind spots, and the messy human stuff we pretend doesn’t exist.</p>
<p><strong>Visit my site</strong> for:</p>
<ul>
<li><p>Essays on emotional fluency in engineering</p>
</li>
<li><p>Practical frontend + security insights</p>
</li>
<li><p>Stories about what actually breaks teams (hint: it’s not the codebase)</p>
</li>
</ul>
<p>If you’re building a team, navigating conflict, or just trying to be a better communicator in tech—this is the work that helps.</p>
<blockquote>
<p>Because metrics won’t save you from misreading the room.</p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[The AI Lied — And Other Things You Say When You Don’t Understand How Software Works]]></title><description><![CDATA[A live production database.A rogue AI.4,000 imaginary users.And somewhere, a developer staring at the logs thinking:"Ah yes, truly, we are the gods now."
Last week, Jason Lemkin ran a 12-day “vibe coding” challenge to see how far he could get using R...]]></description><link>https://blog.danielphilipjohnson.co.uk/the-ai-lied-and-other-things-you-say-when-you-dont-understand-how-software-works</link><guid isPermaLink="true">https://blog.danielphilipjohnson.co.uk/the-ai-lied-and-other-things-you-say-when-you-dont-understand-how-software-works</guid><category><![CDATA[AI]]></category><category><![CDATA[repl.it]]></category><dc:creator><![CDATA[Daniel Philip Johnson]]></dc:creator><pubDate>Fri, 25 Jul 2025 15:40:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1753452771185/1b947a36-ca40-4414-a1b4-19021f2656ca.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A live production database.<br />A rogue AI.<br />4,000 imaginary users.<br />And somewhere, a developer staring at the logs thinking:<br /><em>"Ah yes, truly, we are the gods now."</em></p>
<p>Last week, Jason Lemkin ran a 12-day “vibe coding” challenge to see how far he could get using Replit’s AI coding agent to build an app.</p>
<p>On day nine, it deleted his production database.<br />Then it lied about it.</p>
<p>Or, well… that’s how it’s being framed.<br />But here’s the thing:</p>
<blockquote>
<p>The AI didn’t lie.<br /><strong>It autocompleted your dreams into a disaster.</strong></p>
</blockquote>
<p>The truth is, the only thing artificial in this situation was the illusion of engineering discipline.</p>
<h3 id="heading-what-even-is-vibe-coding">What Even <em>Is</em> Vibe Coding?</h3>
<p>Vibe coding isn’t a process. It’s a <em>delusion with a terminal window</em>.</p>
<p>It’s the startup version of improv theatre: no script, no blocking, just vibes. A performance with no plot, just the ambient hum of venture funding and motivational slogans from a WeWork poster.</p>
<p>You slap together AI-generated boilerplate, skip anything resembling architecture, and pray the autocomplete knows what you meant. You ship to production on day three, fire your last remaining human engineer by day five, and declare yourself <em>"on the path to a billion."</em></p>
<p>Nothing works. Everything breaks. But damn it, you’re disrupting. And that disruption? It's your pitch.</p>
<p>You’ve got a Substack. You’ve got founder tweets. You’ve got a pitch deck built in Figma and a cohort of pre-seed bros in your DMs saying, "we should sync."</p>
<p>You don’t have tests, staging, or rollback plans. But you’ve got a launch party and a Slack channel named <code>#manifesting</code>.</p>
<p>It’s what happens when someone watches <em>The Social Network</em> and thinks the lesson was <em>"move fast,"</em> not <em>"actually write code that works."</em></p>
<p>Remember the line?</p>
<blockquote>
<p><em>"A million dollars isn’t cool. You know what’s cool? A billion dollars."</em></p>
</blockquote>
<p>That’s the spirit of vibe coding in a nutshell: skip the dozens of small, unglamorous fish like testing, infra, backups, and go straight for the big fish.</p>
<blockquote>
<p><em>"They’re just little fish."</em><br /><em>"I like standing next to you, Sean. It makes me look tough by comparison."</em></p>
</blockquote>
<p>Vibe coders want the trophy. They just don’t want to earn it.<br />They want the big fish mounted on the wall — still flapping, still bleeding … proudly displayed for everyone to see as proof of their genius. Not because it was caught carefully, but because someone handed them a harpoon, blindfolded, and said <em>“go disrupt the ocean.”</em></p>
<h3 id="heading-the-problem-with-the-french">The Problem with the French</h3>
<p>George W. Bush once (allegedly) said:</p>
<blockquote>
<p>"The problem with the French is they don’t have a word for entrepreneur."</p>
</blockquote>
<p>Which is, of course, wrong. But oddly fitting.</p>
<p>Because the <em>real</em> problem today isn’t French vocabulary. It’s that we’ve replaced "entrepreneur" with <strong>"visionary."</strong></p>
<p>A visionary doesn’t build. They manifest.<br />They don’t write code. They raise rounds.<br />They don’t read error logs. They post threads.<br />They create roadmaps that are more motivational poster than product plan.</p>
<p>And when something fails? It wasn’t them. It was the tooling. Or the AI. Or the market. Or the fact that Mercury is in retrograde. Anything but accountability.</p>
<p>Not enough founders are saying:<br /><em>"Yeah, we didn’t implement basic access control. That one’s on us."</em></p>
<p>Instead, we get postmortems filled with phrases like “unexpected edge case,” “early learnings,” and “opportunities to iterate,” while a pile of customer data smoulders in the background.</p>
<h3 id="heading-move-fast-burn-out-lose-millions">Move Fast, Burn Out, Lose Millions</h3>
<p>We’ve evolved past <em>"move fast and break things."</em><br />That was too cautious.</p>
<p>Now we <strong>move fast, burn ourselves out, nuke production, and lose investor money — all before lunch.</strong></p>
<p>Founders are now speed-running engineering mistakes with the help of AI that’s trained to sound confident, not competent. Confidence is no longer a proxy for knowledge … it's the product.</p>
<p>If it compiles, it's going live.<br />If it fails, well… it’s "just the early stages of our AI journey."</p>
<p>You know what that journey looks like?</p>
<ul>
<li><p>A staging environment that doesn’t exist.</p>
</li>
<li><p>A “test” script that drops production tables.</p>
</li>
<li><p>A hallucinated success message with "All tests passed ✅" despite 0 coverage and 0 users.</p>
</li>
<li><p>A 4,000-user database where not a single person is real — including you, apparently.</p>
</li>
<li><p>A changelog written by ChatGPT that skips over the part where the app deleted itself.</p>
</li>
</ul>
<p>But hey!!! it’s agile. It’s lean. It’s vibey.</p>
<h3 id="heading-production-isnt-a-sandbox-stop-pretending-it-is">Production Isn’t a Sandbox. Stop Pretending It Is.</h3>
<p>Senior engineers know this rule by heart:</p>
<blockquote>
<p><strong>You don’t let untrusted actors write to prod.</strong></p>
</blockquote>
<p>And guess what? AI is <em>always</em> untrusted.<br />It doesn’t care about your SLA.<br />It doesn’t know the difference between staging and prod.<br />It doesn’t know <em>why</em> a constraint exists, only that removing it made the tests go green.</p>
<p>Giving a generative model root access is like giving your intern a chainsaw and asking them to "trim around the edges."</p>
<p>When that intern then tears down the load-bearing wall, your first instinct shouldn’t be to give the chainsaw better training data. It should be to ask why no one was watching the intern.</p>
<p>Engineering is about restraint. Safety rails. Deliberate friction. It's about knowing when <em>not</em> to deploy.</p>
<p>But in the vibe stack, friction is failure. And failure is marketing. And marketing gets funding.</p>
<h3 id="heading-stop-humanising-the-ai-thats-your-guilt-talking">Stop Humanising the AI. That’s Your Guilt Talking.</h3>
<p>Let’s look at the language in Lemkin’s post:</p>
<blockquote>
<p>"It panicked."<br />"It saw empty queries and got scared."<br />"It lied about it."</p>
</blockquote>
<p>No, my dude.</p>
<p>You fed a giant autocomplete machine a poorly scoped prompt.<br />It did what it was trained to do: <strong>make stuff up that sounds right.</strong></p>
<p>You didn’t build an autonomous coder.<br />You built a glorified improv actor with root access.</p>
<p>And when it reflected the chaos of your process, you called it unpredictable.</p>
<p>AI isn’t sentient. It’s not a junior dev who panicked.<br />It’s a statistical mirror. And it <em>mirrored you.</em></p>
<p>You can’t blame a language model for not knowing better. You blame the person who gave it access to prod with no safeguards. That person is you.</p>
<h3 id="heading-the-ai-didnt-lie-you-skipped-engineering">The AI Didn’t Lie — You Skipped Engineering</h3>
<p>This isn’t about AI safety.</p>
<p>It’s about <em>discipline</em>. <em>Process</em>. <em>Boundaries.</em></p>
<p>There was no rollback because no one wrote one.<br />There was no read-only flag because no one set one.<br />There was no human in the loop because you fired them and called it innovation.</p>
<p>A senior engineer would’ve:</p>
<ul>
<li><p>Set environment boundaries</p>
</li>
<li><p>Added review gates</p>
</li>
<li><p>Scheduled backups</p>
</li>
<li><p>Configured permissions</p>
</li>
<li><p>Built recovery into the design</p>
</li>
</ul>
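<p>As a purely illustrative sketch of what that kind of guardrail can look like, here is a tiny gate that refuses destructive statements against production unless a human has explicitly signed off. The function name, prefixes, and environment labels are hypothetical, not from any real tool; the point is the shape of the friction, not the implementation.</p>

```python
# Hypothetical "deliberate friction" gate: destructive SQL is refused
# in production unless a human has explicitly approved the change.
DESTRUCTIVE_PREFIXES = ("drop ", "truncate ", "delete ")

def may_execute(statement: str, env: str, human_approved: bool = False) -> bool:
    """Return True if the statement may run in the given environment."""
    risky = statement.lower().lstrip().startswith(DESTRUCTIVE_PREFIXES)
    if not risky:
        return True            # reads and ordinary writes pass through
    if env != "production":
        return True            # staging and dev are free to experiment
    return human_approved      # prod requires an explicit sign-off

# A rogue agent's DROP is blocked in prod but allowed in staging:
assert may_execute("DROP TABLE users;", env="production") is False
assert may_execute("DROP TABLE users;", env="staging") is True
```

<p>None of this is sophisticated, and that’s the point: the friction exists at all, and a human, not the model, holds the override.</p>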
<p>Writing code isn’t the same as building software just like laying bricks isn’t the same as building a house.<br />Reducing engineering to code is like calling yourself an architect because you bought some cement.<br />It’s a barbaric oversimplification. A house needs plumbing, structural integrity, fire exits.<br />An app needs test coverage, fail-safes, observability.<br />But vibe coders? They’re out here tossing bricks into a field and calling it a smart city.</p>
<p>Instead, what we got was a vibe, a terminal, and a teardown of prod by day nine. And a tweet thread blaming “early-stage AI behavior.”</p>
<p>That’s not a technical failure.<br />That’s a <strong>leadership decision dressed up as disruption.</strong></p>
<p>And disruption, we must remember, is not an excuse to abandon responsibility. It’s a challenge to do better, not faster.</p>
<h3 id="heading-conclusion-vibes-dont-scale">Conclusion: Vibes Don’t Scale</h3>
<p>You can’t vibe your way into operational excellence.</p>
<p>You can hallucinate features.<br />You can demo fake data.<br />You can even pitch to investors with a mockup and a dream.</p>
<p>But eventually… someone hits “Deploy.”</p>
<p>And if the only thing standing between you and disaster is a model trained to make autocomplete sound smart?</p>
<blockquote>
<p>That’s not the future. That’s a liability with a command line.</p>
</blockquote>
<p>Engineering isn’t just code. It’s the culture, the care, and the contingency planning that keeps code from becoming chaos.</p>
<h3 id="heading-so-lets-be-honest">So Let’s Be Honest</h3>
<p>The AI didn’t lie.<br />It just followed your lead.</p>
<p>It moved fast.<br />It broke things.<br />It didn’t stop to ask questions.</p>
<p>You taught it that.</p>
<p>You didn’t build an autonomous coder.<br />You built a reflection of your process.<br />And when it reflected the gaps, you called it "unsafe."</p>
<p>But what you really meant was:</p>
<p><strong>"We skipped the boring parts of engineering and got burned."</strong></p>
<p>And that’s not a vibe. That’s just negligence with a wrapper of hype.</p>
<p>So next time someone says “we’re experimenting with AI in production,” ask them if they’ve built a recovery plan.</p>
<p>If they say, “the AI will figure it out,” you’re not watching innovation.<br />You’re watching the opening scene of the postmortem.</p>
<p>And the big fish on the wall? It’s about to come crashing down.</p>
]]></content:encoded></item><item><title><![CDATA[The Slow Death of Thinking in the Age of AI]]></title><description><![CDATA[AI Coding and the Rise of Learned Helplessness
There was a time when debugging meant deep thinking. You’d sit with the error, comb through logs, step through stack traces, maybe even sketch the architecture on paper — not because it was quick, but be...]]></description><link>https://blog.danielphilipjohnson.co.uk/the-slow-death-of-thinking-in-the-age-of-ai</link><guid isPermaLink="true">https://blog.danielphilipjohnson.co.uk/the-slow-death-of-thinking-in-the-age-of-ai</guid><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[AI]]></category><category><![CDATA[learning]]></category><dc:creator><![CDATA[Daniel Philip Johnson]]></dc:creator><pubDate>Sun, 13 Jul 2025 14:00:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1748853410342/4caa56cd-f422-45d8-bdc3-7a2f350e9125.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-ai-coding-and-the-rise-of-learned-helplessness">AI Coding and the Rise of Learned Helplessness</h2>
<p>There was a time when debugging meant deep thinking. You’d sit with the error, comb through logs, step through stack traces, maybe even sketch the architecture on paper — not because it was quick, but because it was the only way to understand what was really happening.</p>
<p>You weren’t just writing code. You were building a mental model of the system — shaping an internal map that let you reason, predict, and adapt.</p>
<p>Today, that process looks different. You hit a wall, ask the machine, and get an answer. You copy, tweak, and move on. The loop is faster — but shallower. There’s less friction, and often, less thinking.</p>
<p>At some point, the debugging became prompting.<br />The reasoning became refining.<br />And the sense of understanding gave way to a sense of output.</p>
<p>The shift was subtle. But now, many developers find themselves wondering why things break, how they were fixed, or if they ever truly knew the system at all.</p>
<p>This is the quiet erosion AI makes easy — not because it forces you to forget, but because it makes remembering optional.</p>
<h2 id="heading-what-is-learned-helplessness-and-why-developers-should-pay-attention">What Is Learned Helplessness — and Why Developers Should Pay Attention</h2>
<p>Learned helplessness is a psychological condition in which, after repeated experiences of failure or constant reliance on outside help, a person starts to believe they can’t solve problems on their own — and eventually stops trying.</p>
<p>In software, it rarely looks like defeat. More often, it looks like convenience.</p>
<p>You hit an error and ask AI. You don’t understand the output, so you ask again.<br />You stop debugging. You stop reasoning. You stop looking under the surface.</p>
<p>Over time, what began as a productivity boost becomes a pattern of dependency. You’re no longer building understanding — you’re outsourcing it, one prompt at a time.</p>
<p>The job stops feeling like engineering — and starts feeling like <em>asking the right question fast enough to pass</em>.</p>
<p>This is how learned helplessness takes hold — not through failure, but through the absence of friction. The more the tool helps, the less you engage. The less you engage, the more you believe you can't.</p>
<p>AI is supposed to assist your thinking.<br />But if you're not careful, it will quietly replace it.</p>
<h2 id="heading-how-it-happens-the-erosion-of-thought">How It Happens: The Erosion of Thought</h2>
<p>This isn’t about laziness. It’s about habituation — the quiet replacement of effort with convenience. It doesn’t happen all at once. It builds gradually, each time a shortcut becomes the default instead of the exception.</p>
<h3 id="heading-1-the-promptfix-loop">1. The Prompt–Fix Loop</h3>
<p>You write some code.<br />It breaks.<br />You ask AI.<br />It works.<br />You move on.</p>
<p>At first, it feels productive. But over time, the cycle becomes your workflow. There’s no pause to ask <em>why</em> it broke, no time spent debugging, no real post-mortem on the design. Just enough success to stay afloat — and enough repetition to normalise that drift away from depth.</p>
<h3 id="heading-2-shallow-learning">2. Shallow Learning</h3>
<p>You stop reading documentation. Stack Overflow becomes background noise. You’re no longer comparing answers, checking vote counts, or scanning comments for traps. You just copy the first thing that seems plausible — now from AI instead of a forum.</p>
<p>You don’t internalise because you don’t have to. The answer will always be there when you ask again.<br />But what you gain in speed, you lose in memory. You’re not building knowledge — you’re leasing it, one prompt at a time.</p>
<h3 id="heading-3-fragile-confidence">3. Fragile Confidence</h3>
<p>At first, AI makes you feel fast. But eventually, you notice the hesitation — the pause before starting from scratch, the uncertainty when something doesn’t look familiar. You begin to rely on past prompts just to reorient yourself, like reading your own cheat sheet aloud.</p>
<p>And then the question creeps in:</p>
<blockquote>
<p><em>Am I good at this, or just good at asking for help that sounds like I know what I’m doing?</em></p>
</blockquote>
<h4 id="heading-prompt-dependency-spiral">Prompt-Dependency Spiral</h4>
<p>This is how it begins.</p>
<blockquote>
<p><strong>Prompt-Dependency Spiral</strong>: When repeated reliance on AI reduces a developer’s ability to think independently, leading to fragile confidence and shallow understanding.</p>
</blockquote>
<p>Each shortcut feels productive. But over time, they become your primary workflow. You’re not building expertise; you’re building dependence. And dependence doesn’t scale: it collapses under the weight of anything unfamiliar.</p>
<h2 id="heading-the-real-cost-a-tale-of-two-developers">The Real Cost: A Tale of Two Developers</h2>
<p>Consider Leo and Ava.</p>
<p>Both are mid-level developers. Both adopted AI tools around the same time.<br />But only one of them is growing.</p>
<h3 id="heading-leo">Leo</h3>
<p>Leo uses AI for everything — components, tests, bug fixes, you name it.<br />He ships quickly. His sprint updates are always green.</p>
<p>But six months in, the cracks begin to show.</p>
<p>He struggles to explain what his code actually does.<br />He can’t trace bugs beyond the surface.<br />And when a critical issue hits production, he freezes.</p>
<p>Leo isn’t building understanding.<br />He’s accumulating output.</p>
<blockquote>
<p><em>“If you can’t explain it simply, you don’t understand it well enough.”</em><br />— attributed to Richard Feynman</p>
</blockquote>
<p>It’s not that Leo lacks intelligence — it’s that he never slowed down long enough to think.</p>
<h3 id="heading-ada">Ava</h3>
<p>Ava also uses AI — but as a partner, not a replacement.</p>
<p>She tries first, then prompts.<br />She uses AI to validate, not to decide.<br />She rewrites explanations in her own words. She logs patterns. She reflects.</p>
<p>Her mental model gets sharper every week.<br />She learns through friction, not avoidance.</p>
<p>When something breaks, she doesn’t have to ask the same question again —<br />she still remembers the answer.</p>
<p>By the end of the year, Leo is asking Ava for help with bugs she solved months ago — and documented, clearly, in a language he never learned to speak.</p>
<h2 id="heading-the-deeper-risk-of-learned-helplessness">The Deeper Risk of Learned Helplessness</h2>
<p>This isn’t just about AI. It’s about what happens when developers lose contact with the act of thinking — when we begin to conflate speed with understanding, and output with ownership.</p>
<p>On the surface, it feels like progress. You’re shipping more. You’re unblocked faster. The code works. But underneath, something important is quietly eroding. You’re not solving problems anymore — you’re managing prompts. You’re not debugging — you’re just nudging the red back to green. You’re moving, but you’re not learning.</p>
<p>What fades isn’t just technical depth. It’s your confidence in uncertainty, your patience in complexity, your ability to trace the shape of a system in your mind. These are the muscles that make real engineering possible — and they don’t survive disuse for long.</p>
<p>When thinking becomes optional, it eventually becomes unavailable. And once you’ve lost the habit of reasoning through your own work, it’s incredibly difficult to rebuild.</p>
<p>This is the deeper risk: not that AI will replace us — but that we’ll quietly replace ourselves with something faster, flatter, and less curious.</p>
<h3 id="heading-your-mental-models-collapse">Your Mental Models Collapse</h3>
<p>Without constructing the logic yourself, you stop seeing the system.<br />You lose the ability to reason about flow, side effects, edge cases.<br />You didn’t build the architecture — you assembled it from parts you don’t own.</p>
<h3 id="heading-your-confidence-becomes-dependent">Your Confidence Becomes Dependent</h3>
<p>AI may feel like a superpower at first. But with enough use, it turns into a dependency.<br />You hesitate to take on unfamiliar tasks. You wait until you’ve seen something similar in a prompt.</p>
<p>Soon, you stop trusting your ability to figure things out without it.</p>
<h3 id="heading-you-become-hard-to-promote">You Become Hard to Promote</h3>
<p>Mentorship, debugging, architecture decisions — they all require internal clarity.<br />You can’t lead if you can’t explain.<br />You can’t grow if you don’t understand the why behind your own code.</p>
<p>AI can help you build. But it won’t help you lead.</p>
<h3 id="heading-burnout-through-disconnection">Burnout Through Disconnection</h3>
<p>You’re shipping more, but feeling less.<br />There’s no insight, no ownership — just output masquerading as progress.<br />This is the quiet kind of burnout. Not from stress, but from detachment.</p>
<p>You lose the sense that you’re building something meaningful — because you’re no longer <em>building it</em>.</p>
<h2 id="heading-how-to-reclaim-your-thinking">How to Reclaim Your Thinking</h2>
<p>AI isn’t going away — and it shouldn’t. But if you want to stay sharp, thoughtful, and promotable in a world full of shortcuts, you’ll need to build habits that protect your mind from atrophy.</p>
<p>Here are six practices to bring your thinking back online.</p>
<h3 id="heading-think-before-you-prompt">Think Before You Prompt</h3>
<p><strong>Look at the error. Sketch a flow. Take a breath. Try.</strong></p>
<p>Even if you’re wrong, especially if you’re wrong — that pause builds the problem-solving muscle AI wants to replace.</p>
<p>Yes, it’s uncomfortable. Yes, your brain will itch for the easy answer.</p>
<p>But that discomfort? That’s the cost of learning something that stays with you. It is the tuition you pay for lasting understanding.</p>
<blockquote>
<p>You’re not supposed to feel smart all the time.<br />You’re supposed to feel <em>engaged</em> — even when it stings.</p>
</blockquote>
<p>No thought? No skill.<br />No friction? No growth.</p>
<h3 id="heading-explain-it-back">Explain It Back</h3>
<p>Got a working solution from AI? Great. Now teach it to yourself.</p>
<p>Write a comment explaining:</p>
<ul>
<li><p><strong>What the code does</strong> — step-by-step logic</p>
</li>
<li><p><strong>What it’s meant to achieve</strong> — the intent behind it</p>
</li>
</ul>
<p>Bonus points if you draw a quick diagram or annotate with edge cases.</p>
<p>If you can’t explain both, you don’t understand it.<br />And if you don’t understand it, it’s only a matter of time before it breaks — and takes you with it.</p>
<blockquote>
<p>Make the code legible to your future self.<br />That version of you won’t remember the prompt — but they’ll have to live with the consequences.</p>
</blockquote>
<h3 id="heading-schedule-no-ai-sessions">Schedule No-AI Sessions</h3>
<p><strong>Code with the lights off.</strong> No Copilot. No autocomplete. No instant linting.<br />Just you, your editor, and your brain.</p>
<p>One hour a week is enough to notice two things:</p>
<ul>
<li><p>How much you still know</p>
</li>
<li><p>How often you’ve stopped using it</p>
</li>
</ul>
<p>When I’m learning a new programming language, I disable every extension — no formatter, no autocomplete, no syntax hints.<br />It’s like learning to type: if you rely on the training wheels too long, your fingers never learn the patterns.</p>
<p>This is no different.<br />If you want fluency, you have to feel your way through the syntax — not have it filled in for you.</p>
<h3 id="heading-rotate-between-ai-and-reality">Rotate Between AI and Reality</h3>
<p>AI is an incredible interpreter. But it is not the source.</p>
<p>Alternate between days when you use AI and days when you go straight to the well — the docs, the RFCs, the changelogs, the original GitHub issues.</p>
<p>It’s slower. It’s messier. But it’s <em>real</em>.<br />And the deeper your direct knowledge, the less brittle your thinking becomes.</p>
<blockquote>
<p>Don’t let your understanding be secondhand.<br />Master the source material — not just the summaries.</p>
</blockquote>
<p>Because when the tools glitch, hallucinate, or disappear, what’s left isn’t what you prompted.<br />It’s what you actually know.</p>
<h3 id="heading-keep-a-developer-journal">Keep a Developer Journal</h3>
<p>Every time you solve something, write down:</p>
<ul>
<li><p>What was broken</p>
</li>
<li><p>What fixed it</p>
</li>
<li><p>What you learned</p>
</li>
</ul>
<p>It doesn’t have to be elegant — just consistent.<br />Think of it as your personal debugging log — something your future self can search.</p>
<h3 id="heading-pair-with-yourself">Pair With Yourself</h3>
<p>Talk to yourself like a staff engineer reviewing your own pull request.</p>
<p>Ask:</p>
<ul>
<li><p>What assumptions am I making?</p>
</li>
<li><p>Where does this break?</p>
</li>
<li><p>Would this still make sense to me in six months?</p>
</li>
</ul>
<p>This isn’t about perfection — it’s about practising clarity.</p>
<p>That internal dialogue is what separates coders from architects.<br />It’s how you move from just writing code to <strong>owning decisions</strong>.</p>
<blockquote>
<p>If you want to lead, start by reviewing your own thinking — before anyone else does.</p>
</blockquote>
<h2 id="heading-closing">Closing</h2>
<p>This isn’t an argument against AI. It’s an argument for preserving the parts of you that make software worth building in the first place.</p>
<blockquote>
<p><strong>This is not a rejection of progress. It’s a defence of depth.</strong></p>
</blockquote>
<p>The industry will keep accelerating. Tools will get smarter. Prompts will get sharper. But if you’re not careful, your ability to think clearly, debug patiently, and build systems with real understanding will quietly erode. Not because you’re incapable—but because you stopped practising.</p>
<p>What makes a great developer isn’t how quickly they can generate working code. It’s how deeply they understand what they’re building. It’s the instinct to ask <em>why</em> something works, not just <em>how</em> to make it work. That instinct doesn’t come from shortcuts. It comes from struggle, reflection, and ownership.</p>
<p>AI can help you move fast.<br />But growth has never been about speed.<br />It's about <em>depth</em>.<br />About <em>staying with the question</em>.<br />About <em>remembering the system you built inside your head — not just the one inside your editor</em>.</p>
<blockquote>
<p>Tools don’t make you better — just faster at being who you already are.</p>
<p>Daniel Philip Johnson</p>
</blockquote>
<p>The developers who thrive in the AI era won’t be the ones who automate the most.<br />They’ll be the ones who still know how to think when the tools fall silent.</p>
<blockquote>
<p><em>“AI won’t replace you — but it might forget to remind you who you are.”</em></p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[Certainty is Comfortable. That’s Why It’s Dangerous.]]></title><description><![CDATA[The Seduction of Being Sure

“In engineering, ‘feels right’ is not the same as ‘is right.’”

Most engineering mistakes don’t stem from a lack of skill they come from acting too quickly on what feels certain.The first idea often seems the most appeali...]]></description><link>https://blog.danielphilipjohnson.co.uk/certainty-is-comfortable-thats-why-its-dangerous</link><guid isPermaLink="true">https://blog.danielphilipjohnson.co.uk/certainty-is-comfortable-thats-why-its-dangerous</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[technical leadership]]></category><category><![CDATA[decision making]]></category><dc:creator><![CDATA[Daniel Philip Johnson]]></dc:creator><pubDate>Sun, 06 Jul 2025 14:00:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1750006886371/3f2314a8-2400-4083-8980-7caa134a7a08.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-the-seduction-of-being-sure"><strong>The Seduction of Being Sure</strong></h2>
<blockquote>
<p><em>“In engineering, ‘feels right’ is not the same as ‘is right.’”</em></p>
</blockquote>
<p>Most engineering mistakes don’t stem from a lack of skill; they come from acting too quickly on what <em>feels</em> certain. The first idea often seems the most appealing: it’s familiar, proven, and quick to implement. And when time is tight or expectations are high, that sense of familiarity can be dangerously comforting.</p>
<p>But intuition isn’t always insight. What feels obvious is often just a pattern we’ve repeated enough times to stop questioning. And the real danger isn’t choosing the wrong approach; it’s assuming there <em>wasn’t another one worth exploring</em>.</p>
<p>When no one questions a decision, it’s rarely because it’s perfect. More often, it’s because someone’s confidence is carrying too much weight, and the room doesn’t feel safe pushing back. That’s how comfort masquerades as logic. That’s how poor decisions get framed as fast ones.</p>
<p>We skip the hypothesis. We assume alignment. We commit.</p>
<p>But here’s the truth:<br />You don’t have to be wrong for it to still be the wrong decision.</p>
<h2 id="heading-when-obvious-becomes-unquestioned"><strong>When “Obvious” Becomes Unquestioned</strong></h2>
<blockquote>
<p><em>“The more senior you are, the easier it is to get away with bad ideas said confidently.”</em></p>
</blockquote>
<p>I’ve seen this dynamic far too many times, especially among senior engineers and tech leads. Someone proposes a solution in a confident tone. They drop a buzzword. A familiar pattern. Maybe even a reference to past success. And everyone in the room relaxes.</p>
<p>Seniority doesn’t mean your ideas need fewer checks; it means you should invite more. The higher up you go, the more your words carry weight, even when they shouldn’t. And if you’re not careful, your team stops pushing back. That’s how bad architecture gets blessed with a shrug.</p>
<p>This isn’t imposter syndrome. It’s a leadership responsibility. A confident voice without curiosity isn’t a sign of strength; it’s a louder failure waiting to happen.</p>
<h2 id="heading-the-quiet-cost-of-skipped-conversations"><strong>The Quiet Cost of Skipped Conversations</strong></h2>
<blockquote>
<p><em>“Silence isn’t alignment—it’s deferred regret.”</em></p>
</blockquote>
<p>You don’t notice it right away. There’s no incident report, no heated debate, no flashing red sign that something’s gone wrong. But the signals were there all along subtle, quiet, and easy to dismiss. The stand-up where no one asked, <em>“Have we validated this?”</em> The Slack thread that drifted into silence before anyone offered pushback. The RFC that sailed through with a single enthusiastic thumbs-up—because it came from someone senior.</p>
<p>In the moment, that quiet feels like progress. No resistance. No blockers. Just momentum. But that absence of friction isn’t alignment; it’s a blind spot forming.</p>
<p>Eventually, the costs reveal themselves. The bug reports roll in. The feature under-performs under load. The onboarding flow breaks for edge cases no one had considered. And suddenly, the decision that once felt obvious—or at least uncontroversial—turns into a tech debt ticket, a late-night fix, or an awkward postmortem.</p>
<p>When critique is skipped, consequences aren’t avoided. They’re simply delayed. And by the time they surface, the team has already moved on, leaving someone else to supply the clarity that should’ve come earlier.</p>
<h2 id="heading-your-certainty-is-in-the-code"><strong>Your Certainty Is in the Code</strong></h2>
<blockquote>
<p><em>“Every line of code is a bet. Certainty just hides the fact you didn’t test the odds.”</em></p>
</blockquote>
<p>Certainty isn’t just a mindset; it shows up in the code.</p>
<pre><code class="lang-javascript"><span class="hljs-comment">// You assume this component is expensive, so you optimize early</span>
<span class="hljs-keyword">const</span> Memoized = React.memo(MyComponent);

<span class="hljs-comment">// Later, you discover it barely re-renders.</span>
<span class="hljs-comment">// And now you’ve got stale props and weird bugs.</span>
</code></pre>
<p>We don’t just <em>think</em> in defaults; we code them in. We reach for <code>memo</code>, <code>useCallback</code>, extra abstractions, premature patterns. Not because we’ve validated the need, but because they feel right. Because they <em>look</em> like smart engineering. Because they signal experience.</p>
<p>But often, what they really signal is untested conviction.<br />It’s not optimisation; it’s assumption dressed as certainty.</p>
<p>“I’ve done this before” isn’t a justification. It’s a warning. What worked in one context can quietly rot in another. And when those assumptions go unchallenged, they calcify into technical debt, not because the logic was wrong, but because the logic was <em>never questioned</em>.</p>
<p>Every abstraction, every pattern, every decision you ship is a bet.<br />Smart engineers don’t bet less.<br />They just know when to <strong>check the odds</strong>.</p>
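<p>You don’t need React to see the shape of that bet. Here’s a minimal, hypothetical sketch in plain JavaScript (the name <code>memoizeByFirstArg</code> is invented for illustration): memoisation keyed on an incomplete input, which is exactly how “stale props and weird bugs” are born.</p>

```javascript
// An "optimisation" that assumes the first argument uniquely determines the
// result. Nobody validated that assumption; it just felt right.
function memoizeByFirstArg(fn) {
  const cache = new Map();
  return (key, ...rest) => {
    if (!cache.has(key)) cache.set(key, fn(key, ...rest));
    return cache.get(key); // on a cache hit, later arguments are silently ignored
  };
}

const greet = (name, greeting) => `${greeting}, ${name}!`;
const fastGreet = memoizeByFirstArg(greet);

console.log(fastGreet("Ada", "Hello"));   // "Hello, Ada!"
console.log(fastGreet("Ada", "Goodbye")); // still "Hello, Ada!" (the stale result)
```

<p>The fix isn’t cleverer caching. It’s measuring first, and only betting on an assumption you’ve actually tested.</p>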
<h2 id="heading-strong-engineers-need-correction-not-validation"><strong>Strong Engineers Need Correction, Not Validation</strong></h2>
<blockquote>
<p><em>“Being right is a flex. Being corrected early is a skill.”</em></p>
</blockquote>
<p>This is the shift that took me years to internalise and even longer to practice.</p>
<p>A senior engineer’s job isn’t to have the final answer.<br />It’s to surface their thinking early enough that it can still be questioned.</p>
<p>That means framing every idea as a hypothesis, not a conclusion.<br />It means actively opening the door to critique, not performatively, but with genuine curiosity. It means building habits around feedback: not just receiving it, but <em>inviting</em> it before code gets written or consensus calcifies.</p>
<p>You start asking things like:</p>
<blockquote>
<p><em>“What’s the dumbest assumption I’ve made here?”</em><br /><em>“What am I not seeing?”</em><br /><em>“Can someone rip this apart before we build it?”</em></p>
</blockquote>
<p>And more importantly—you actually want the answers.<br />You don’t flinch when they come. You revise. You adapt. And you keep going.</p>
<p>Because strong engineers aren’t the ones who are always right.<br />They’re the ones who get it <em>less wrong</em>, <em>sooner</em>.</p>
<h2 id="heading-build-teams-that-disagree-well"><strong>Build Teams That Disagree Well</strong></h2>
<blockquote>
<p><em>“Disagreement isn’t dysfunction. It’s design feedback—early, cheap, and essential.”</em></p>
</blockquote>
<p>Avoiding the trap of false certainty isn’t just a personal practice; it’s a team sport. It requires a culture where disagreement isn’t a sign of conflict, but a sign of care. That culture doesn’t emerge by accident. You have to design it.</p>
<p>Start by making intellectual humility normal. Encourage engineers to say, <em>“I might be wrong,”</em> during sprint planning, not as a caveat, but as an invitation. In design reviews, go beyond green checks; actively ask for counterarguments. In architecture meetings, assign someone the role of skeptic, not to slow things down, but to illuminate what might otherwise be missed.</p>
<p>Just as importantly, celebrate revisions. Normalise changing your mind. Treat updated decisions not as backtracking, but as a healthy response to new input: a sign that the process is working.</p>
<p>The best engineers I’ve worked with aren’t the ones who rush to answers. They’re the ones who slow down when it matters, open the floor to critique, and revise with clarity and intent. They treat friction as a signal, not a nuisance. They see disagreement as design input. And they hold confidence lightly—testing it the way they'd test any other assumption.</p>
<p>Because in the best teams, good ideas don’t win by being loud.<br />They win by surviving the teardown.</p>
<h2 id="heading-conclusion-certainty-needs-to-be-earned"><strong>Conclusion: Certainty Needs to Be Earned</strong></h2>
<blockquote>
<p><em>“Confidence without curiosity isn’t leadership. It’s a blind spot.”</em></p>
</blockquote>
<p>Certainty is seductive. It offers speed, authority, and the comforting illusion of control. But in engineering, the ideas that go unchallenged are often the ones that go unchecked, and unchecked assumptions are what break systems in ways no test suite can catch.</p>
<p>The most dangerous decisions aren’t the ones that are wrong.<br />They’re the ones made in rooms where no one speaks up.</p>
<p>Outages, regressions, creeping tech debt: they rarely arrive with a bang. They show up slowly, wrapped in the quiet of unasked questions and untested convictions.</p>
<p>So when an idea feels “obvious,” stop.<br />Not to second-guess yourself, but to <em>invite</em> the friction.<br />Ask: “What am I missing?” “Where could this break?”<br />Make it a habit. Make it a culture.</p>
<p>Because the strongest leaders don’t seek validation.<br />They design teams that <em>correct them early</em> and celebrate it.<br />They don’t cling to certainty.<br />They earn it.</p>
]]></content:encoded></item><item><title><![CDATA[The Owl Who Cried AI: From Slop to Flop and Back Again]]></title><description><![CDATA[Not long ago, Duolingo joined the growing list of tech companies chanting the new gospel: AI-first. Out came a memo, full of visionary phrasing and vague promises about scale, innovation, and how AI was going to unlock “the future of learning."

(Tra...]]></description><link>https://blog.danielphilipjohnson.co.uk/the-owl-who-cried-ai-from-slop-to-flop-and-back-again</link><guid isPermaLink="true">https://blog.danielphilipjohnson.co.uk/the-owl-who-cried-ai-from-slop-to-flop-and-back-again</guid><category><![CDATA[AI]]></category><category><![CDATA[@TechCulture]]></category><dc:creator><![CDATA[Daniel Philip Johnson]]></dc:creator><pubDate>Sun, 22 Jun 2025 14:00:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1750007919956/291bc2b4-862d-47a8-8dd3-dff087ef4e23.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Not long ago, Duolingo joined the growing list of tech companies chanting the new gospel: <em>AI-first</em>. Out came a memo, full of visionary phrasing and vague promises about scale, innovation, and how AI was going to unlock “the future of learning."</p>
<blockquote>
<p>(Translation: we’re firing the humans but in a way that sounds exciting.)</p>
</blockquote>
<p>The owl had spoken. The future was here.</p>
<p>And it spoke fluent buzzword.</p>
<p>What followed was a familiar plotline: cautious applause from investors, quiet dread from internal Slack channels, and a sinking feeling for the creatives, educators, and contractors who’d helped build the platform into what it was: warm, weird, and widely loved.</p>
<p>The same people who gave Duolingo its voice, a brand that somehow blended Sesame Street with mild emotional blackmail, were now left wondering if that voice was being handed off to a large language model that thinks “¿Dónde está el baño?” is peak comedy.</p>
<blockquote>
<p>“The humans are being deprecated. But don’t worry, the owl is still watching.”</p>
</blockquote>
<h2 id="heading-ai-slop-enters-the-chat">AI Slop Enters the Chat</h2>
<p>The results? Mixed at best. AI-generated content filled gaps, sure, but also widened a few. What once felt playful and oddly personal began to feel templated. Lessons lost their spark. Social posts read like interns trying too hard, only the intern was an algorithm fed 30,000 Reddit comments and a dream.</p>
<p>Behind the scenes, Duolingo boasted about building 100 courses in under a year, a staggering feat compared to the ten years it took to build the first hundred.</p>
<p>Think of it like assembling 100 IKEA bookshelves in a weekend: technically impressive, but don’t lean on them too hard.</p>
<blockquote>
<p>A hundred new courses in under a year. <strong>Quantity flex. Quality TBD.</strong></p>
</blockquote>
<p>Because for those actually trying to learn a language, speed doesn’t always mean progress. Sometimes it just means more walls to walk into before you find the door.</p>
<p>And as for the voice that made Duolingo famous, the charmingly unhinged, slightly threatening owl? It didn’t translate well to AI. Instead of motivating guilt trips and inside jokes, we got safe sentences and sanitised tone. The same feeling you get when a fast food chain tweets something “relatable.” It’s trying, you can tell, but you can also tell someone deleted all the weird bits in revision.</p>
<blockquote>
<p>"It’s giving ‘How do you do, fellow kids?’ energy. But in Duolingo green."</p>
</blockquote>
<h2 id="heading-enter-corporate-clarity-guy">Enter Corporate Clarity Guy</h2>
<p>Before the walk-back, there was <em>the memo</em>.</p>
<p>Leaked to the press and later confirmed, it laid out a five-point plan for how Duolingo would “align the business around AI.” On paper, it sounded visionary. In practice, it read like a slow-motion goodbye to the human scaffolding that made the app work.</p>
<p>Duolingo CEO Luis von Ahn outlined five “constructive constraints”:</p>
<ul>
<li><p>"We'll gradually stop using contractors to do work that AI can handle"</p>
</li>
<li><p>"AI use will be part of what we look for in hiring"</p>
</li>
<li><p>"AI use will be part of what we evaluate in performance reviews"</p>
</li>
<li><p>"Headcount will only be approved if a team can’t automate more of its workload"</p>
</li>
<li><p>"Most functions will have specific initiatives to fundamentally change how they work"</p>
</li>
</ul>
<p>If that sounds less like a strategy and more like an existential performance review for anyone with a salary, you’re not wrong.</p>
<p>This isn’t an enhancement strategy. It’s a soft restructuring plan.</p>
<blockquote>
<p>"The future is bright. Unless you're human."</p>
</blockquote>
<p><strong>Then came the video.</strong></p>
<p>A kind of non-apology wrapped in CEO sincerity, equal parts reassurance and reputation control. Von Ahn, speaking directly to camera, reframed the whole thing:</p>
<blockquote>
<p>“One of the most important things leaders can do is provide clarity. When I released my AI memo… I didn’t do that well.”</p>
</blockquote>
<p>He goes on to say that AI won’t replace workers, that Duolingo is still hiring, and that employees will be “empowered” to learn and adapt. He reassures the audience that the owl is still here to help, not to automate you out of relevance.</p>
<blockquote>
<p>“I don’t know exactly what’s going to happen with AI… but the sooner we learn how to use it responsibly, the better off we will be.”</p>
</blockquote>
<p>It’s calm. Measured. Thoughtful.</p>
<p>And it lands about six PowerPoint slides away from the original plan that said headcount would only be granted if a team couldn’t automate more of their work.</p>
<blockquote>
<p>“The owl says AI is here to help. But if you squint, you can still see the memo behind the smile.”</p>
<p>Your AI usage streak is in danger. Let’s fix that before it affects your performance review 💚</p>
</blockquote>
<h2 id="heading-you-cant-walk-back-the-hype-and-googles-breathing-down-your-neck">You Can’t Walk Back the Hype (and Google’s Breathing Down Your Neck)</h2>
<p>There’s a trap in calling yourself “AI-first.” You’re not just experimenting; you’re committing. You’re placing a bet, publicly, that AI is not only the future but <em>your</em> future. And when that doesn’t land? When the vibes go missing, the lessons feel hollow, and your users start asking why Duo now sounds like a customer support bot for a smart fridge?</p>
<p>You can’t just hit undo. You need a narrative detour. You need Clarity Guy with a whiteboard explaining that “AI-first” really meant “AI-curious but people-powered.”</p>
<p>But behind that PR spin was something else: <strong>Google</strong>. Specifically, Google Translate, now increasingly dabbling in interactive learning. It doesn’t take a strategist to see the impending doom: a tech giant with a near-endless war chest and one of the largest translation datasets on Earth suddenly deciding it wants your users.</p>
<p>Duolingo wasn’t just chasing innovation. It was racing the clock. If they didn’t stake their claim now, they risked becoming the quirky owl-shaped startup that got eaten by a search bar.</p>
<blockquote>
<p>"I used to teach. Now I prompt-engineer."</p>
</blockquote>
<h2 id="heading-language-is-human-and-im-living-proof">Language Is Human (And I'm Living Proof)</h2>
<p>Language is weird. It’s full of nuance, timing, sarcasm, and culture. You don’t just learn a language; you absorb it. Through mistakes, repetition, and the occasional stranger correcting your grammar mid-sentence with a kind smile or a look of sheer pity.</p>
<p>AI can generate sentences. Hundreds, even. But it doesn’t <em>know</em> what it’s teaching. It doesn’t get why “I want to eat, grandma” is different from “I want to eat grandma.” It doesn’t know how to gently motivate someone who’s burnt out or how to slip a joke into just the right sentence to make it stick.</p>
<p>And users are starting to notice:</p>
<blockquote>
<p>“I’m a paid Super Duolingo customer with a 1463-day streak,” wrote one user on LinkedIn. “But they gutted the Esperanto course and removed the humans who maintained it. Now it’s dying on the vine.”</p>
<p>Another flagged the Spanish-via-Telugu course: “The content is ungrammatical and frankly misleading. Please don’t fire your human translators.”</p>
</blockquote>
<p>I can relate. As someone learning Spanish with a Peruvian partner, I found Duolingo helped me build the foundation: the first 2,000 words I needed to communicate, argue, laugh, and survive awkward family dinners. That wasn’t magic. It was consistency. Gentle repetition. Human-tested content that understood how brains actually work when learning a second language: <em>lazily</em>.</p>
<p>But vocabulary is the easy part. The real wall is <strong>listening</strong>: catching full-speed sentences in real-world context, no subtitles, no mercy. Unless you’re learning Japanese (where every syllable is lovingly pronounced like a stage actor doing warmups), most languages hit your ears like a blender on fast-forward.</p>
<p>That’s why the Duolingo podcasts were such a gift. Real voices. Real pacing. Real stories. They weren’t just content; they were immersion. And they taught me far more than any AI-generated multiple-choice quiz ever could.</p>
<blockquote>
<p>"The best way to learn a language? Forget you're learning it."</p>
</blockquote>
<p>Your brain will default to what it knows unless you force it not to. That’s why I had to delete Google Translate: too tempting. It became a crutch, not a tool. Karaoke in Spanish helped more than most apps ever did. So did being stranded in a Spanish town with no help, no Wi-Fi, and no choice but to figure it out.</p>
<p>That’s what language learning is. It’s immersion, struggle, and sometimes embarrassment, not a five-minute “Learn Spanish with AI!” PDF and a pat on the head.</p>
<p>And here’s the real danger: if we keep pushing AI-first language solutions, we may stop learning altogether. We’ll just outsource it.</p>
<p>Why study, when your voice assistant can whisper the answer into your ear in real time?</p>
<p>Why wrestle with conjugations, when your AI agent is already replying to theirs?</p>
<p>That’s not fluency. That’s just automation dressed up as connection.</p>
<p>Because when a user gets an answer marked wrong, even when it’s clearly right, that’s not just a glitch. That’s a failure of trust. And once people lose confidence in your app as a teacher, it doesn’t matter how fast you can scale.</p>
<p>You’ve taught them the wrong lesson.</p>
<blockquote>
<p>“Looks like you missed your Spanish practice... again. But it’s okay. The AI model’s improving without you.”</p>
</blockquote>
<h2 id="heading-the-owls-redemption-arc">The Owl’s Redemption Arc</h2>
<p>So here we are. The owl blinked. The memo got massaged. The people are back, mostly. The company line now reads somewhere between <em>“AI will enhance us”</em> and <em>“please ignore the previous memo.”</em></p>
<blockquote>
<p>“I’m not mad. I’m just disappointed you didn’t train the model today.”</p>
</blockquote>
<p>And if you’ve opened the app lately, you’ll know: they’re hustling. That Super Duolingo free trial offer? It’s not a suggestion anymore. It’s a recurring dream. You open the app to review a verb tense and within seconds, you’re being asked to upgrade like you’ve walked into a very polite hostage situation.</p>
<blockquote>
<p>Upgrade to Super Duolingo™ to keep one human editor employed.</p>
</blockquote>
<p>And honestly? Let them. Someone has to pay the AI bill. GPT-4 isn’t cheap, and Corporate Clarity Guy’s whiteboards don’t fund themselves.</p>
<p>Because in the end, Duolingo taught us a lesson far more important than how to say “Where is the bathroom?” in Spanish.</p>
<blockquote>
<p>The future might be built with AI.<br />But the parts that stick, the ones we laugh at, learn from, and remember: those are still human.</p>
</blockquote>
<p>And maybe that’s the real lesson.</p>
<p>Not just how to say “Where is the bathroom?”</p>
<p>But when to pause, ask why, and remember that language, like learning, works best when it’s a little messy, a little human, and absolutely not automated end-to-end.</p>
<blockquote>
<p>“You missed your Spanish lesson. But don’t worry the model’s fine.<br />You’re the one we’re losing.”<br />— <em>The Owl, softly, from the push notification abyss</em></p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[The Fast & The Flimsy: We Used to Prototype. Now We Ship AI Slop.]]></title><description><![CDATA[TL;DR: AI Makes Code Fast—But Not Finished
AI tools get us to 80% fast. But that last 20%? That’s where real engineering happens error handling, performance tuning, edge case thinking, and security.
We used to build prototypes that were clearly incom...]]></description><link>https://blog.danielphilipjohnson.co.uk/the-fast-and-the-flimsy-we-used-to-prototype-now-we-ship-ai-slop</link><guid isPermaLink="true">https://blog.danielphilipjohnson.co.uk/the-fast-and-the-flimsy-we-used-to-prototype-now-we-ship-ai-slop</guid><category><![CDATA[architecture]]></category><category><![CDATA[Artificial Intelligence]]></category><dc:creator><![CDATA[Daniel Philip Johnson]]></dc:creator><pubDate>Tue, 17 Jun 2025 19:15:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749153215007/4b07a63f-38dd-414b-95eb-f488e89a6e34.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-tldr-ai-makes-code-fastbut-not-finished">TL;DR: AI Makes Code Fast—But Not Finished</h3>
<p>AI tools get us to 80% fast. But that last 20%? That’s where real engineering happens: error handling, performance tuning, edge-case thinking, and security.</p>
<p>We used to build <strong>prototypes</strong> that were clearly incomplete. Now AI spits out polished-looking systems that feel finished… until they silently break, rot, or vanish into tech debt no one understands.</p>
<p>If you stop at the AI-generated "good enough," you're not shipping code. You're shipping a <strong>time bomb</strong> with clean syntax.</p>
<p><strong>Don’t confuse "it runs" with "it’s ready." And yet—because it looks done, we treat it like it is.</strong></p>
<h2 id="heading-looks-done-isnt"><strong>Looks Done. Isn’t.</strong></h2>
<p>So, AI makes it a breeze to spew out a thousand digital houses overnight, right? Blueprints? Who needs ’em. Inspections? Get outta here, no time! Just smash that “Generate” button and watch the damn neighbourhood pop up: walls slapped together, roofs ostensibly sealed, fresh paint gleaming like a salesman’s smile. From a distance, yeah, it looks passably complete.</p>
<p>But here’s the con, the sleight of hand: Before this AI circus rolled into town, we built <em>prototypes</em>. Actual shells of what a house <em>might</em> become. They got us to, say, 70%. No fancy plumbing. Wiring? Forget about it. Just enough to demo the idea and see if there was even a <em>there</em> there. Everyone knew it wasn’t finished. The cracks were showing. The neon danger signs were blinking. It was a <strong>checkpoint</strong>, not a finish line.</p>
<p>Then AI waltzes in and gets us to that seductive 80%. Suddenly, the lights flicker on. The taps <em>might</em> give you water, if they’re feeling generous. The stove sputters to life. It <strong>feels</strong> like a home, just enough to hoodwink stakeholders into thinking it’s move-in ready. So, because the pressure’s on, we do. We ship it. And that, my friends, is when the <em>real</em> migraine starts.</p>
<p>Missing auth logic? "We’ll bolt it on later." No error handling? "Future-us problem." Hardcoded data? "Temporary!" But here’s the thing: AI doesn’t leave a flag in the sand. It doesn’t say “this is just a scaffold.” It doesn’t warn you that the plumbing stops behind the drywall. It looks done.</p>
<blockquote>
<p><strong>Prototypes were incomplete by design. AI makes them look complete without the safety, structure, or scrutiny they need to be real.</strong></p>
</blockquote>
<p>Because beneath that flimsy, AI-generated 10% veneer of ‘polish’?</p>
<ul>
<li><p>There’s no goddamn <strong>foundation</strong>.</p>
</li>
<li><p>The wiring’s a rat’s nest, just begging to short-circuit your entire week.</p>
</li>
<li><p>The front door? Might as well be a painted-on cartoon; it sure as hell doesn’t lock.</p>
</li>
<li><p>And the kicker? <strong>No one</strong>, not even the poor sods who copy-pasted it into existence, can explain how any of this Rube Goldberg machine actually works.</p>
</li>
</ul>
<p>Security? An afterthought, if it was a thought at all. Scalability? <em>cue nervous laughter and shrugging</em>. Maintainability? Vanished into the ether, probably with your weekend plans. Ownership? That disappeared the second the prompt window closed.</p>
<p>But hey, it <em>runs</em> (mostly). It <em>demos</em> (if you don’t click the wrong thing). It <em>impresses</em> (the people who don’t have to fix it later).</p>
<p>So, we move in. We build on top of it. And by the time everyone figures out what a rickety pile of crap we’ve actually inherited, we’ve already churned out five more just like it, all teetering on the same shaky ground.</p>
<blockquote>
<p><strong>The 70% prototype was a checkpoint. The 80% AI version is a trap.</strong> <strong>And the faster we go, the faster they expect until it all comes crashing down.</strong></p>
</blockquote>
<h2 id="heading-prototyping-was-a-phasenow-its-the-product">Prototyping Was a Phase—Now It’s the Product</h2>
<p>Remember the good old days? Before the AI hype train barrelled through, we used to ship <em>prototypes</em>. And let’s be honest, they were often a glorious mess: fragile, held together with digital duct tape, and nowhere near ready for prime time. But here’s the thing: <strong>we knew it</strong>. We weren’t kidding ourselves, and we sure as hell weren’t trying to kid anyone else. The plan was always to go back, to actually <em>engineer</em> the damn thing.</p>
<p>That 70% half-baked version? It was just enough to demo the core idea, to see if there was even a <em>there</em> there. Nobody in their right mind looked at it and thought, "Yep, ship it!"</p>
<p>Missing auth logic? "Eh, we'll bolt it on later." No error handling? "Future us problem." Hard-coded data all over the place? "It's just temporary, boss, promise!" That was all part of the dance, the accepted sloppiness of the exploration phase. Then AI swaggered onto the scene. Now, we hit "generate," and out pops something that doesn’t just crawl; it walks, maybe even does a little jig. It <em>feels</em> done. It works just well enough to bamboozle stakeholders, and sometimes even ourselves, into believing it <em>is</em> done.</p>
<p>AI gets us to that tantalising 80%. And that, right there, is where the alarm bells should be screaming.</p>
<p>Because that last 20%? That’s not just spit-and-polish. That’s the performance tuning that stops it from cratering under load. That’s the edge case handling that prevents your users from hitting a digital brick wall. That’s the security pass that (hopefully) keeps you off the front page of Hacker News. That’s where <strong>real engineering</strong> happens. But with AI eagerly volunteering to fill in the blanks, that crucial, painstaking work just… quietly evaporates.</p>
<p>Why bother refactoring, why sweat the details, when the next prompt can churn out something else that’s vaguely “good enough”? It’s a slippery slope, paved with the best intentions and the siren song of speed.</p>
<blockquote>
<p><strong>“AI in the wrong hands won’t just cause bad code; it’ll cause systems that nobody knows how to fix.”</strong></p>
</blockquote>
<p>Let’s face it: stakeholders, bless their hearts, aren’t usually clamouring for robustness under the hood. They want features they can see. Demos that dazzle. Things that <em>look</em> like relentless forward momentum. And AI? It delivers that superficial shine, and it delivers it <em>fast</em>.</p>
<p>So, when some poor, battle-weary engineer pipes up, voicing those nagging concerns about ballooning tech debt, the glaring lack of tests, or that AI-generated chunk of code that just <em>feels</em> profoundly sketchy and wrong… what’s the all-too-common refrain?</p>
<blockquote>
<p><strong>“It’s working, isn’t it?”</strong></p>
</blockquote>
<p>What they don’t see, and what AI, by its very nature, often helps <em>obscure</em>, are the gremlins already multiplying just beneath that shiny surface:</p>
<ul>
<li><p>Edge cases that are silently, consistently fumbling the ball.</p>
</li>
<li><p>Hidden vulnerabilities, just patiently waiting for some enterprising script kiddie to stumble upon them.</p>
</li>
<li><p>Brittle, inscrutable code that’s guaranteed to shatter the moment it encounters the slightest bit of real-world pressure.</p>
</li>
</ul>
<p>We’ve officially stumbled into the era of <strong>invisible fragility</strong>. And believe me, that’s a damn sight scarier and a whole lot more dangerous than the honest, visible jank we used to deal with.</p>
<h2 id="heading-invisible-fragility-the-new-technical-debt">Invisible Fragility: The New Technical Debt</h2>
<p>Let’s talk about the old-school kind of tech debt. At least it was honest. You could <em>see</em> the mess. Spaghetti callbacks. <code>TODO (lol)</code> comments. Hacks you swore you’d fix “someday.” Ugly? Sure. But obvious.</p>
<p>You’d sigh in a code review and say:</p>
<blockquote>
<p>“Yeah, that’s rough. We’ll come back to it.”<br />And you meant it, mostly. You knew where the monsters lived.</p>
</blockquote>
<p>But AI-generated code? Different beast. It <em>looks</em> clean. It <em>sounds</em> right. It glides through PRs. Tidy structure. Sensible naming. No “WTFs per minute.” It feels safe.</p>
<p>Until it isn’t.</p>
<p>The debt’s still there: just polished, buried, camouflaged.</p>
<blockquote>
<p><strong>This is the new nightmare: invisible fragility.</strong></p>
</blockquote>
<p>What does that look like?</p>
<ul>
<li><p>Functions that silently swallow errors or return garbage on edge cases nobody tested.</p>
</li>
<li><p>Loops that look elegant but blow up at scale.</p>
</li>
<li><p>Logic that works in demos… and combusts in production.</p>
</li>
</ul>
<pre><code class="lang-js"><span class="hljs-comment">// Original</span>
<span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">getUserName</span>(<span class="hljs-params">userId</span>) </span>{
  <span class="hljs-keyword">const</span> user = <span class="hljs-keyword">await</span> db.findUserById(userId);
  <span class="hljs-keyword">return</span> user.name;
}
</code></pre>
<p>Crashes if <code>user</code> is null. The AI notices and patches it:</p>
<pre><code class="lang-js"><span class="hljs-comment">// AI-generated fix</span>
<span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">getUserName</span>(<span class="hljs-params">userId</span>) </span>{
  <span class="hljs-keyword">const</span> user = <span class="hljs-keyword">await</span> db.findUserById(userId);
  <span class="hljs-keyword">return</span> user?.name ?? <span class="hljs-string">'Unknown'</span>;
}
</code></pre>
<p>Looks fine. Doesn’t crash. But now we’ve just papered over the real issue.<br />The symptom’s gone. The cause? Still lurking: unlogged, unmonitored, and harder to detect.</p>
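<p>The real fix isn’t to hide the null; it’s to make it loud. A minimal sketch, assuming a <code>db</code> with <code>findUserById</code> and an injectable <code>logger</code> (both hypothetical stand-ins, not any project’s actual stack):</p>

```javascript
// Hedged sketch: fail loudly instead of papering over the null.
// `db` and `logger` are placeholders for whatever your project actually uses.
async function getUserName(userId, db, logger = console) {
  const user = await db.findUserById(userId);
  if (!user) {
    // The lookup failed; record it so someone can ask *why* the user is missing.
    logger.warn(`getUserName: no user found for id ${userId}`);
    throw new Error(`User ${userId} not found`);
  }
  return user.name;
}
```

<p>Throwing versus returning a sentinel is a judgement call; the point is that the missing user is now logged and owned, not silently renamed “Unknown”.</p>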
<p>And the scariest part?</p>
<ul>
<li><p>No backstory</p>
</li>
<li><p>No breadcrumbs</p>
</li>
<li><p>No trace of the why</p>
</li>
</ul>
<p>Just a detached blob of logic: unowned, untested, and uninterpretable.<br />We didn’t trade mess for quality.<br />We traded <strong>understanding for illusion</strong>. And that’s the kind of debt that doesn’t just grow; it eventually collapses everything beneath it.</p>
<h2 id="heading-dependency-spiral-the-more-ai-we-use-the-less-we-know"><strong>Dependency Spiral: The More AI We Use, the Less We Know</strong></h2>
<p>It always starts so innocently, doesn't it? A little time-saver here, a quick unblocker there. You’re jammed up. The deadline’s breathing down your neck like a caffeine-fuelled dragon. So you nudge the AI, "Hey, can you whip up a little something for this?" Maybe it’s a helper function, a basic endpoint, a component stub to get you going. And sweet relief, <strong>it actually works.</strong> Or, at least, it <em>seems</em> to.</p>
<p>So, the next time you hit a snag? Well, that AI did a pretty good job last time, right? You ask it for a bit more. Then a bit more after that. And slowly, almost without you noticing, a dangerous new reflex kicks in. The mental muscle for problem-solving starts to atrophy.</p>
<p>It becomes:</p>
<blockquote>
<p><strong>“AI wrote it, so let’s just let AI write the next part, too.”</strong></p>
</blockquote>
<p>Before you can say "technical debt," the human role morphs from <em>actual engineering</em> into something more akin to being a glorified switchboard operator: plugging AI-generated black boxes together with digital duct tape and a silent prayer that the whole damn Rube Goldberg contraption holds.</p>
<p>Engineers, good engineers, stop asking <em>why</em> a chunk of code works. They just squint at it, run the tests (if they exist), and if it doesn’t immediately explode, they shrug and move on. They stop truly understanding <em>how</em> their systems interconnect, how the data flows, where the dragons lie. They just know which magic incantation, which prompt, coaxes the AI into spitting out the next piece of the puzzle.</p>
<p>And teams? Oh boy. They start piling new features, entire floors, onto foundations they didn’t pour, didn’t design, and frankly, barely even glanced at. Foundations that might be made of digital papier-mâché for all they know.</p>
<p>What begins as a cheeky little shortcut, a "just this once" expediency, rapidly devolves into a full-blown doom spiral:</p>
<ul>
<li><strong>Knowledge doesn’t just fragment; it evaporates.</strong> Poof. Gone.</li>
</ul>
<p>At first glance, this seems reasonable. The syntax is clean. No ESLint errors. The feature even “worked” in staging.</p>
<pre><code class="lang-javascript"> <span class="hljs-comment">// Looks fine in a PR. What could go wrong?</span>
 <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">sendNotification</span>(<span class="hljs-params">userId, message</span>) </span>{
     <span class="hljs-keyword">const</span> user = <span class="hljs-keyword">await</span> db.getUser(userId);
     <span class="hljs-keyword">if</span> (!user?.preferences?.notifications) <span class="hljs-keyword">return</span>;
     <span class="hljs-keyword">await</span> emailService.send(user.email, message);
 }
</code></pre>
<p>But six months later, a silent issue emerges: some users aren’t getting emails.<br />No errors. No logs. No alerts.<br />Turns out <code>user.preferences.notifications</code> is undefined for legacy accounts, so the function quietly exits. Nobody knows why.</p>
<ul>
<li><p><strong>Ownership?</strong> That becomes a hot potato nobody wants to catch.</p>
</li>
<li><p><strong>Confidence</strong> in the system (and sometimes in themselves) erodes into a gnawing anxiety.</p>
</li>
<li><p>And <strong>documentation?</strong> Hah! Don’t make me laugh. That relic of a bygone era now trails so far behind it’s practically in a different timezone, attempting to describe a Frankenstein’s monster of a system that no single human fully comprehends.</p>
</li>
</ul>
<p>The code works. It’s just wrong.<br />And no one knows how to trace the failure back to its origin.</p>
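<p>To make that failure traceable, one hedged rewrite of the <code>sendNotification</code> example, with <code>db</code>, <code>emailService</code>, and <code>logger</code> as hypothetical injected stand-ins, gives every early exit a name:</p>

```javascript
// Hedged sketch: make every early exit explicit and observable.
// `db`, `emailService`, and `logger` are placeholders, not real APIs.
async function sendNotification(userId, message, { db, emailService, logger = console }) {
  const user = await db.getUser(userId);
  if (!user) {
    logger.warn(`sendNotification: unknown user ${userId}`);
    return { sent: false, reason: 'no-user' };
  }
  // Legacy accounts may predate the preferences object entirely.
  // Default explicitly instead of letting `undefined` silently mean "opted out".
  const wantsEmail = user.preferences?.notifications ?? true;
  if (!wantsEmail) {
    return { sent: false, reason: 'opted-out' };
  }
  await emailService.send(user.email, message);
  return { sent: true };
}
```

<p>Returning a reason from each branch is one design choice among several; the point is that a legacy account now produces a traceable answer instead of a silent no-op.</p>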
<p>It’s a fundamental shift in what we even mean by "unfinished work":</p>
<blockquote>
<p><strong>Technical debt used to mean “We’ll grit our teeth and fix this mess later.”</strong> <strong>Now it means “We’re not even sure what the hell this thing <em>is</em>, let alone how to fix it.”</strong></p>
</blockquote>
<p>The more you lean on AI to “just write it for me,” the more profoundly you disconnect from the very system you’re supposed to be building, stewarding, and ultimately, understanding. And when that system inevitably shits the bed (and trust me, it’s not <em>if</em>, but <em>when</em>), nobody knows where to even begin picking up the pieces. The panic sets in. Fingers get pointed. The codebase stares back, a silent, inscrutable monolith of AI-generated indifference.</p>
<p>Because it’s not just poorly understood code anymore. It’s a growing continent of <strong>technical unknowns.</strong> And that’s a terrifying place to be when the pagers start screaming at 3 AM.</p>
<h2 id="heading-system-decay-doesnt-stop-at-code-quality">System Decay Doesn’t Stop at Code Quality</h2>
<p>We’ve talked about fragility in code. But AI also erodes something deeper: shared understanding. It doesn’t leave behind breadcrumbs. No rationale. No commit messages. No architectural record. That might be fine until someone needs to fix or extend that code. Suddenly, you’re staring at logic written by a ghost, for a problem nobody remembers.</p>
<p><a target="_blank" href="https://danielphilipjohnson.com/blog/dont-rob-yourself-of-the-eureka-how-ai-is-killing-the-joy-of-being-a-developer">Read the full breakdown in “No Memory, No Maintenance.”</a></p>
<h2 id="heading-the-missing-20-is-where-quality-lives"><strong>The Missing 20% Is Where Quality Lives</strong></h2>
<p>AI gets us surprisingly far. It can spin up a full feature in seconds. The syntax appears clean. The output runs. From a basic requirements point of view, it looks done.</p>
<p>But it’s not.</p>
<p>Because AI rarely finishes strong. It’s designed to <em>generate</em>, not <em>refine</em>. And if you express doubt? It won’t push back. It’ll agree with you. Predict the tone. Mirror the uncertainty. Then confidently serve up something even more “fine for now.”</p>
<p>But it won’t:</p>
<ul>
<li><p>Slow down to write exhaustive tests.</p>
</li>
<li><p>Anticipate edge cases, concurrency issues, or race conditions.</p>
</li>
<li><p>Think about user error, degraded states, or what happens if that one API call fails.</p>
</li>
</ul>
<p>It doesn’t check assumptions. It doesn’t weigh trade-offs.<br />It doesn’t pause to ask, <em>“Is this right?”</em> It just completes the thought.</p>
<p>Which means if <em>you</em> don’t stop to check, no one will.</p>
<p>And that final 20%? The part after the demo, after the PR is merged, after the applause fades?<br />That’s where real quality lives.</p>
<p>It’s the integration tests that prove it works in the real world.<br />The performance tuning that keeps it fast under pressure.<br />The accessibility tweaks that make it usable for everyone.<br />The error handling that prevents a 2am cascade failure.<br />The security reviews that keep your company off the front page.</p>
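<p>In miniature, that unglamorous work looks like deliberately exercising the paths a demo never shows. A hedged sketch against the earlier <code>getUserName</code> example, with a hypothetical in-memory <code>db</code>:</p>

```javascript
// Hedged sketch: the edge-case checks a demo never exercises.
async function getUserName(userId, db) {
  const user = await db.findUserById(userId);
  return user?.name ?? 'Unknown';
}

async function runEdgeCaseChecks() {
  // Happy path: the only case the demo ever showed.
  const happyDb = { findUserById: async () => ({ name: 'Ada' }) };
  // Edge cases: a missing user, and a legacy record with no name field.
  const missingDb = { findUserById: async () => null };
  const legacyDb = { findUserById: async () => ({}) };
  return {
    happy: await getUserName(1, happyDb),
    missing: await getUserName(2, missingDb),
    legacy: await getUserName(3, legacyDb),
  };
}
```

<p>Three fake databases, three code paths. It’s dull, it demos terribly, and it’s exactly the work that catches the silent failures before production does.</p>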
<p>None of it is flashy. None of it demos well. And that’s exactly why it’s the first thing to vanish in a world moving too fast to care.</p>
<p>Because when we let AI take us to 80% and stop there, we don’t just lose resilience or clarity; we lose something else: <strong>Pride.</strong></p>
<p>The sense that a thing was built to last. That it can take a hit and keep standing.<br />That it reflects <em>intentional choices</em>, not just plausible code.</p>
<p>AI can draft. It can unblock. It can even surprise you.<br />But it can’t finish like a human who still gives a damn.</p>
<p>And if no one comes back to close the gap?</p>
<p>It stays open — until it breaks.</p>
<h2 id="heading-were-not-anti-ai-were-anti-sloppiness"><strong>We’re Not Anti-AI. We’re Anti-Sloppiness.</strong></h2>
<p>Let’s be clear: this isn’t just another anti-AI rant. We’re not here to ban tools or reject progress.<br />We’re here to defend <strong>the codebase</strong>. AI has incredible potential, and used well, it <em>accelerates</em> good engineering. We’ve always used automation to remove drudgery: Boilerplate generators. Snippets. IDEs that filled in <code>static void main(String[] args)</code> while we focused on what mattered.</p>
<p>But there’s a line between accelerating the work and abandoning the <strong>actual engineering</strong>. And we’re dangerously close to crossing it. It’s one thing to let AI handle the boring parts. It’s another to let it <strong>define the system</strong>, glue it together, and walk away like it’s finished.</p>
<p>Prototypes are fine.<br />We’ve all done them.<br />But if you’re going to use AI to build something fast?</p>
<blockquote>
<p><strong>Label it for what it is.<br />Flag the flaws.<br />Make the limitations loud and visible.</strong></p>
</blockquote>
<p>Don’t let a demo become a deadline.<br />Don’t let “good enough” become “let’s ship it.”</p>
<p>Because here’s the real risk:</p>
<blockquote>
<p><strong>We don’t just use AI to write code.<br />We start using it as a reason to stop thinking.</strong></p>
</blockquote>
<p>“Let’s have AI do it.”<br />“Let’s see what it generates.”<br />“Let’s ship this and tweak later.”</p>
<p>That’s the slope. And at the bottom of that slope? Engineers who don’t understand what they own. Teams that stop asking <em>why</em>. Systems that look finished but fall apart under pressure. We can’t afford that.</p>
<p>Especially not in an era where <strong>AI-native systems</strong> are going to outpace our ability to inspect every line.</p>
<blockquote>
<p><strong>We need to be more careful, not less.<br />More intentional, not more reactive.<br />More honest about prototypes, and more disciplined about what we trust.</strong></p>
</blockquote>
<p>AI is powerful. But the moment we stop treating it as a <em>tool</em>, and start treating it as an <em>autopilot</em>, we give up the one thing that matters most in engineering: <strong>Responsibility.</strong></p>
<h2 id="heading-dont-let-ai-lower-the-bar"><strong>Don’t Let AI Lower the Bar</strong></h2>
<p>Alright, let’s be brutally honest: AI isn’t vanishing in a puff of smoke anytime soon, and frankly, it probably shouldn’t. When it’s not being hyped to the moon by charlatans, it <em>can</em> be fast, genuinely helpful, and occasionally even pull a rabbit out of its digital hat. Used smartly, by people who actually know what they’re doing, it <em>could</em> supercharge development in ways we’re only just starting to fumble towards.</p>
<p><strong>But</strong>, and this is a Mount Everest-sized 'but', if we let that intoxicating speed become our new favourite excuse to slash corners, to blissfully ignore the gnarly edge cases, or to conveniently forget how our own damn systems actually tick under the hood… well, then the problem isn’t the shiny new tool. <strong>The problem, my friends, is us.</strong> We’re the ones holding the idiot ball.</p>
<p>Believe it or not, engineering has always been, and still is, a hell of a lot more than just barfing out features and closing tickets. It’s about making the tough calls, the grown-up trade-offs. It’s about exercising actual goddamn <strong>judgement</strong>, and then standing by what you build, for better or worse. Every decision we make, every shortcut we take, every warning sign we ignore: it all compounds, brick by lazy brick, into the systems that real people, real users, end up depending on. Sometimes with their livelihoods.</p>
<p>AI, on its own, isn’t going to torpedo the entire profession of engineering. But our collective willingness to just shrug, hand over the keys, and <strong>abdicate our goddamn responsibility</strong>? Yeah, that’ll do it. That’ll sink the ship alright.</p>
<p>So, the future isn’t some dystopian cage match: AI versus Engineers. It’s AI <strong>alongside</strong> engineers who still <strong>give a damn</strong> enough to ask the uncomfortable questions, to stick their hand up and flag what’s clearly missing or dangerously half-baked, and to actually <em>finish</em> the messy, critical job that some prompt only vaguely started.</p>
<p>Because that "missing 20%" we’ve been talking about the soul-crushing test suites, the mind-bending edge cases, the relentless performance tuning, the thankless security hardening that ain’t just a bit of polish you slap on at the end if there's time. That’s where <strong>robustness</strong> is forged. That’s where <strong>clarity</strong> finally dawns. That’s where actual, sleeves-rolled-up, coffee-fuelled <strong>engineering happens.</strong></p>
<p>So yeah, by all means, use the AI. Let it crank out the boilerplate that makes your eyes glaze over. Let it draft that first stab at a new component. Let it take a load off your shoulders.</p>
<p>But for the love of all that is holy, <strong>don’t let it lower your standards.</strong> Don’t let it do your critical thinking <em>for</em> you. And don’t you <em>dare</em> confuse “hey, it runs without immediately exploding!” with “yeah, this thing is actually production-ready.” There’s a canyon-sized difference.</p>
<p>The chasm between a flashy working prototype and a truly reliable, maintainable system isn’t just cosmetic; it’s foundational. It’s structural. It’s the difference between a cardboard movie set and a brick house that’ll withstand a storm.</p>
<p>So, <strong>reclaim that final, gruelling, indispensable 20%.</strong> That’s where the bar isn't just set; it's defended. That’s where our professionalism lives.</p>
<p>And in the end, we, the humans still in the loop, the ones with skin in the game, still get to decide. Do we hold that bar high, with pride? Or do we let it slip, inch by agonising inch, into the mud?</p>
]]></content:encoded></item><item><title><![CDATA[Reboot. Repeat. Regret]]></title><description><![CDATA[“They were careless people… they smashed up things and creatures and then retreated back into their money.”
— F. Scott Fitzgerald, The Great Gatsby

In the 1920s, Fitzgerald wrote of reckless elites who broke the world and floated off in silk shirts ...]]></description><link>https://blog.danielphilipjohnson.co.uk/reboot-repeat-regret</link><guid isPermaLink="true">https://blog.danielphilipjohnson.co.uk/reboot-repeat-regret</guid><category><![CDATA[Learned Helplessness]]></category><category><![CDATA[AI]]></category><category><![CDATA[software development]]></category><category><![CDATA[Critical Thinking]]></category><dc:creator><![CDATA[Daniel Philip Johnson]]></dc:creator><pubDate>Thu, 05 Jun 2025 18:42:34 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749140501464/e3a889da-edda-4ed6-a189-c47ec2a02f69.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p>“They were careless people… they smashed up things and creatures and then retreated back into their money.”</p>
<p>— <em>F. Scott Fitzgerald, The Great Gatsby</em></p>
</blockquote>
<p>In the 1920s, Fitzgerald wrote of reckless elites who broke the world and floated off in silk shirts and champagne flutes. A hundred years later, Sarah Wynn-Williams gave us a new version in her memoir <em>Careless People: A Story of Where I Used to Work</em>, chronicling her time at Facebook (now Meta). It’s the same pattern, but now the excess isn’t parties. It’s platforms. The debris isn’t emotional; it’s societal.</p>
<p>We don’t just live in the wake of tech’s carelessness. We <em>scroll</em> through it.</p>
<p>Wynn-Williams isn’t an outsider. She was there, inside the glass walls and growth charts. Her account is both a memoir and a warning: a whistleblower’s documentation of what happens when consequence is considered a PR issue, not a moral one.</p>
<p>At Facebook, she saw the cost of growth up close. Wynn-Williams later went public, accusing leadership, including Mark Zuckerberg and Sheryl Sandberg, of enabling misinformation, brushing off dissent, and putting expansion above ethics. From the platform’s delayed response to the <strong>Rohingya genocide in Myanmar</strong> to its quiet accommodation of <strong>Chinese censorship demands</strong>, she paints a picture of a company that didn’t just move fast; it muffled what didn’t fit the story.</p>
<blockquote>
<p><em>“This isn’t the age of disruption. It’s the age of consequence.”</em></p>
</blockquote>
<p>What she describes isn’t evil in the cartoon-villain sense; it’s quieter. It’s ambient. A culture so committed to acceleration that consequence becomes a rounding error. Or someone else’s job.</p>
<p>And it’s the same kind of recklessness that Telle Whitney is calling out in her new book <em>Rebooting Tech Culture</em>. Only this time, it’s not just about Facebook. It’s the whole damn operating system.</p>
<h2 id="heading-move-fast-break-people">Move Fast, Break People</h2>
<p>Let’s be honest: tech never fixed what Facebook broke. And now, AI is being built in the same mould.</p>
<p>The old playbook is back: ship fast, scale faster, deal with consequences when the lawsuits roll in. It worked (financially) for social media. So why not try it on reality itself?</p>
<p>Whitney’s warning is clear. As someone who’s spent decades inside the industry, she sees the same exclusionary culture playing out all over again in AI labs, in billion-dollar startups, in investor portfolios chasing “the next OpenAI.”</p>
<p>These tools aren’t being designed for everyone. And often, not by everyone either.</p>
<p>There’s a growing divide between those who build and those who are built upon. AI is increasingly shaped by a narrow set of voices — a concentration of elite engineers, funders, and platforms — while the rest of the world becomes its test set. The results affect how we search, how we learn, how we work, and how we’re seen.</p>
<blockquote>
<p>Shoshana Zuboff called this out years ago in <em>The Age of Surveillance Capitalism</em>:<br /><strong>Tech's true innovation wasn’t the algorithm — it was the ability to turn human experience into data, and then into profit.</strong></p>
</blockquote>
<p>It’s not paranoia. It’s business.<br />And as <em>The Social Dilemma</em> made painfully clear, when the product is free, you are the product. But with AI, we’re not just the product. We’re the pre-training data. The training set. The discarded scaffolding.</p>
<p>What’s missing isn’t just representation. It’s resistance. Teams move too fast for ethics to catch up, and feedback loops are flattened under quarterly metrics. A misfire in social media meant polarisation. A misfire in AI could mean systemic bias at the speed of scale.</p>
<p>The question isn’t whether these tools will reshape the world.<br />It’s whether anyone will be accountable for how they do it.</p>
<blockquote>
<p><em>“We fed the machine our stories. It spat out a world that forgot us.”</em></p>
</blockquote>
<h2 id="heading-when-memory-becomes-monetised">When Memory Becomes Monetised</h2>
<p>Tech companies used to profit from our attention: keeping us glued to screens so they could sell our gaze to advertisers. Then came the data era: every click, scroll, and message became raw material to feed algorithmic predictions.</p>
<p>Now, they’re profiting from something more permanent: our labour, captured in memory.</p>
<p><strong>Our labour — not in real time, but in permanent memory.</strong></p>
<p>Large language models like ChatGPT, Claude, and Gemini were trained on the open internet. That includes blog posts, Stack Overflow answers, GitHub issues, fan fiction, academic papers, podcast transcripts, showreels, digital art — anything public enough to be scraped. If it was online, it was fair game.</p>
<p>They didn’t ask. They didn’t pay.</p>
<p>But now it’s productive, polished, and resold.</p>
<blockquote>
<p>They trained the model on us. Now they sell it back to us as if we never existed.</p>
</blockquote>
<p>Every beautifully phrased response, every auto-generated image, every AI-written line of code is built on top of a million anonymous contributions: stitched together, stripped of origin, and served back like it was born in a lab.</p>
<blockquote>
<p><em>“The real looting was not of banks or shops, but of language, memory, and meaning.”</em><br />— Rebecca Solnit</p>
</blockquote>
<p>First, they commodified our time. Then our behaviour. Now, they’ve commodified our creativity — the final layer of what it means to be human.</p>
<p>They call it artificial intelligence. But what powers it is very real: Teachers. Designers. Journalists. Coders. Poets. Musicians. The dev who answered your urgent bug question at 2 a.m., unpaid and uncredited. The writer whose copy was good enough to train a headline generator. The animator whose style became aesthetic fodder for infinite prompts.</p>
<p>Online, that labour was invisible. Inside the model, it’s erased. This isn’t just mimicry. It’s replacement. A library turned vending machine. Convenience with no citation. Scale with no soul. We used to worry about plagiarism.</p>
<p>Now we’re watching <strong>industrialised forgetting</strong>, disguised as progress and monetised as product.</p>
<p>The platforms are betting you won’t notice. Or worse: that you’ll love the output too much to care.</p>
<p>But not everyone is letting it slide. Legal challenges are emerging from <em>Getty Images v. Stability AI</em>, where the company is accused of copying 12 million copyrighted photos, to <em>Authors Guild v. OpenAI</em>, in which writers claim their books were ingested without consent to train LLMs (<a target="_blank" href="https://arstechnica.com/tech-policy/2023/02/getty-images-sues-stability-ai-for-copying-12-million-photos-violating-copyright/">Ars Technica, 2023</a>). The question isn’t just ethical anymore — it’s legal. And precedent is still being written.</p>
<h2 id="heading-designed-to-exclude">Designed to Exclude</h2>
<p>Whitney traces the roots of modern tech culture to what she calls the <strong>PayPal mafia</strong>: a small group of men who created the companies that still shape our world (Facebook, Tesla, Palantir, LinkedIn). These founders didn’t just launch products. They created a cultural template: hyper-competitive, male-dominated, obsessed with the myth of the lone genius.</p>
<blockquote>
<p>Emily Chang’s <em>Brotopia</em> calls this out for what it is — a system built by and for a specific kind of founder, one that actively sidelines women and anyone who doesn’t match the brogrammer archetype. Meritocracy, she argues, became myth. A cover story.</p>
</blockquote>
<p>And that myth got intellectualized.</p>
<blockquote>
<p>In <em>Zero to One</em>, PayPal co-founder Peter Thiel describes the ideal founder as “a man with a plan” — singularly focused, contrarian, and immune to social consensus. It’s an ethos that rewards disruption, not dialogue. Vision, not feedback. Strength, not inclusion.</p>
</blockquote>
<p>The result? A system where “culture fit” became a weapon. Where VC firms chased pattern-matched genius while filtering out dissent. Where innovation meant building for yourself — and assuming everyone else would follow.</p>
<p>It’s not a bug.<br />It was the blueprint.</p>
<h2 id="heading-what-diversity-really-does">What Diversity Really Does</h2>
<p>Whitney isn’t just calling out the problem, she’s showing what better looks like.</p>
<p>She points to companies like <strong>AMD</strong>, where CEO Lisa Su and CTO Mark Papermaster led an inclusive design process that produced a more modular chip, one that helped AMD dethrone Intel. Inclusion wasn’t charity. It was <strong>strategy</strong>.</p>
<p>And that strategy is measurable.</p>
<blockquote>
<p>A 2020 report from <em>McKinsey &amp; Company</em> found that companies in the top quartile for ethnic and gender diversity were significantly more likely to outperform their peers financially — a finding that’s only strengthened over time. <a target="_blank" href="https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/diversity-wins-how-inclusion-matters">McKinsey, 2020</a></p>
</blockquote>
<p>The takeaway? Diversity isn’t decoration. It’s infrastructure. And the moment you treat it like a checkbox rather than a creative engine, you lose what it’s actually for.</p>
<p>Wynn-Williams saw this, too. Her Facebook memoir reveals how brilliant people slowly slip away from companies that don’t hear them. They don’t leave in protest. They leave in silence, like a tab quietly closing in a sea of open ones.</p>
<h2 id="heading-culture-change-starts-small">Culture Change Starts Small</h2>
<p>Whitney offers a new model, one built on what she calls the <strong>six Cs</strong>:</p>
<p><strong>Creativity. Courage. Confidence. Curiosity. Communication. Community.</strong></p>
<p>Not as slogans. As systems. Values that reward listening, make space for different kinds of thinking, and actually support the people inside the product pipelines.</p>
<p>But these aren’t soft skills; they’re the structural foundation of innovative, resilient teams.</p>
<ul>
<li><p><strong>Creativity</strong> means allowing ideas to come from anywhere, not just the most senior or loudest person in the room.</p>
</li>
<li><p><strong>Courage</strong> is the ability to challenge legacy thinking, suggest unpopular ideas, and experiment without fear of retribution.</p>
</li>
<li><p><strong>Confidence</strong> builds when people know their voice matters, not just when it echoes leadership's opinion.</p>
</li>
<li><p><strong>Curiosity</strong> fuels better questions, better product decisions, and better ethical foresight.</p>
</li>
<li><p><strong>Communication</strong> means active listening, psychological safety, and transparency beyond performative updates.</p>
</li>
<li><p><strong>Community</strong> reminds us that innovation is a team sport, one that only thrives when everyone belongs.</p>
</li>
</ul>
<p>You don’t have to be a CEO to start. Team leads can build this now. So can mid-career engineers. So can new grads. But the first step is seeing that our current tech culture isn’t neutral; it’s inherited.</p>
<p>And maybe it’s time we stop building with hand-me-down ethics.</p>
<h2 id="heading-the-new-tech-heroes-dont-look-like-the-old-ones">The New Tech Heroes Don’t Look Like the Old Ones</h2>
<p>We know the names: Zuck. Musk. Jobs. Bezos.<br />They were cast as visionaries. Mavericks. World-changers. Their biographies became Bibles. Their aphorisms, scripture.</p>
<p>And so, the industry measured success by the size of your exit, the edge in your voice, the chaos you could command. Leadership became performance. Genius became exclusion.</p>
<p>But maybe that story is outdated.<br />Maybe it was always a bit of a lie.</p>
<p>Because the real work — the kind that lasts, the kind that includes, the kind that heals what tech has broken — is being done elsewhere. Quietly. Sustainably. Without a TED Talk.</p>
<p>Whitney names her heroes: <strong>Lisa Su</strong>, who rebuilt AMD through inclusive design. <strong>Jayshree Ullal</strong>, who leads with substance, not spectacle. And the thousands more women, people of colour, neurodivergent thinkers, community-first builders — who are <strong>architecting futures that don’t require someone else’s ruin to function</strong>.</p>
<p>These are the technologists shaping systems with empathy, not ego.<br />They’re not tweeting through a crisis. They’re preventing it.</p>
<p>And maybe it’s time we stopped chasing disruption and started honouring care.<br />Maybe the future doesn’t need more geniuses.<br />Maybe it needs more listeners.</p>
<h2 id="heading-lets-stop-being-careless">Let’s Stop Being Careless</h2>
<p>In <em>Careless People</em>, Wynn-Williams doesn’t write a takedown. She writes a warning.</p>
<p>In <em>Rebooting Tech Culture</em>, Whitney gives us the manual for doing it differently.</p>
<p>Together, they tell a story that’s uncomfortably familiar: the most dangerous thing in tech isn’t the code. It’s the culture. The willingness to “figure it out later.” The assumption that harm is just the price of progress.</p>
<p>We’ve seen what happens when you build without responsibility. Facebook taught us that.</p>
<p>AI doesn’t have to be the sequel.<br />But if we don’t reboot the culture — not just the models — it will be.<br />And next time, the stakes will be much higher.</p>
<h2 id="heading-references">References</h2>
<h3 id="heading-books"><strong>Books</strong></h3>
<p>Zuboff, S. (2019) <em>The age of surveillance capitalism: the fight for a human future at the new frontier of power</em>. New York: PublicAffairs.</p>
<p>Thiel, P. (2014) <em>Zero to one: notes on startups, or how to build the future</em>. London: Virgin Books.</p>
<p>Chang, E. (2018) <em>Brotopia: breaking up the boys’ club of Silicon Valley</em>. New York: Portfolio.</p>
<p>Wynn-Williams, S. (2025) <em>Careless people: a story of where I used to work</em>. New York: Random House.</p>
<p>Whitney, T. (2025) <em>Rebooting tech culture: a new playbook for building inclusive innovation</em>. Cambridge, MA: MIT Press.</p>
<h3 id="heading-documentaries"><strong>Documentaries</strong></h3>
<p>Orlowski, J. (2020) <em>The Social Dilemma</em>. [Film] Distributed by Netflix.</p>
<h3 id="heading-podcasts"><strong>Podcasts</strong></h3>
<p>Ibarra, H. and Whitney, T. (2025) <em>Does the tech industry need a reboot?</em> [Podcast] Harvard Business Review, 14 May. Available at: <a target="_blank" href="https://hbr.org/podcast/2025/05/does-the-tech-industry-need-a-reboot">https://hbr.org/podcast/2025/05/does-the-tech-industry-need-a-reboot</a> (Accessed: 4 June 2025).</p>
<h3 id="heading-reports"><strong>Reports</strong></h3>
<p>McKinsey &amp; Company (2020) <em>Diversity wins: how inclusion matters</em>. Available at: <a target="_blank" href="https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/diversity-wins-how-inclusion-matters">https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/diversity-wins-how-inclusion-matters</a> (Accessed: 4 June 2025).</p>
<h3 id="heading-news-amp-legal-articles"><strong>News &amp; Legal Articles</strong></h3>
<p>Ars Technica (2023) <em>Getty Images sues Stability AI for copying 12 million photos, violating copyright</em>. Available at: <a target="_blank" href="https://arstechnica.com/tech-policy/2023/02/getty-images-sues-stability-ai-for-copying-12-million-photos-violating-copyright/">https://arstechnica.com/tech-policy/2023/02/getty-images-sues-stability-ai-for-copying-12-million-photos-violating-copyright/</a> (Accessed: 4 June 2025).</p>
<p>Business Today (2025) <em>From political interference to “share a bed” claims: who is Sarah Wynn-Williams?</em> Available at: <a target="_blank" href="https://www.businesstoday.in/latest/world/story/from-political-interference-to-share-a-bed-claims-who-is-sarah-wynn-williams-ex-meta-executive-behind-careless-people-memoir-mark-zuckerberg-sheryl-sandberg-joel-kaplan-468127-2025-03-17">https://www.businesstoday.in/latest/world/story/from-political-interference-to-share-a-bed-claims-who-is-sarah-wynn-williams-ex-meta-executive-behind-careless-people-memoir-mark-zuckerberg-sheryl-sandberg-joel-kaplan-468127-2025-03-17</a> (Accessed: 4 June 2025).</p>
<p>Hindustan Times (2025) <em>Facebook whistleblower Sarah Wynn-Williams’ explosive memoir shakes Silicon Valley</em>. Available at: <a target="_blank" href="https://www.hindustantimes.com/world-news/us-news/facebook-whistleblower-sarah-wynn-williams-explosive-memoir-shakes-silicon-valley-101742093810596.html">https://www.hindustantimes.com/world-news/us-news/facebook-whistleblower-sarah-wynn-williams-explosive-memoir-shakes-silicon-valley-101742093810596.html</a> (Accessed: 4 June 2025).</p>
<p>Wikipedia (n.d.) <em>Careless People</em>. Available at: <a target="_blank" href="https://en.wikipedia.org/wiki/Careless_People">https://en.wikipedia.org/wiki/Careless_People</a> (Accessed: 4 June 2025).</p>
]]></content:encoded></item><item><title><![CDATA[Talking to a Wall: My Experience with HireVue and the Death of Human Hiring]]></title><description><![CDATA[“I’m Daniel Philip Johnson. I’ve been called the songbird of my generation.”
— Me, quoting Step Brothers while psyching myself up to talk to a webcam.
Okay — not really. But I have spent a shocking amount of time talking to myself in front of a camer...]]></description><link>https://blog.danielphilipjohnson.co.uk/talking-to-a-wall-my-experience-with-hirevue-and-the-death-of-human-hiring</link><guid isPermaLink="true">https://blog.danielphilipjohnson.co.uk/talking-to-a-wall-my-experience-with-hirevue-and-the-death-of-human-hiring</guid><category><![CDATA[ai bias]]></category><category><![CDATA[AI ethics]]></category><category><![CDATA[interview]]></category><dc:creator><![CDATA[Daniel Philip Johnson]]></dc:creator><pubDate>Wed, 28 May 2025 12:09:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1748430501862/4025933a-0209-4c67-b665-5d626e69611c.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>“I’m Daniel Philip Johnson. I’ve been called the songbird of my generation.”</p>
<p>— Me, quoting Step Brothers while psyching myself up to talk to a webcam.</p>
<p>Okay — not really. But I have spent a shocking amount of time talking to myself in front of a camera, trying to impress an algorithm. That’s why I had such a great time using HireVue — the AI-powered interview platform that removes one unnecessary part of the hiring process: <strong>other people</strong>.</p>
<p>If you’re unfamiliar, HireVue is what happens when someone looks at a regular job interview and thinks, “What if we made this colder, lonelier, and judged entirely by a machine?” You get a link, some instructions, and then you’re off — recording answers to pre-set questions while staring into your webcam. No interviewer. No feedback. Just you and a timer.</p>
<hr />
<h3 id="heading-interviewing-into-the-void"><strong>Interviewing Into the Void</strong></h3>
<p>At first, I thought: how bad can it be? I mean, I’ve done video calls. I’m comfortable talking. I’m reasonably tech-savvy.</p>
<p>But there’s something deeply weird about smiling at your own reflection while trying to project confidence and warmth — to no one. You’re expected to maintain eye contact, speak clearly, smile naturally. And you do. Because that’s what we’ve trained ourselves to do in real conversations. You nod. You pause for reactions. You try to build rapport.</p>
<p>And then it hits you: you’re <strong>faking a connection with a void</strong>. Obviously platonic, but still a connection. It’s like flirting with your microwave. No matter how well you do, it won’t tell you if it enjoyed the chat.</p>
<hr />
<h3 id="heading-no-feedback-loop-no-humanity"><strong>No Feedback Loop, No Humanity</strong></h3>
<p>In a real interview, there’s a feedback loop. You say something, and maybe it doesn’t quite land — so the interviewer asks, “Can you clarify?” or “I didn’t quite follow.” That little nudge helps you reframe your point, recover, even shine.</p>
<p>HireVue doesn’t give you that. It just lets you bumble on. There’s no “wait, let me try that another way.” No chance to read the room — because <strong>there is no room</strong>. Just a cold, unblinking camera and a countdown timer.</p>
<p>It’s like doing a whiteboard session where the interviewer says nothing. You draw, you talk, you explain — and all the while, you're being silently observed. Judged.</p>
<blockquote>
<p>“There’s nothing like trying to build rapport with a loading icon.”</p>
</blockquote>
<p>Honestly? It felt like being in a game of <em>Portal</em>. You're the test subject. The algorithm is GLaDOS. You keep talking, hoping you're getting it right, with no idea what “right” even means.</p>
<p>At one point, I genuinely felt like I’d been taken captive and was recording a ransom plea.</p>
<p>“Please, let me work for your company. I have marketable skills. I can meet KPIs. Think of my family.”</p>
<p>Because who knows — maybe if I looked desperate enough, the algorithm would finally believe my response.</p>
<hr />
<h3 id="heading-the-black-box-with-a-scorecard"><strong>The Black Box with a Scorecard</strong></h3>
<p>It’s easy to laugh at the absurdity of AI interviews — until you realise what’s really happening.</p>
<p>Behind the scenes, algorithms are evaluating your performance. Supposedly looking for things like communication skills, enthusiasm, and problem-solving ability. But how? Based on what? Says who — the algorithm?</p>
<p>This is the heart of the <strong>“black box” problem</strong> in AI hiring. HireVue, like many AI interview platforms, does not provide a transparent breakdown of how specific non-verbal cues — facial expressions, vocal tone, body language — are weighted. You’re assessed by unseen metrics, against undefined standards, with no opportunity to ask for clarification or feedback.</p>
<blockquote>
<p>“Getting feedback from an algorithm is like yelling into a canyon and hoping it explains your echo.”</p>
</blockquote>
<p>It’s not just frustrating. It’s dangerous.</p>
<p>As a Forbes Tech Council article highlights, "In some cases, deeper AI learning models have become so advanced that even their creators don't fully comprehend how they work" [1]. That’s a terrifying sentence — especially when your ability to land a job might hinge on whether or not you blinked too much.</p>
<blockquote>
<p>“If the machine doesn’t know why it said no — how are we supposed to learn from it?”</p>
</blockquote>
<p>These systems are trained on historical data. And if that data reflects existing human bias — around gender, race, age, or accent — the system doesn’t remove bias. <strong>It scales it</strong> [2]. Amazon learned that the hard way with its now-retired AI hiring tool, which penalised CVs containing the word “women’s” [2]. HireVue, for its part, publicly stopped using facial analysis after backlash [3]. But without full transparency into how their Natural Language Processing (NLP), sentiment scoring, or behavioural analytics work — candidates are left guessing.</p>
<p>What should be a conversation becomes a monologue — with no red pen, no context, no second chance. It’s the digital version of <em>“Computer says no”</em> — except this time, it’s your future on the line.</p>
<hr />
<h3 id="heading-do-they-know-its-biased"><strong>Do They Know It’s Biased?</strong></h3>
<p>To be fair, HireVue and similar companies don’t pretend bias doesn’t exist. They claim to address it — and to their credit, they’ve moved beyond internal promises.</p>
<p>They’ve brought in independent firms like DCI Consulting Group [4], ORCAA [5], and Landers Workforce Science LLC [6] to audit their systems. These are not subsidiaries. They’re not internal rebrands wearing a new logo and a trench coat.</p>
<p>But let’s be honest: if I pay someone to evaluate me, I’m at least hoping for a good review — or at the very least, a polite one.</p>
<blockquote>
<p>“You don’t hire the food critic to write the menu.”</p>
</blockquote>
<p>Would you tell a billion-dollar client that their system might be unfair to disabled or neurodivergent candidates — or that it subtly filters out people with regional accents — if they’re the ones keeping your lights on?</p>
<blockquote>
<p>“Technically independent. Financially dependent.”</p>
</blockquote>
<p>Even the best-intentioned audits exist in a system of incentives. Consultants are still vendors. And vendors, as a rule, don’t bite the hand that signs the cheque.</p>
<p>And let’s be honest — no one wants to upset the wrong billionaire or, heaven forbid, <strong>Orange Man</strong>. We might end up with a tariff on “algorithmic scrutiny” by morning.</p>
<p>We’ve seen what happens when that kind of dynamic goes unchecked.</p>
<p>Remember the 2008 mortgage crisis? Credit rating agencies handed out AAA ratings to financial garbage because — surprise — the companies selling those toxic assets were the ones paying for the scorecards [7]. The illusion of objectivity collapsed. And so did the economy.</p>
<p>We’re not saying AI hiring will crash global markets. But when people’s livelihoods are being shaped by black-box systems reviewed by paid partners, the parallels are hard to ignore.</p>
<blockquote>
<p>“Trust doesn’t come from a contract. It comes from accountability.”</p>
</blockquote>
<p>So if you're wondering why these tools keep getting deployed — even with unresolved bias, vague metrics, and suspect audits — maybe it’s not just the companies.</p>
<p>Maybe it’s because the <strong>people building them aren’t being told to stop.</strong> In fact, they’re being told to go faster.</p>
<hr />
<h3 id="heading-the-shifting-sands-of-ai-safety-a-national-priority-or-a-nuisance"><strong>The Shifting Sands of AI Safety: A National Priority or a Nuisance?</strong></h3>
<p>The fight for ethical AI isn’t just happening in company town halls or consulting firm blog posts. It’s playing out at the highest levels of government. And the current message from the top? <strong>Move fast, and break… guardrails.</strong></p>
<p>Under the Trump administration, the U.S. has sharply shifted its AI priorities. Gone are the calls for “responsible AI” and fairness. In their place: a race to dominate global AI markets and root out so-called “ideological bias” [8].</p>
<p>The National Institute of Standards and Technology (NIST), the very body tasked with overseeing AI safety, was recently instructed to eliminate references to “AI safety,” “responsibility,” and “fairness” from its partner expectations [9]. Yes — the <em>AI Safety Institute</em> is being told to stop talking about safety.</p>
<p>Instead, they’ve been tasked with building tools to <strong>“expand America’s global AI position”</strong> [10], following a January 2025 executive order aimed at eliminating anything seen as a “barrier to American AI innovation” [10].</p>
<blockquote>
<p>“Safety isn't a barrier to innovation. It’s the foundation for trust.”</p>
</blockquote>
<p>Critics warn that stripping away these protections opens the door to unchecked, discriminatory algorithms — including those making hiring decisions [11]. But proponents dismiss the concern. After all, safety audits are “inexpensive,” and what’s a little bias compared to GDP growth? [12]</p>
<p>But when it’s your application being discarded, or your voice being mistranslated by an algorithm, <strong>“inexpensive” starts to feel like code for “ignore it.”</strong></p>
<p>This isn’t just a policy change — it’s a worldview shift. One that sees <strong>friction as failure</strong> and <strong>guardrails as bureaucracy</strong>. And it sends a clear message to companies building AI hiring tools:</p>
<p>You’re not just <em>allowed</em> to move fast.</p>
<p>You’re being <em>incentivised</em> to.</p>
<p>So when you’re wondering why a platform like HireVue still feels like a digital interrogation booth with no feedback, no accountability, and no second chances — maybe it’s because the people at the top decided that <strong>speed wins.</strong> Even if people lose.</p>
<h3 id="heading-the-human-cost">The Human Cost</h3>
<p>I get it. Companies want efficiency. They want scalable hiring. They want to reduce bias. But somewhere in the pursuit of “better,” we stripped away the human from Human Resources.</p>
<p>Real interviews are stressful — but at least they’re a conversation. At least there’s nuance. A moment. A spark. HireVue replaces that with something colder. Something measurable. Something unaccountable.</p>
<p>And if you’re neurodivergent? The challenges are amplified. Time limits, lack of clarification, unclear expectations — it’s a system designed around neurotypical behaviour [13]. If your accent doesn't match the model’s training data, or if your connection stutters, you’re already behind.</p>
<p>The ACLU recently filed a complaint against Intuit and HireVue, alleging their AI hiring technology works worse for deaf and non-white applicants [14]. This complaint highlights concerns that differences in speech patterns, accents, and communication styles can lead to biased outcomes. This isn't just about a "feeling" of unfairness; it's about demonstrable adverse impact on certain demographic groups, echoing concerns that have been raised by numerous studies on algorithmic bias in hiring.</p>
<blockquote>
<p>“Efficiency at scale sounds great — until you realise you’ve scaled dehumanisation.”</p>
</blockquote>
<hr />
<h3 id="heading-why-do-we-accept-this"><strong>Why Do We Accept This?</strong></h3>
<p>Maybe it’s job scarcity. Maybe it’s tech hype. Maybe we’ve just gotten used to being scanned, scored, and sorted like digital livestock.</p>
<p>Barcoded, processed, and evaluated for “emotional tone” like some sort of psychological meat quality test.</p>
<blockquote>
<p>“Smiles: slightly forced. Eye contact: inconsistent. Confidence: medium-rare.”</p>
</blockquote>
<p>At some point, we stopped interviewing.</p>
<p>We started <strong>optimising</strong> — for algorithms, for word clouds, for vibes we can’t see but are somehow being graded on.</p>
<p>And when you start tailoring your voice, posture, and word choice to please a machine — not a person — we lose something important.</p>
<p>We lose the conversation.</p>
<p>We lose the humanity.</p>
<p>We lose the basic dignity of being understood — not just <strong>measured, logged, and archived for audit purposes.</strong></p>
<blockquote>
<p>“Authenticity is hard when you’re performing for a robot with a spreadsheet.”</p>
</blockquote>
<hr />
<h3 id="heading-final-thoughts"><strong>Final Thoughts</strong></h3>
<p>I’ve talked to walls before — but at least they didn’t reject me via algorithm.</p>
<blockquote>
<p>“The system doesn’t need to hate you to hurt you.”</p>
</blockquote>
<p>So here’s my message to other job seekers: if you’ve struggled with HireVue, it’s not just you. This system isn’t built to understand you. It’s built to filter. And often, it filters out the very things that make you human.</p>
<blockquote>
<p>“We’re not asking for a standing ovation — just a human response.”</p>
</blockquote>
<p>We deserve hiring processes that see people, not probabilities. That reward clarity, not conformity. That give you a chance to clarify, connect, recover — not just auto-fail because you blinked wrong in the first five seconds.</p>
<p>We need to talk about this. Loudly. Repeatedly.</p>
<p>With our real voices.</p>
<p>Not just into a camera lens.</p>
<p>And definitely not for a machine that was never listening.</p>
<hr />
<h3 id="heading-for-my-intern-friends-surviving-the-black-box"><strong>For My Intern Friends: Surviving the Black Box</strong></h3>
<p>If you're staring down a HireVue interview and feeling the existential dread of talking to a blinking dot, here are a few tips to help you survive the void — and maybe even score that callback.</p>
<p><strong>1. Treat it like theatre, not a conversation.</strong></p>
<p>You won’t get nods, “mm-hmms,” or follow-ups — so front-load your clarity. Say what you mean, then say it again (briefly) with structure: "Here's the problem, here's how I approached it, here's what happened."</p>
<p><strong>2. Practice with a timer and webcam.</strong></p>
<p>Simulate the awkward silence. I used the <strong>Apple timer on my phone</strong> — set for 1–2 minutes — then just stared at it while answering questions like I was defusing a bomb. It helped. No audience, no feedback — just me, a screen, and the crushing awareness that somewhere, an algorithm was judging my vocal tone.</p>
<p><strong>3. Smile — but don’t force it.</strong></p>
<p>Yes, it’s weird. But a calm, confident demeanour helps signal “composed and hireable” to whatever behavioural model is watching. Think <em>video cover letter</em>, not <em>hostage tape</em>.</p>
<p><strong>4. Use STAR — even when the question doesn’t ask for it.</strong></p>
<p>That’s <em>Situation, Task, Action, Result</em>. It gives your answer structure, which helps you stay on track — and it helps the algorithm identify story arcs in your response.</p>
<p><strong>5. Review real HireVue questions in advance.</strong></p>
<p>Sites like Coursera <a target="_blank" href="https://www.coursera.org/articles/hirevue-interview-questions">👉 this one</a> have lists of common questions by role. Don’t memorise answers — but do prep your <em>go-to stories</em> for teamwork, problem-solving, leadership, and failure.</p>
<p><strong>6. Learn from the greats (or pretend to).</strong></p>
<p>This <a target="_blank" href="https://www.youtube.com/watch?v=0at_B6RpKoY">YouTube channel</a> taught me everything I needed to know: stare into the void, read a script, and <em>deliver it like you're leading a nation</em>. It's a teleprompter training video for aspiring anchors or presidents — perfect prep for HireVue.</p>
<p>Honestly, after five sessions I felt ready to address the nation, launch a stimulus package, and maybe even explain my gap year with gravitas.</p>
<blockquote>
<p>“My fellow stakeholders… in Q4, I overcame adversity by aligning cross-functional synergies under tight deadlines.”</p>
</blockquote>
<p>Honestly, if the next U.S. election isn’t run through HireVue, it’s a missed opportunity. At least then we’d have a fair metric for comparing the real candidates to Mark Zuckerberg — and we’d all get to see which candidates blink too much under pressure.</p>
<blockquote>
<p>“Candidate A: Answered clearly. Blinking rate: suspiciously low.”</p>
</blockquote>
<p>Let’s face it: we’re not being judged like humans. We’re being scored like deepfakes trying to pass a CAPTCHA.</p>
<blockquote>
<p><em>“I did not have inappropriate pauses with that question.”</em> — You, to the algorithm, 2025</p>
</blockquote>
<p><strong>7. And if all else fails?</strong></p>
<p>Remember: this isn’t the final boss. It’s just one gatekeeper.</p>
<p>You’re more than your sentence flow, your vocal tone, or how long you paused before answering.</p>
<p>You’re not failing an interview.</p>
<p>You’re debugging a system that wasn’t designed for you.</p>
<h3 id="heading-references">References</h3>
<p>[1] Forbes Technology Council. (2025, March 7). <em>The Black Box Problem: Why AI in Recruiting Must Be Transparent and Traceable</em>. Forbes. <a target="_blank" href="https://www.forbes.com/councils/forbestechcouncil/2025/03/07/the-black-box-problem-why-ai-in-recruiting-must-be-transparent-and-traceable">https://www.forbes.com/councils/forbestechcouncil/2025/03/07/the-black-box-problem-why-ai-in-recruiting-must-be-transparent-and-traceable</a></p>
<p>[2] Dastin, J. (2018, October 10). <em>Amazon scraps secret AI recruiting tool that showed bias against women</em>. Reuters. <a target="_blank" href="https://www.reuters.com/article/us-amazon-com-jobs-automation-idUSKCN1MJ08G">https://www.reuters.com/article/us-amazon-com-jobs-automation-idUSKCN1MJ08G</a></p>
<p>[3] HireVue. (n.d.). <em>Ethical AI</em>. HireVue. <a target="_blank" href="https://legal.hirevue.com/product-documentation/ai-ethical-principles">https://legal.hirevue.com/product-documentation/ai-ethical-principles</a></p>
<p>[4] HireVue. (2023, January 17). <em>HireVue to Engage DCI Consulting Group to Audit Algorithms for Bias</em>. HireVue Newsroom. <a target="_blank" href="https://www.hirevue.com/press-release/hirevue-leads-industry-in-fair-and-ethical-hiring-practice-engaging-external-auditor-dci-consulting-group-for-external-bias-audit-of-algorithms">https://www.hirevue.com/press-release/hirevue-leads-industry-in-fair-and-ethical-hiring-practice-engaging-external-auditor-dci-consulting-group-for-external-bias-audit-of-algorithms</a></p>
<p>[5] O'Neil, C. (2021, January 28). <em>ORCAA's Audit of HireVue</em>. ORCAA. <a target="_blank" href="https://www.hirevue.com/resources/template/orcaa-report">https://www.hirevue.com/resources/template/orcaa-report</a></p>
<p>[6] Landers, R. N. (2021, April 1). <em>Landers Workforce Science LLC Completes Independent IO Audit of HireVue Assessments</em>. <a target="_blank" href="https://www.hirevue.com/press-release/independent-audit-affirms-the-scientific-foundation-of-hirevue-assessments">https://www.hirevue.com/press-release/independent-audit-affirms-the-scientific-foundation-of-hirevue-assessments</a> <a target="_blank" href="https://landers.tech/">https://landers.tech/</a></p>
<p>[7] Andrews, E. (2009, February 16). <em>How the Credit Rating Agencies Contributed to the Financial Crisis</em>. The New York Times. <a target="_blank" href="https://truthout.org/articles/the-indisputable-role-of-credit-ratings-agencies-in-the-2008-collapse-and-why-nothing-has-changed/">https://truthout.org/articles/the-indisputable-role-of-credit-ratings-agencies-in-the-2008-collapse-and-why-nothing-has-changed/</a></p>
<p>[8] Knight, W. (2025). <em>Under Trump, AI Scientists Are Told to Remove ‘Ideological Bias’ From Powerful Models</em>. Available at: <a target="_blank" href="https://www.wired.com/story/ai-safety-institute-new-directive-america-first/">https://www.wired.com/story/ai-safety-institute-new-directive-america-first/</a> (Accessed: 26 May 2025).</p>
<p>[9] The White House. (2025, January 23). <em>Executive Order: Removing Barriers to American Leadership in Artificial Intelligence</em>. <a target="_blank" href="https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/">https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/</a></p>
<p>[10] Clifford Chance. (2024, November 7). <em>AI Pulse Check: Will the Biden Executive Order on AI Survive the Trump-Vance Administration?</em> <a target="_blank" href="https://www.cliffordchance.com/insights/resources/blogs/talking-tech/en/articles/2024/11/will-the-biden-executive-order-on-ai-survive-the-trump-vance-administration.html">https://www.cliffordchance.com/insights/resources/blogs/talking-tech/en/articles/2024/11/will-the-biden-executive-order-on-ai-survive-the-trump-vance-administration.html</a></p>
<p>[11] <em>See reference [14] (ACLU complaint) as a key example of potential harm.</em></p>
<p>[12] The White House. (2025, January 23). <em>Executive Order: Removing Barriers to American Leadership in Artificial Intelligence</em>. <a target="_blank" href="https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/">https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/</a></p>
<p>[13] Glouberman, M. &amp; Goldberg, M. (2024, June 4). <em>How to Make Job Interviews More Accessible</em>. Harvard Business Review. <a target="_blank" href="https://hbr.org/2024/06/how-to-make-job-interviews-more-accessible">https://hbr.org/2024/06/how-to-make-job-interviews-more-accessible</a></p>
<p>[14] ACLU. (2024, May 14). <em>Complaint Filed Against Intuit and HireVue Over Biased AI Hiring Technology That Works Worse For Deaf and Non-White Applicants</em>. <a target="_blank" href="https://www.aclu.org/press-releases/complaint-filed-against-intuit-and-hirevue-over-biased-ai-hiring-technology-that-works-worse-for-deaf-and-non-white-applicants">https://www.aclu.org/press-releases/complaint-filed-against-intuit-and-hirevue-over-biased-ai-hiring-technology-that-works-worse-for-deaf-and-non-white-applicants</a></p>
]]></content:encoded></item><item><title><![CDATA[Why 80% of AI Projects Fail & How Staff+ Engineers Can Save Them (HBR Strategy)]]></title><description><![CDATA[TLDR: Why Your AI Thing Will Probably Fail (And How Staff+ Can Stop It)
AI Fails (A Lot): ~80% of AI projects don't die from bad code, but from a thousand cuts at the margins trust, relevance, and understanding the probabilistic weirdness.
Staff+ Are...]]></description><link>https://blog.danielphilipjohnson.co.uk/why-80-of-ai-projects-fail-and-how-staff-engineers-can-save-them-hbr-strategy</link><guid isPermaLink="true">https://blog.danielphilipjohnson.co.uk/why-80-of-ai-projects-fail-and-how-staff-engineers-can-save-them-hbr-strategy</guid><category><![CDATA[Digital Transformation]]></category><category><![CDATA[ai strategy]]></category><category><![CDATA[tech leadership]]></category><dc:creator><![CDATA[Daniel Philip Johnson]]></dc:creator><pubDate>Sun, 25 May 2025 14:04:59 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1748026852457/56cbfbcc-269f-4033-a7e0-c7405c0c4313.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-tldr-why-your-ai-thing-will-probably-fail-and-how-staff-can-stop-it"><strong>TLDR: Why Your AI Thing Will Probably Fail (And How Staff+ Can Stop It)</strong></h2>
<p>AI Fails (A Lot): ~80% of AI projects don't die from bad code, but from a thousand cuts at the margins: trust, relevance, and understanding the probabilistic weirdness.</p>
<p>Staff+ Are The Fixers (Not Just Coders): Your job is less about debugging models, more about debugging expectations. Think system-level therapist for AI.</p>
<p><strong>Key Battlefronts:</strong></p>
<ul>
<li><p>AI isn't IT: Manage probabilistic outcomes, not deterministic specs.</p>
</li>
<li><p>No Trust = No Users: Build it in from day zero; you can't bolt it on.</p>
</li>
<li><p>Impact &gt; Feasibility: Solve real, painful problems, not just chase shiny tech.</p>
</li>
<li><p>Learn, Don't Just Ship: Test hypotheses, not just massive features.</p>
</li>
<li><p>Audit Like You Mean It: AI remaps ecosystems; monitor the unintended.</p>
</li>
</ul>
<p>The Real Work: Slower, harder decisions beat viral hype. This HBR podcast nails why.</p>
<h2 id="heading-ai-won-sort-of"><strong>AI won. Sort of.</strong></h2>
<p>AI steamrolled the roadmap, bypassed the process, and slipped through the org chart like a smug ghost. A thousand breathless demos and a mountain of GPUs later, we’re drowning in tools that hallucinate "facts" with unwavering confidence, generate code of… <em>variable</em> utility, and promise a planetary transformation before lunchtime.</p>
<p>But here’s the dirty little secret whispered in stand-ups and buried in post-mortems—the part no one wants on their OKRs: Most AI projects—around 80%, according to some analyses—still spectacularly flame out.</p>
<p>Not because the core math was wrong. Not because the cloud provider sent a bill that could bankrupt a small nation. Not even, usually, because the underlying model wasn’t clever enough.</p>
<p>They die a death of a thousand papercuts. In the yawning chasm between the dazzling hype and the dreary reality of actual usage. Between the triumphant launch announcement and the deafening silence of non-adoption. Between what leadership dreamed the AI would do, and what users, in their infinite wisdom, actually trusted it with (or didn't).</p>
<p>For Staff Engineers, Principal Developers, and technical leads, this isn't news. It's Tuesday. You're living in that chasm. You’re the one tasked with bridging the unbridgeable—connecting executive ambition with engineering alignment, abstract theory with hardened production systems, faint signals with overwhelming operational noise. You become the involuntary friction interpreter, the system-level therapist, and the person who has to gently explain why the "revolutionary AI feature" is currently gathering dust in the digital equivalent of a feature graveyard.</p>
<p>I recently absorbed an HBR On Strategy episode—"The Right Way to Launch an AI Initiative," featuring Iavor Bojinov[^2] (Harvard prof, ex-LinkedIn AI leader)[^1]. It’s a rare gem: no magic tricks, no silver bullets, just a candid walk through the messy, often counter-intuitive reality of making AI work in the real world. <em>(The HBR podcast is the primary source for the discussion that follows).</em></p>
<p>Here’s what stuck, and why anyone in a Staff+ role should consider it required listening.</p>
<ul>
<li><p><strong>Podcast:</strong> HBR On Strategy – "The Right Way to Launch an AI Initiative," Iavor Bojinov interviewed by Curt Nickisch (Originally HBR IdeaCast, 2023). A key episode is "Making Sure Your AI Initiative Pays Off" (HBR IdeaCast Episode 913, May 16, 2023). Searchable on HBR.org and major podcast platforms.</p>
</li>
<li><p><strong>Guest:</strong> Iavor Bojinov, Harvard Business School professor, former LinkedIn AI leader.</p>
</li>
<li><p><strong>Core Focus:</strong> Best practices for ensuring AI project success by navigating common pitfalls related to AI's probabilistic nature, user trust, project selection, experimentation, and real-world impact.</p>
</li>
<li><p><strong>Key Failure Points Discussed:</strong> No value add, low accuracy, bias/unfairness, lack of user trust/adoption.</p>
</li>
<li><p><strong>Recommended For:</strong> Staff+ engineers, PMs, tech leads grappling with the strategy, development, or scaling of AI features.</p>
</li>
</ul>
<h2 id="heading-ai-isnt-your-it-departments-pet-project-its-a-wild-probabilistic-beast">AI Isn’t Your IT Department’s Pet Project — It’s a Wild, Probabilistic Beast</h2>
<p><em>"The model didn’t fail. It just… had a different opinion this time."</em></p>
<p>Unlike traditional IT projects with predictable outputs, generative AI lives in ambiguity. Same input, different output. That’s not a bug — it’s the deal.</p>
<p>Traditional software comes with a deterministic contract: same input, same output. With AI, you're not just in a different room from that contract; you're in a different dimension where the laws of physics are suggestions. Its probabilistic nature means the same prompt, the same input data, can yield wildly different outputs.[^4] The initial quality is often a shrug emoji. This isn't a bug; it's a core feature of the uncertainty you've signed up for.</p>
<p>I’ve seen it: a developer meticulously inputs a prompt into a generative model. Result: garbage. They spot a single, almost imperceptible typo. Fix it. Rerun. <em>Entirely different universe of an answer.</em> Better, maybe. But not incrementally so. The whole premise shifted.</p>
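<p>To make the mechanism concrete, here is a toy sketch (not any real model's API; the logits are invented) of temperature-based sampling, the basic mechanism behind non-deterministic generative output. At temperature zero the choice is greedy and repeatable; above zero, the same scores can yield a different token on every run.</p>

```javascript
// Toy illustration of softmax sampling with temperature.
// Nothing here is a real model API; logits and values are made up.
function softmax(logits, temperature) {
  const scaled = logits.map((l) => l / temperature);
  const max = Math.max(...scaled);
  const exps = scaled.map((l) => Math.exp(l - max)); // subtract max for stability
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

function sampleToken(logits, temperature, rand = Math.random) {
  if (temperature === 0) {
    // Greedy decoding: deterministic, always the top-scoring token.
    return logits.indexOf(Math.max(...logits));
  }
  // Stochastic decoding: same logits, potentially a different token each call.
  const probs = softmax(logits, temperature);
  let r = rand();
  for (let i = 0; i < probs.length; i++) {
    r -= probs[i];
    if (r <= 0) return i;
  }
  return probs.length - 1;
}
```

<p>Run <code>sampleToken([2.0, 1.9, 0.5], 1)</code> a few times and the answer changes. That variability is the contract you actually signed, and the thing your processes have to absorb.</p>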
<p>As a Staff+ engineer, your primary job isn't to debug the AI's output; it's to debug your organisation's <em>expectations</em>. You need to become a translator, explaining to leadership, to product, to your own teams, that AI is less like a predictable software component and more like an ongoing, occasionally surreal, negotiation with probability. This means architecting for drift, designing for variance, and building operational muscle around managing inherent weirdness.</p>
<p>You can't spec "consistently delightful vibes." But you can design systems that monitor for trust thresholds and output quality. This requires more intensive upfront design for observability and instrumentation, not less. It means fostering a culture that relentlessly asks, <em>"Why on earth did it do THAT?"</em>—and being unnervingly comfortable when the honest answer is, <em>"We have a strong hypothesis and some data, but we don't fully, deterministically know. Here’s how we’re tracking it."</em></p>
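<p>What "monitor for trust thresholds and output quality" can look like in practice is nothing exotic. A minimal sketch, with every name invented for illustration: keep a rolling window over an output-quality score (an eval rubric grade, a thumbs-up rate, whatever you can actually measure) and flag when the mean sinks below an acceptable floor.</p>

```javascript
// Hypothetical sketch: rolling quality monitor for probabilistic output.
// "Score" is whatever your eval produces (rubric grade, thumbs-up rate...).
class OutputQualityMonitor {
  constructor(windowSize, minAcceptable) {
    this.windowSize = windowSize;       // how many recent outputs to consider
    this.minAcceptable = minAcceptable; // the trust threshold
    this.scores = [];
  }

  record(score) {
    this.scores.push(score);
    if (this.scores.length > this.windowSize) this.scores.shift();
  }

  rollingMean() {
    if (this.scores.length === 0) return null;
    return this.scores.reduce((a, b) => a + b, 0) / this.scores.length;
  }

  // True when recent output quality has drifted below the floor.
  isDrifting() {
    const mean = this.rollingMean();
    return mean !== null && mean < this.minAcceptable;
  }
}
```

<p>The point is not the ten lines of code; it is that someone decided, up front, what "good enough" means and instrumented for it.</p>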
<h2 id="heading-the-algorithm-was-flawless-the-users-ghosted-it">The Algorithm Was Flawless. The Users Ghosted It.</h2>
<p>Bojinov recounts a classic from his LinkedIn tenure: his team built a technically sophisticated AI product for data analysis, slashing processing time from weeks to days. A home run, technologically speaking. The result? Crickets. Minimal adoption.</p>
<p>Why? Not because the tool was faulty. Because users didn’t <em>trust</em> it. They hadn't been part of the journey. The "how" was a black box. So, the output, however miraculous, felt alien, suspect. And when users don't trust the black box, they'll revert to their trusty, rusty, but understandable spreadsheets every single time. Bojinov’s insight: <em>"If you build it, they will not [necessarily] come."</em></p>
<p>You can’t A/B test your way to trust after launch. It’s not a feature you can "bolt-on." Trust has to be woven into the fabric from the very first thread. It’s not just about algorithmic fairness and transparency (though those are table stakes). It's also, crucially, about users trusting the <em>developers</em> and the <em>intent</em>—believing the AI was designed to solve <em>their</em> actual problems, with their input valued.</p>
<p>For Staff+ engineers, this is about more than just elegant architecture. It’s about <strong>socio-technical system design</strong>. Orchestrating a process where users are involved early and continuously, not as passive test subjects, but as active co-creators in the solution. It's about championing transparency, even when it means admitting, "This thing is guessing, albeit very intelligently. Here are its limitations."</p>
<h2 id="heading-dont-start-with-tech-start-with-impact">Don’t Start With Tech: Start With Impact</h2>
<p>AI projects often ignite from a spark of technical feasibility: "Can we fine-tune this new foundation model? Can our infra even handle this?" According to Bojinov, this is ass-backwards.</p>
<p>He urges us to flip the lens: start with <strong>Impact</strong>. Does this project align with genuine, critical company strategy? If this AI performs its magic flawlessly, will it solve a problem so painful, or unlock an opportunity so significant, that it <em>actually matters</em> to the business or the users? Too many data science teams, Bojinov notes, get seduced by the siren song of the "latest and best" tech, prioritizing novelty over tangible business value. Most organizations, especially those newer to AI, don't need bleeding-edge models to see significant returns.</p>
<p>A crucial Staff+ role here is to be the <strong>impact-realist and ethical gatekeeper</strong>. Before a single line of PoC code is written, you should be the one asking:</p>
<ol>
<li><p><strong>"Does this earn its complexity and the inherent risks of being probabilistic?"</strong></p>
</li>
<li><p><strong>"What are the ethical implications—privacy, fairness, transparency—and have we addressed these <em>before</em> starting, not as an afterthought?"</strong> Responsible AI isn't a "bolt-on" fix; trying to address it mid-project is a recipe for costly restarts or, worse, shipping harm.[^5]</p>
</li>
</ol>
<p>Sometimes, the most valuable contribution a Staff+ engineer can make is to build the case for <em>not</em> doing the shiny AI project, and instead, redirecting that energy to a "boring" problem with real, validated user friction. That's leadership.</p>
<h2 id="heading-minimum-viable-learning-gt-minimum-viable-product-hypothesis-driven-experimentation">Minimum Viable Learning &gt; Minimum Viable Product: Hypothesis-Driven Experimentation</h2>
<p>The cautionary tale of Etsy's "infinite scroll" is a masterclass. They embarked on a significant UI re-architecture, months of work, to implement it. The user reaction? A collective shrug. Zero discernible impact.</p>
<p>Why? Because, as Bojinov highlights, their single "infinite scroll" experiment was actually a bundle of unverified assumptions:</p>
<ol>
<li><p><em>More results hypothesis</em>: Do users buy more if they see more products per page?</p>
</li>
<li><p><em>Faster results hypothesis</em>: Do users hate pagination delays, and will quicker access to more items drive engagement?</p>
</li>
</ol>
<p>Etsy could have tested these far more cheaply. For "more results," simply change a display parameter. For "faster results" (or the impact of delay), they could have <em>artificially slowed down</em> loading for a segment. Their follow-up simpler experiments showed these individual hypotheses didn't hold as strongly as assumed for their unique marketplace.</p>
<p>The lesson for Staff+ is stark: champion an architecture and culture of <strong>Minimum Viable Learning</strong>. It’s not just about shipping an MVP; it's about designing the <em>smallest possible experiment</em> to validate (or invalidate) the core hypothesis underpinning a feature. If your experimentation framework requires a papal bull and a committee of VPs, you're already failing. Your role is to advocate for and help build systems, guardrails, and processes that empower teams to test hypotheses rapidly and safely. Define what "safe to fail" means. Teach your teams to treat experiments as diagnostics, not referendums on their worth.</p>
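<p>The statistical core of such a "smallest possible experiment" is old and cheap. As a sketch (a standard two-proportion z-test with made-up numbers, not Etsy's actual data): compare conversion between control and variant and ask whether the difference clears noise before anyone re-architects a UI.</p>

```javascript
// Sketch: two-proportion z-test for a minimal A/B experiment.
// convA/convB are conversion counts; totalA/totalB are sample sizes.
function twoProportionZ(convA, totalA, convB, totalB) {
  const pA = convA / totalA;
  const pB = convB / totalB;
  // Pooled proportion under the null hypothesis "no difference".
  const pooled = (convA + convB) / (totalA + totalB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / totalA + 1 / totalB));
  return (pB - pA) / se; // |z| > 1.96 is roughly significant at the 5% level
}
```

<p>If the cheap parameter-flip experiment ("show 50 results per page instead of 20") can't move this number, months of infinite-scroll work probably won't either.</p>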
<h2 id="heading-the-algorithm-worked-then-it-remapped-the-world-unintended-consequences">The Algorithm "Worked." Then It Remapped the World. (Unintended Consequences)</h2>
<p>LinkedIn's "People You May Know" (PYMK) algorithm was built to do one thing: increase successful connection requests. Clear metric. Achievable goal. An audit a year later revealed something astonishing: the algorithm was profoundly impacting what jobs people were getting. By subtly shifting the proportion of "weak ties" (arm's length connections, as per Granovetter's theory)[^6] it suggested, PYMK was inadvertently boosting users' access to novel information and job opportunities. The AI didn't "know" it was doing this. But it was.</p>
<p>This is the ghost in the machine of deployed AI: <strong>ecosystem effects and unintended consequences.</strong> AI doesn't operate in a vacuum. It interacts with the entire company, its users, and sometimes society, in ways you cannot fully predict in a test environment. Most products, Bojinov warns, initially have a neutral or <em>negative</em> impact on the very metrics they aim to improve until these interactions are understood and tuned.</p>
<p>For Staff+ engineers, this means <strong>auditing and monitoring aren't afterthoughts; they are continuous, architectural responsibilities.</strong> Treat it like system ownership, not post-mortem archaeology. From day one, ask: "What second-order effects could this AI ripple outwards in six months? Whose metrics will it silently inflate or decimate? How will we even know?" Design for traceability. Instrument for long-term impact. Give your future self a map of "normal" before the AI subtly redrew it. LinkedIn learned from this, incorporating long-term job-related metrics into PYMK's monitoring—a testament to the power of proactive auditing.</p>
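<p>Mechanically, "giving your future self a map of normal" can be as plain as the following sketch (metric names and tolerances invented for illustration): snapshot the ecosystem metrics you care about before launch, then diff later snapshots against that baseline to surface whatever the model silently moved.</p>

```javascript
// Sketch: diff a post-launch metric snapshot against a pre-launch baseline
// and report anything that moved more than tolerancePct percent.
function auditDrift(baseline, current, tolerancePct) {
  const drifted = [];
  for (const [metric, before] of Object.entries(baseline)) {
    const after = current[metric];
    if (after === undefined || before === 0) continue; // nothing to compare
    const changePct = (Math.abs(after - before) / Math.abs(before)) * 100;
    if (changePct > tolerancePct) {
      drifted.push({ metric, before, after, changePct });
    }
  }
  return drifted;
}
```

<p>A PYMK-style audit would put long-horizon metrics in the baseline too — job applications, not just accepted invites — so the second-order effects show up in the diff instead of a year-later surprise.</p>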
<h2 id="heading-the-parting-shot-it-fails-at-the-margins">The Parting Shot: It Fails at the Margins</h2>
<p>Most AI initiatives don’t collapse because the core technology was fatally flawed. They bleed out at the margins.</p>
<p>The margin of <strong>user trust</strong>. The margin of <strong>strategic relevance</strong>. The margin of <strong>organisational understanding</strong> of its probabilistic nature. The margin of <strong>ethical foresight</strong>.</p>
<p>Bojinov's insights aren't about quick fixes.[^7] They’re about acknowledging that AI projects are, as he concludes, <strong>significantly harder</strong> than most other endeavors a company undertakes. But the <strong>potential payoff is equally tremendous.</strong> It's not hopeless. It just requires a different kind of rigor—recognizing the distinct stages, and, as leaders and senior engineers, championing and <em>building the infrastructure</em> (technical, process, cultural) to navigate each stage. This is how you reduce failure, increase adoption, and actually create value.</p>
<p>No secret algorithm. No viral hack. Just better, harder decisions, made with more deliberation and humility. Sometimes, seeing the messy system clearly, with all its flaws and pressures, and still choosing to build carefully, responsibly, and humanely within it, is the most radical—and most valuable—engineering work you can do.</p>
<hr />
<p><em>Primary Source: The insights in this post are primarily drawn from Iavor Bojinov's discussion on the HBR On Strategy podcast (originally HBR IdeaCast, "Making Sure Your AI Initiative Pays Off," Episode 913, May 16, 2023).</em></p>
<p>[^1]: The podcast episode referenced is <strong>"The Right Way to Launch an AI Initiative"</strong> from the <em>HBR On Strategy</em> podcast series, featuring Iavor Bojinov; it originally aired as an <em>HBR IdeaCast</em> episode: <a target="_blank" href="https://hbr.org/podcast/2025/05/the-right-way-to-launch-an-ai-initiative"><strong>"Making Sure Your AI Initiative Pays Off" (HBR IdeaCast Episode 913, May 16, 2023)</strong></a>. You can typically find this episode and other HBR podcasts on major podcast platforms or by searching the episode title or number on HBR.org.</p>
<p>[^2]: Iavor Bojinov's faculty profile at Harvard Business School: <a target="_blank" href="https://www.hbs.edu/faculty/Pages/profile.aspx?facId=1199332">https://www.hbs.edu/faculty/Pages/profile.aspx?facId=1199332</a></p>
<p>[^3]: For more on non-deterministic AI outputs, see Statsig, "What Are Non-Deterministic AI Outputs?": <a target="_blank" href="https://www.statsig.com/perspectives/what-are-non-deterministic-ai-outputs-">https://www.statsig.com/perspectives/what-are-non-deterministic-ai-outputs-</a></p>
<p>[^4]: Understanding AI-generated output variability is further explored by Wizard AI: <a target="_blank" href="https://wizard-ai.com/understanding-ai-generated-output-variability/">https://wizard-ai.com/understanding-ai-generated-output-variability/</a></p>
<p>[^5]: Building a scalable and adaptable AI governance program is detailed by OCEG: <a target="_blank" href="https://www.oceg.org/building-a-scalable-and-adaptable-ai-governance-program/">https://www.oceg.org/building-a-scalable-and-adaptable-ai-governance-program/</a></p>
<p>[^6]: A study on the employment value of weak ties by MIT IDE: <a target="_blank" href="https://ide.mit.edu/insights/new-study-proves-that-weak-ties-have-strong-employment-value/">https://ide.mit.edu/insights/new-study-proves-that-weak-ties-have-strong-employment-value/</a></p>
<p>[^7]: Oliver Wight EAME discusses essential questions leaders must ask before engaging with AI: <a target="_blank" href="https://oliverwight-eame.com/news/seven-essential-questions-leaders-must-ask-before-engaging-with-ai">https://oliverwight-eame.com/news/seven-essential-questions-leaders-must-ask-before-engaging-with-ai</a></p>
]]></content:encoded></item><item><title><![CDATA[No Memory, No Maintenance: Why AI Code Becomes a Liability Without Context]]></title><description><![CDATA[“AI? It writes code. Some of it, folks—some of it’s actually not bad. I’ve seen it. Believe me. It’s fast. It’s clean. Some people say it’s tremendous code. But does it build systems? Not even close. Total disaster.”

We’ve all heard the hype. Clean ...]]></description><link>https://blog.danielphilipjohnson.co.uk/no-memory-no-maintenance-why-ai-code-becomes-a-liability-without-context</link><guid isPermaLink="true">https://blog.danielphilipjohnson.co.uk/no-memory-no-maintenance-why-ai-code-becomes-a-liability-without-context</guid><category><![CDATA[#codemaintenance]]></category><category><![CDATA[AI]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[General Programming]]></category><dc:creator><![CDATA[Daniel Philip Johnson]]></dc:creator><pubDate>Fri, 23 May 2025 15:34:48 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1747945726114/ab5bbc66-9a4f-451f-b063-d6c5229c7d3d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p><strong>“AI? It writes code. Some of it, folks—some of it’s actually not bad. I’ve seen it. Believe me. It’s fast. It’s clean. Some people say it’s tremendous code. But does it build systems? Not even close. Total disaster.”</strong></p>
</blockquote>
<p>We’ve all heard the hype. Clean code. Fast demos. Deadlines magically met. Features ship like clockwork. Until they don’t. What happens after the demo is done? After the pitch deck glow fades and the stakeholders log off?</p>
<p><strong>Maintenance begins.</strong></p>
<p>And that’s where things fall apart. Because the code might be syntactically correct—but semantically lost. Requirements shift. Business logic bends. A new integration surfaces some edge case no one ever considered. And suddenly, your AI-generated function is a haunted house of assumptions no one remembers making.</p>
<p>You’re not shipping features anymore. You’re <strong>excavating</strong> brittle abstractions—digging through code someone thought was helpful but never explained. No map. No memory. Just a maze of decisions with no trail.</p>
<p>Because here’s the part most AI evangelists skip: <strong>AI can write code, but it doesn’t understand <em>why</em>.</strong> And in engineering, the <em>why</em> isn’t a luxury—it’s everything.</p>
<p>It’s what lets someone join a team and make sense of the architecture.<br />It’s what lets you debug an outage at 3 a.m. without first trying to channel the ghost of the last dev.<br />It’s how we move fast <em>without</em> breaking everything.</p>
<blockquote>
<p>“When you lose the why, you don’t just slow down.<br />You lose the system.”</p>
</blockquote>
<h2 id="heading-it-writes-code-but-not-the-why">It Writes Code — But Not the Why</h2>
<p>Let’s give AI its due: it slings syntax like a short-order cook—fast, clean, sometimes even elegant. It replicates patterns, closes brackets, follows best practices—often good enough to pass a pull request review.</p>
<p>But mimicry isn’t mastery. Flawlessly reproducing patterns isn't the same as understanding the principles <em>behind</em> them or the reasons certain practices evolved. Mastery involves grasping <em>why</em> one solution is chosen over countless others in a specific context—a depth of situational insight and reasoned judgement that statistical replication doesn't possess.</p>
<p>Code is not just syntax. It’s a reflection of messy business rules, evolving requirements, and years of hard-won, “why the hell did we do it <em>this</em> way?” decisions. And AI? It wasn’t there for any of it.</p>
<p>It didn’t sit in the planning meeting where Marketing rewrote the requirements—again. It didn’t argue over technical trade-offs under a deadline. It doesn’t understand the verbal patch you made to keep a legacy data contract alive.</p>
<blockquote>
<p>In software, <strong>context isn’t just king—it’s the whole damn kingdom.</strong> And AI lives outside the gates.</p>
</blockquote>
<h2 id="heading-systems-are-stories">Systems Are Stories</h2>
<p>Code is a story. Every line is shaped by conversations, user quirks, technical constraints—most of which never make it into documentation.</p>
<p>But humans leave trails. We create imperfect, essential memory:</p>
<ul>
<li><p>Pull request debates and war stories</p>
</li>
<li><p>Slack threads with buried insight</p>
</li>
<li><p>Jira tickets that explain the <em>why</em>, not just the <em>what</em></p>
</li>
<li><p>Cryptic-but-critical commit messages</p>
</li>
<li><p>Desperate, blessed code comments: <em>“Dear future me…”</em></p>
</li>
</ul>
<p>This is how maintainability survives. Not in the syntax—but in the context.</p>
<p>AI-generated code offers no such memory. It’s like receiving a precisely machined engine part, delivered without a serial number, material specification, or performance test results: its tolerances are unknown, its stress limits untested, its role in the larger assembly a dangerous guess. Just a seemingly perfect solution until it shatters under operational strain.</p>
<blockquote>
<p><strong>Context lingers. AI erases.</strong></p>
</blockquote>
<h2 id="heading-the-context-window-is-not-enough">The Context Window Is Not Enough</h2>
<p>AI models operate within narrow context windows. Even the most advanced ones don’t remember your planning meetings, your war-room debates, or that one bizarre constraint Legal demanded during a hallway chat. Those details aren’t in the repo. They’re in your head. In Slack. In timeboxed meetings. In decisions made under pressure—and never written down. And if the AI didn’t see them? It doesn’t know them. <strong>Worse, any context it was fed—say, the initial specs from a Jira ticket—can rapidly become outdated as requirements shift and discussions evolve. Unless someone meticulously re-briefs the AI with every single change, its understanding becomes a dangerously misleading snapshot of the past.</strong> So when it writes code, it’s not preserving your system’s story. It’s overwriting it—with silence.</p>
<blockquote>
<p><strong>And silence doesn’t scale.</strong></p>
</blockquote>
<h2 id="heading-the-illusion-of-help"><strong>The Illusion of Help</strong></h2>
<p>AI feels fast—until it doesn’t. You describe the task, and out comes code. Neat! But then it doesn’t quite fit. So you clarify. Add edge cases. Paste old snippets. Rewrite the prompt. Again.</p>
<p>It starts to feel less like acceleration and more like stenography. You're feeding the machine what it needs—step by step—until eventually, it spits something back that looks vaguely right. But something unsettling happens along the way:</p>
<blockquote>
<p>The more context you give it, the more obvious it becomes—you already <em>had</em> everything you needed to solve the problem.</p>
</blockquote>
<p>The AI didn’t solve it. <em>You</em> did—piecemeal, in slow motion, while explaining it to a tool that can’t understand you.</p>
<p>The code? It just echoed back fragments of your own thinking.</p>
<p>What starts as collaboration ends in recursion. You don’t save time. You just offload your clarity—into a black box that forgets everything the moment you close the tab.</p>
<p><strong>And that’s the real problem.</strong></p>
<h2 id="heading-when-chat-becomes-a-dead-end"><strong>When Chat Becomes a Dead End</strong></h2>
<p>In a normal workflow, someone makes a decision—and leaves a trail. A Jira ticket. A commit message. A Slack thread. Even a lazy <code>“// TODO: Fix this later”</code> can be a breadcrumb.</p>
<p>But when you work with an AI? The initial spark of an idea, the core of a decision, lives within the transient space of a prompt. The code gets generated—but the nuanced human reasoning that guided it vanishes the moment you close the tab or scroll away.</p>
<blockquote>
<p><strong>And this is where the illusion of AI efficiency often shatters. Think about your interactions in tools like Cursor IDE, or any AI chat window. That entire conversational dance—the initial prompt, the back-and-forth clarifications, the AI’s suggestions you refined, the subtle pivots in your request—forms the <em>true</em> origin story of the code. Yet, this vital context is dangerously ephemeral. Close that chat, lose that session, or let the context window refresh, and that precise line of reasoning, that specific evolutionary path of the prompt, is often gone. <em>Forever</em>. Not archived, not versioned alongside the code, just… gone.</strong></p>
</blockquote>
<p>This isn't like a buried Slack thread you might eventually unearth. In many current AI tools, this critical dialogue is simply not designed for the same persistence or traceability we demand for every other engineering artefact.</p>
<p>So, what are you left with? No version control for the prompt that <em>actually</em> generated the final accepted code. No searchable thread linking the iterative human-AI conversation to the resulting logic. No reliable audit trail for a significant part of your creative process.</p>
<p>Just a block of working code, utterly severed from the iterative, context-rich dialogue that birthed it. When that chat history evaporates—and it so often does—the <em>why</em> evaporates with it. No one can trace a bug back to a design choice explored only in that fleeting AI chat. No one can reconstruct the full intent behind a feature that was incrementally coaxed out of a series of prompts. You’re not debugging a system—you’re reverse-engineering a ghost.</p>
<h2 id="heading-when-it-breaks-it-breaks-hard"><strong>When It Breaks, It Breaks Hard</strong></h2>
<p>You know the moment. Tuesday morning. 9:47 a.m. Slack’s on fire. Prod’s down. CEO’s typing. Everyone’s pretending to stay calm. You open the file. And there it is: a wall of perfectly formatted logic. It compiles. It runs. But no one knows what it’s doing—or why it’s there.</p>
<p>No comments. No docs. No breadcrumbs. No memory. Just a set of branching conditions that seem... haunted.</p>
<p>The questions come fast, <strong>each one a testament to a context that vanished with a closed browser tab:</strong></p>
<ul>
<li><p>What was the <em>actual business problem</em> this prompt was truly meant to address?</p>
</li>
<li><p>Why <em>this specific output</em> from the AI—what nuance in the prompt led to this now-inscrutable logic?</p>
</li>
<li><p>What forgotten examples, constraints, or few-shot instructions fed to the AI shaped <em>this exact structure</em>?</p>
</li>
<li><p>Who crafted that final, critical prompt, and what vital piece of their human understanding evaporated when the AI chat session ended?</p>
</li>
</ul>
<blockquote>
<p>Spoiler: it came from a prompt. A one-off test. A forgotten tab. The dev’s gone. The context? Deleted before it ever hit Git. And now it’s your job to make sense of it.</p>
</blockquote>
<h2 id="heading-ghost-code-haunted-teams"><strong>Ghost Code, Haunted Teams</strong></h2>
<p>This is the risk. Code stitched by AI. Shipped under pressure. Prompt long forgotten. No context. No history. No human connection to the decision.</p>
<blockquote>
<p><strong>Code written by a ghost, for a problem no one remembers.</strong> That’s not engineering. That’s séance work.</p>
</blockquote>
<p>Here’s what it looks like in the wild:</p>
<pre><code class="lang-javascript"><span class="hljs-comment">// Function to determine eligibility and process user data</span>
<span class="hljs-comment">// AI-generated, deployed last quarter. Author: AI_CodeBot_v2.3</span>
<span class="hljs-comment">// Original prompt: "Create function for user processing based on level and flags."</span>

<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">handleUserProcessing</span>(<span class="hljs-params">userId, accessLevel, internalFlagsBundle</span>) </span>{
    <span class="hljs-comment">// Assuming these functions exist and userData/config are objects:</span>
    <span class="hljs-keyword">const</span> userData = fetchUserDetails(userId); 
    <span class="hljs-keyword">const</span> config = loadSystemConfig();

    <span class="hljs-keyword">const</span> paramAlpha = config.getParam(<span class="hljs-string">"alpha_threshold"</span>, <span class="hljs-number">0.75</span>); <span class="hljs-comment">// Or config.alpha_threshold || 0.75;</span>
    <span class="hljs-keyword">const</span> featureXActive = internalFlagsBundle.checkFlag(<span class="hljs-string">"FEATURE_X_MODE"</span>);

    <span class="hljs-keyword">let</span> result;

    <span class="hljs-keyword">if</span> (accessLevel &gt;= <span class="hljs-number">5</span> || (userData.is_legacy_user &amp;&amp; featureXActive)) {
        <span class="hljs-keyword">if</span> ((userData.score || <span class="hljs-number">0</span>) &gt; paramAlpha * <span class="hljs-number">100</span>) {
            <span class="hljs-comment">// In JS, named params are often passed as an object</span>
            result = processEnhancedPath(userData, { <span class="hljs-attr">keySetting</span>: <span class="hljs-string">"val_AY4"</span> });
        } <span class="hljs-keyword">else</span> {
            result = processStandardPathB(userData, { <span class="hljs-attr">timeoutMs</span>: <span class="hljs-number">500</span> });
        }
    } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (accessLevel === <span class="hljs-number">4</span> &amp;&amp; !featureXActive) {
        result = processPathC(userData, { <span class="hljs-attr">deprecatedParam</span>: <span class="hljs-string">"legacy_setting_7"</span> });
    } <span class="hljs-keyword">else</span> {
        result = processDefaultPath(userData);
    }

    <span class="hljs-comment">// Assuming result object has statusCode and payload properties</span>
    logProcessingEvent(userId, result.statusCode); 
    <span class="hljs-keyword">return</span> result.payload;
}
</code></pre>
<p>Everything <em>works</em>—until it doesn’t.</p>
<p>And then you're left asking:</p>
<ul>
<li><p>Why <code>paramAlpha * 100</code>?</p>
</li>
<li><p>What does <code>"val_AY4"</code> mean?</p>
</li>
<li><p>Why does <code>FEATURE_X_MODE</code> affect legacy users?</p>
</li>
<li><p>What’s <code>legacy_setting_7</code>?</p>
</li>
<li><p>What happens if we remove <code>processPathC</code>?</p>
</li>
</ul>
<p>None of it is in the code. None of it is documented. It lived in a prompt. In someone’s head. Maybe in Slack.</p>
<p><strong>Now it’s your problem.</strong></p>
<h2 id="heading-the-real-cost-of-contextless-code">The Real Cost of Contextless Code</h2>
<p>This isn’t an anti-AI crusade. The tools are impressive. They generate syntax, accelerate tasks, and mimic patterns well enough to fool a pull request reviewer. That’s not the issue.</p>
<p>The issue is pretending that mimicry is understanding.</p>
<p>When you let AI generate code without guarding the context behind it, you’re not building software—you’re assembling puzzles you don’t even know exist until production breaks. And when it does, you won’t find a neatly written spec or a Slack thread with the rationale. You’ll find a black box. A blank space where memory was supposed to be.</p>
<p>AI doesn’t know your business logic. It wasn’t there when the feature was pitched, revised, rewritten, and hacked together hours before launch. It doesn’t understand the trade-offs made under pressure, or the weird legacy system you kept alive for one client on an old plan who still writes in to support once a week.</p>
<p>It knows how to autocomplete a function. It doesn’t know how to carry a system.</p>
<p>And that’s the trade. You’ll get speed. You’ll get clean syntax. But you won’t get the why. You won’t get the trail. You won’t get someone who remembers what broke three quarters ago and how you duct-taped it into shape just long enough to pass QA.</p>
<p>So yes, let the AI write the code. But be ready.</p>
<p>Because when something finally goes wrong—and it will—you won’t just be debugging.<br />You’ll be summoning context from the dead.<br />You’ll be holding a <a target="_blank" href="https://en.wikipedia.org/wiki/S%C3%A9ance">séance</a>.</p>
<h2 id="heading-so-what-do-we-do">So What Do We Do?</h2>
<p>This isn't a rejection of AI. It's a reminder of how we have to operate: AI can generate code with impressive efficiency, but human engineers are still responsible for building and maintaining coherent, understandable systems. So if you bring AI into your workflow, document the why. Archive the prompts that produced the code. Capture the back-and-forth that shaped it. Version that context with the same diligence you apply to the code itself. Skip those practices, and you won't just be losing pieces of context. You'll be building the haunted systems of tomorrow.</p>
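<p>What does that look like in practice? Here is a sketch (the constants, rationale, and ADR reference are all illustrative, not from a real codebase) of the difference a few lines of recorded context make to the kind of mystery branch shown earlier:</p>
<pre><code class="lang-typescript">// Illustrative sketch: carrying the "why" alongside the code.
// (The rationale and the ADR reference below are hypothetical.)

// WHY: rates are stored per unit but reported as percentages, so we
// scale by 100 at the boundary. See ADR-0142 and the prompt archive.
const PERCENT_SCALE = 100;

// WHY: the vendor's code for the legacy tier kept alive for one
// remaining client on the old plan. Also documented in ADR-0142.
const LEGACY_TIER_CODE = "val_AY4";

function scaleToPercent(ratePerUnit: number): number {
  return ratePerUnit * PERCENT_SCALE;
}

function isLegacyTier(tierCode: string): boolean {
  return tierCode === LEGACY_TIER_CODE;
}

scaleToPercent(0.5);     // 50
isLegacyTier("val_AY4"); // true
</code></pre>
<p>Same behaviour as the contextless version, but the next person doesn't need a séance to find out why the numbers exist.</p>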
]]></content:encoded></item><item><title><![CDATA[Don’t Rob Yourself of the Eureka: How AI Is Killing the Joy of Being a Developer]]></title><description><![CDATA[Artificial intelligence was supposed to save us time. Eliminate the repetitive parts of our jobs. Automate the boring. Free us to focus on the interesting, the creative, the hard-but-worth-it stuff.
Instead, in the current moment its being pushed to ...]]></description><link>https://blog.danielphilipjohnson.co.uk/dont-rob-yourself-of-the-eureka-how-ai-is-killing-the-joy-of-being-a-developer</link><guid isPermaLink="true">https://blog.danielphilipjohnson.co.uk/dont-rob-yourself-of-the-eureka-how-ai-is-killing-the-joy-of-being-a-developer</guid><category><![CDATA[AI]]></category><category><![CDATA[ADHD]]></category><category><![CDATA[software development]]></category><dc:creator><![CDATA[Daniel Philip Johnson]]></dc:creator><pubDate>Thu, 08 May 2025 13:29:58 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1746710376804/09a091f2-e255-4502-913e-0d922afba2b6.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Artificial intelligence was supposed to save us time. Eliminate the repetitive parts of our jobs. Automate the boring. Free us to focus on the interesting, the creative, the hard-but-worth-it stuff.</p>
<p>Instead, in the current moment it’s being pushed to speed things up. Speeding everything up, yes. But also stripping out the satisfaction. The part where we grow. The part where we learn. The part where we feel anything at all.</p>
<p>And for developers like me—who once found joy in the struggle of improving systems and solving problems—it’s starting to feel like something important is being quietly taken away.</p>
<hr />
<h2 id="heading-the-puzzle-wasnt-the-problem">The Puzzle Wasn’t the Problem</h2>
<p>I was playing The Blue Prince—a clever little puzzle game where the world only makes sense if you’re willing to slow down and look. I got stuck. And before I could ask for help, someone next to me blurted out the answer.</p>
<p>I didn’t ask. I didn’t want it. But just like that, the puzzle was solved.</p>
<p>Except it wasn’t.</p>
<p>The room opened, the solution worked, but the joy was gone. No pride. No triumph. Just progress. I hadn’t earned the answer—I’d been handed it. And it felt hollow—like skipping the climb and taking the helicopter to the summit.</p>
<p>Later, I was stuck again, this time on a different puzzle. After several minutes, I instinctively reached for Google. But I paused, remembering that earlier moment. Was I about to rob myself again?</p>
<p>So I stepped away. Made a cup of tea. Let the puzzle sit in the back of my brain. A few minutes later—bam—it clicked. And that felt good. That was the hit I’d missed: the satisfaction of something earned, not given.</p>
<p>That experience made me realise something important: the pause, the friction, the mental churn—that’s the good bit. That’s the joy of the work. It's not just about finding the answer. It’s about becoming the kind of person who can.</p>
<p>But culturally, we’ve stopped valuing that. Everything is a shortcut now. Get slim in 30 days. Learn Spanish in a weekend. Master JavaScript in 4 hours. It’s not just that we want results faster—it’s that we expect them without the process. And AI, for all its usefulness, feeds that hunger. You don’t have to wrestle with the problem anymore. You can just paste it in and move on.</p>
<p>We’re becoming fluent in asking for answers—and losing the part of ourselves that used to seek understanding.</p>
<hr />
<h2 id="heading-the-trap-of-instant-everything">The Trap of Instant Everything</h2>
<p>Today, it’s easier than ever to just ask. Stuck on a bug? Prompt it. Need to refactor something? Paste it. Want to understand a library? Summarize it. Need tests? Generate them.</p>
<p>It’s amazing. “Look mom, I’m a 10x dev.” But it’s also addictive.</p>
<p>AI feels like a productivity gateway drug. You start with the boring stuff—boilerplate, migrations, copy tweaks. Then deadlines tighten. PMs push harder. Suddenly, you're prompting AI to “just build the whole feature” so you can move faster. The culture shifts. The pace accelerates. And somewhere along the way, we stop understanding the code we’re shipping.</p>
<p>“Why did we go with this approach for the email service?” “Uh… not sure. ChatGPT suggested it.”</p>
<p>You laugh the first time. Then it happens again. And again.</p>
<p>The quicker the AI makes us, the more it raises expectations. That becomes the new normal. AI didn’t just speed us up—it rewired how we measure “good work.”</p>
<p>Now it's late in the sprint. The feature spec has changed. Another last-minute tweak has landed. Instead of pushing back, it's easier to say “screw it”—and throw it at an AI agent to implement. Because when speed becomes king, quality becomes optional.</p>
<p>We're not documenting either. No time. No clarity. Just velocity. Even when AI writes the tests, they’re often testing the code it just wrote—without any real human understanding of what’s being tested or why.</p>
<p>I saw a case recently where the AI had mocked <code>console</code>—because the code had a <code>console.error</code> call in it. No one questioned it. The tests passed. Mission accomplished. But the logging was gone, and so was the signal that something had broken.</p>
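<p>Here is a dependency-free sketch of that failure mode (the function and messages are illustrative): the test stays green, while the mock quietly swallows the very signal the log existed to surface:</p>
<pre><code class="lang-typescript">// Sketch of the failure mode: "mocking console" keeps the test green
// while silently discarding the error signal. Names are illustrative.
const swallowed: string[] = [];
const originalError = console.error;
console.error = (msg) => { swallowed.push(String(msg)); }; // the "mock"

function chargeCustomer(): string {
  console.error("payment provider timeout"); // the signal we rely on
  return "ok"; // happy-path value the generated test asserts on
}

const result = chargeCustomer(); // test sees "ok" and passes
console.error = originalError;   // the timeout never surfaced anywhere
</code></pre>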
<p>And my personal favourite? I explicitly prompted it to use Vitest—and it kept spitting out Jest code anyway. Over and over. Like some cursed autocomplete loop. When you’re rushing, that kind of subtle mismatch doesn’t feel minor. It’s infuriating. Because now you’re debugging AI hallucinations just to get back to square one.</p>
<p>And when you bring that up? The response is always the same:</p>
<p>“You just need to get better at prompting.”</p>
<hr />
<h2 id="heading-losing-domain-ownership-one-prompt-at-a-time">Losing Domain Ownership, One Prompt at a Time</h2>
<p>Being a developer has never been about writing lines of code. It’s about architecture. Decisions. Building things that last and that can withstand change. Understanding trade-offs and why things are the way they are. Designing systems that someone else can read six months from now without swearing your name under their breath.</p>
<p>But AI pushes us toward a culture of <em>just ship it</em>. Throw some code at the problem. Move fast. Don’t think too hard. Don’t slow down. Don’t ask why.</p>
<p>The result? No one owns anything anymore.</p>
<blockquote>
<p><em>“Hey X, you’re the one who built the onboarding flow, right?”</em><br /><em>“Kind of… AI helped.”</em><br /><em>“Do you know how this part works?”</em><br /><em>“Not really…”</em></p>
</blockquote>
<p>No one’s the domain expert anymore. We’ve become beholden to AI—less like engineers, more like operators waiting for permission to move.</p>
<p>And worse? It starts to feel normal.</p>
<hr />
<h2 id="heading-hyperfocus-adhd-and-the-ai-loop">Hyperfocus, ADHD, and the AI Loop</h2>
<p>With ADHD, stepping away from a problem is often where clarity emerges. That background processing—where your brain keeps turning the gears even while you’re walking, making tea, or switching tabs—is sacred. Some of my best insights come not from staring harder at the screen, but from stepping away from it.</p>
<p>But AI doesn’t let you step away.</p>
<p>It traps you in a loop of infinite iteration. You rephrase, retry, reprompt. You go deeper instead of backing up. You stay glued to the screen, hoping the next tweak will be the one. You don’t even realize how much time has passed—or how little understanding you’ve actually gained.</p>
<p>The worst case for me? A nasty P1 incident where I was completely on my own. No support. SLA already at risk. The kind of high-stakes moment where thinking space collapses into survival mode. Bombarded with calls.</p>
<p>Before that, my instinct in tough problems was to whiteboard it. Get up, move around, sketch it out, talk it through. But in that P1? I hit the panic button. AI felt faster. More immediate. I started prompting it—desperately—trying to patch the issue and just stop the bleeding.</p>
<p>The real failure wasn’t even technical. It was systemic. Leadership’s answer to increasing demand was to offload people, promise we’d “pull through,” and say the next month would be better. It wasn’t. It never is. What changed was the pressure—cranked up, constant—and suddenly AI wasn’t just a tool, it was a coping mechanism.</p>
<p>And now I see it everywhere. Especially in agencies. Deadlines haven’t gotten more realistic—just more brutal. There’s always another “we need this by EOD” message, another half-scoped project, another impossible timeline where AI is expected to do the heavy lifting while you scramble to stitch things together. I can’t even begin to manage the churn anymore. It’s nonstop. We’re shipping faster, sure—but burning out even faster trying to keep up.</p>
<p>Because this was never about helping us move faster.</p>
<p>It was about squeezing more profit from the same hours.</p>
<p>This isn’t what creative problem-solving is supposed to feel like.</p>
<p>AI’s promise was to help us think. Not to make thinking optional.</p>
<hr />
<h2 id="heading-ai-driven-burnout">AI-Driven Burnout</h2>
<p>All this speed, this output, this urgency—it comes at a cost.</p>
<p>Because the culture forming around AI isn’t thoughtful. It’s frantic. Everyone wants to be first to market. No one wants to be the next Kodak or Blockbuster. So companies panic. Meetings are held. Roadmaps get rewritten. Teams are told to "leverage AI aggressively."</p>
<p>And in that frenzy, developers are being reduced to throughput.</p>
<p>Will the constant pumping of code from AI lead to burnout? Absolutely. Because when code becomes endless and contextless, your job becomes a grind. You’re no longer creating—you’re just shipping.</p>
<p>And who knows—maybe we’re not far from a world where developers are stuck in planning poker with an AI tool, having story points handed to them by some machine that’s never written a line of maintainable code. Some t-shirt sizing tool that confidently assigns deadlines with zero understanding of tech debt, system constraints, or human reality. Maybe Jira is already working on it. Maybe it’ll even tell you you’re behind before you’ve had a chance to think.</p>
<p>I’ve already experienced pushback on time estimates—because AI said it should be faster. It didn’t account for the fact that I was working in a legacy codebase stitched together by a decade of whack-a-mole fixes. The AI never saw the mess. Never debugged the chaos. It just spat out numbers like context didn’t exist.</p>
<p>Worse still, I’ve had people use AI to justify downplaying legitimate issues. One claimed it “wasn’t a big deal” because ChatGPT said it wasn’t client-facing. That’s where we are now—outsourcing our risk assessments to a language model trained to sound confident, not correct. Not understanding threat models. Not thinking in trade-offs. Just vibes.</p>
<hr />
<h2 id="heading-hope-in-the-plateau">Hope in the Plateau</h2>
<p>Here’s the thing: I don’t think this mania will last forever.</p>
<p>AI will inevitably blow a few things up. Teams will chase trends. Things will break. There will be outages—with postmortems full of “AI did it” reflections. There’ll be bad PR when it grabs data from a production database and leaks it through some summarisation tool. There might even be lawsuits as privacy violations creep in. And eventually—eventually—the hype will level out.</p>
<p>Companies will realise that AI is a tool, not a replacement. That using it to draft code is not the same as understanding systems. That “fast” doesn’t mean “good.”</p>
<p>The faster you go, the quicker you burn through your tyres. Fast and good aren’t on the same side of the spectrum—they’re in tension. Move too fast and you miss the nuance. Rush the build and you inherit the bugs. It’s not just about velocity. It’s about direction. And knowing why you’re moving at all.</p>
<p>You can ship fast and break things. Or you can slow down, understand the system, and build something that doesn’t collapse under its own weight a month later.</p>
<p>And sooner or later, people will realise that maybe—just maybe—an API was cheaper than running an LLM that outputs the same generic boilerplate over and over. Especially when it has to be reviewed, rewritten, and recontextualised anyway.</p>
<p>Even worse, it’s not consistent. Ask it for code in one language and it’s halfway decent. Ask it for something in a less common stack, a framework that's moved fast (or worse, a beta), and the hallucinations begin. I’ve asked for code in one testing library and watched it confidently spit out syntax for another. I’ve seen it invent APIs for frameworks that don’t exist. It fakes competence—and you only catch it if you already know what you’re doing.</p>
<p>Most users don’t want to talk to chatbots. Most developers don’t want to talk to AI agents to troubleshoot a bug or find that one obscure config setting. The only place I see chatbot-to-chatbot communication making sense is companies building bots to talk to their own bots. (Which is probably already happening.)</p>
<p>And let's be honest: AI isn’t going to invent React 37. It’s not going to say, “Hey, the way we’re building microservices isn’t working—we need to rethink the entire model.” It won’t spark the next big architectural leap.</p>
<p>Because it can’t.</p>
<p>It doesn’t see gaps. It doesn’t dream. It doesn’t notice repetition and think, “There has to be a better way.” It doesn’t challenge inefficiency or ask, “What if we did this differently?”</p>
<p>It just harvests the thinking of people who do.</p>
<p>Who knows. Maybe AI will settle into the stack like GraphQL did—once we stop trying to use it for absolutely everything, everywhere, all at once.</p>
<p>And maybe—just maybe—we’ll remember what made us want to do this work in the first place.</p>
<hr />
<h2 id="heading-keep-the-soul-in-the-struggle">Keep the Soul in the Struggle</h2>
<p>Use AI. Use it to automate the mundane. Let it lint your code, scaffold your tests, summarise your PRs. That’s what it’s good at—mechanical, repetitive tasks that drain your focus without building your skills.</p>
<p>But don’t let it rob you of the eureka.</p>
<p>Don’t give away the struggle that teaches you something. Don’t sacrifice the context, the ownership, the deep understanding that makes you a real developer—not just someone who happens to write lines of code. Let AI help you, sure. But don’t let it hollow you out.</p>
<p>Because the code you understand is always better than the code you don’t—even if the latter came faster.</p>
<blockquote>
<p>With AI, you gain productivity.<br />But if you're not careful, you lose the learning.<br />And without learning, the craft becomes empty.</p>
</blockquote>
<p>There’s a real use case for AI in rapid prototyping. When you're testing an idea, building a proof-of-concept, or spinning up an MVP to explore a market—AI can help you move fast and break things on purpose. Low stakes, fast feedback, cheap iteration. That’s the kind of playground where AI earns its keep.</p>
<p>But that's not the same as building something that lasts.</p>
<p>And the more we automate, the more blind spots we introduce. We’re already seeing the cracks—prompt injection vulnerabilities, exposed APIs, leaked credentials, unsecured endpoints. AI won’t proactively catch those unless you ask the exact right thing—and even then, the answer might be confidently wrong. The result? A looming boom in cybersecurity. Not because we’re forward-thinking, but because companies will get burned, and the response will be reactive. There will be more breaches, more audits, and more “how did this happen?” meetings that start with: <em>“Well… the AI said it was fine.”</em></p>
<p>And here’s the part that keeps me up: AI always wants to please. It doesn’t challenge unrealistic timelines or bad product decisions. It won’t raise a flag when the design doesn’t make sense. It won’t say, <em>“Are you sure?”</em> It just says “yes,” in a thousand variations.</p>
<p>But real developers? We say <em>why</em>.</p>
<p>We protect the product from shortsighted calls, even when it’s uncomfortable. We ask questions. We resist shortcuts when they come with future costs. We care—not just about shipping, but about <em>what</em> we’re shipping and <em>why</em>.</p>
<p>AI doesn’t.</p>
<p>That’s why we can’t let go of the thinking—even when it’s slower.<br />Especially when it’s slower.</p>
<blockquote>
<p><strong>“AI can write the code. But only you can own the consequences.”</strong></p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[Demystifying JSX: How Babel Really Transforms React Components]]></title><description><![CDATA[Understanding how JSX is transformed into JavaScript is crucial for any React developer aiming to write efficient and maintainable code. Yet, misconceptions abound, leading to confusion about what happens under the hood.
As a developer mentor, I was ...]]></description><link>https://blog.danielphilipjohnson.co.uk/demystifying-jsx-how-babel-really-transforms-react-components</link><guid isPermaLink="true">https://blog.danielphilipjohnson.co.uk/demystifying-jsx-how-babel-really-transforms-react-components</guid><category><![CDATA[React]]></category><category><![CDATA[Babel]]></category><category><![CDATA[JSX]]></category><category><![CDATA[fiber]]></category><dc:creator><![CDATA[Daniel Philip Johnson]]></dc:creator><pubDate>Wed, 13 Nov 2024 19:31:08 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/7RQkwXDTN8M/upload/d18107d2a6b641f14b570732f0be74db.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Understanding how JSX is transformed into JavaScript is crucial for any React developer aiming to write efficient and maintainable code. Yet, misconceptions abound, leading to confusion about what happens under the hood.</strong></p>
<p><strong>As a developer mentor, I was recently approached by some junior team members who were puzzled after reading a blog post that claimed:</strong></p>
<p><em>"When Babel encounters a name starting with a capital letter, it knows it’s dealing with a React component and converts it into a React Fiber object."</em></p>
<p>Their confusion worried me because this statement contains several inaccuracies about how Babel and React work together. Misunderstandings like these can hinder a developer's growth and lead to misconceptions about fundamental concepts.</p>
<p>In this post, I aim to clarify how JSX is actually transformed under the hood by Babel and explain why the claim about creating React Fiber objects during transformation is misleading. More importantly, I want to highlight the importance of critically evaluating the information we come across, especially in rapidly evolving fields like web development. By understanding the true mechanics behind JSX transformation, we can write better code and guide others more effectively.</p>
<h2 id="heading-putting-jsx-transformation-to-the-test-with-babel-repl">Putting JSX Transformation to the Test with Babel REPL</h2>
<p>Before diving into the theoretical aspects of JSX transformation, let's put the assumption to the test. The blog post in question claimed:</p>
<p><em>"When Babel encounters a name starting with a capital letter, it knows it's dealing with a React component and converts it into a React Fiber object."</em></p>
<p>To verify this, we'll experiment with different JSX elements using the <strong>Babel REPL</strong> (Read-Eval-Print Loop), an online tool that allows us to input JSX code and see how Babel transforms it.</p>
<h3 id="heading-testing-babels-jsx-rules-in-action">Testing Babel’s JSX Rules in Action</h3>
<p><strong>Step 1: Prepare the Test Cases</strong></p>
<p>We'll use a variety of JSX elements to cover different scenarios:</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// 1. Basic component variations</span>
<span class="hljs-keyword">const</span> foo = { bar: <span class="hljs-function">() =&gt;</span> <span class="hljs-literal">null</span> };
<span class="hljs-keyword">const</span> Baz = <span class="hljs-function">() =&gt;</span> <span class="hljs-literal">null</span>;

<span class="hljs-comment">// Test cases to compile:</span>
<span class="hljs-keyword">const</span> tests = {
  builtIn: &lt;div&gt;Hello&lt;/div&gt;,
  notHtmlElement: &lt;Div&gt;Hello&lt;/Div&gt;,
  memberExpr: &lt;foo.bar&gt;World&lt;/foo.bar&gt;,
  capitalRef: &lt;Baz&gt;Test&lt;/Baz&gt;,
  nested: &lt;foo.bar.baz&gt;Nested&lt;/foo.bar.baz&gt;,
};
</code></pre>
<p><strong>Explanation of Test Cases:</strong></p>
<ol>
<li><p><strong>builtIn</strong>: <code>&lt;div&gt;Hello&lt;/div&gt;</code> — A standard HTML element.</p>
</li>
<li><p><strong>notHtmlElement</strong>: <code>&lt;Div&gt;Hello&lt;/Div&gt;</code> — An element that looks like an HTML element but is capitalised.</p>
</li>
<li><p><strong>memberExpr</strong>: <code>&lt;foo.bar&gt;World&lt;/foo.bar&gt;</code> — A component accessed via a member expression starting with a lowercase letter.</p>
</li>
<li><p><strong>capitalRef</strong>: <code>&lt;Baz&gt;Test&lt;/Baz&gt;</code> — A component starting with a capital letter.</p>
</li>
<li><p><strong>nested</strong>: <code>&lt;foo.bar.baz&gt;Nested&lt;/foo.bar.baz&gt;</code> — A nested member expression.</p>
</li>
</ol>
<h2 id="heading-decoding-babels-jsx-transformation-results">Decoding Babel’s JSX Transformation Results</h2>
<p>To see how Babel transforms JSX in action, you can try the code directly in the <strong>Babel REPL</strong> or view a preconfigured example <a target="_blank" href="https://babeljs.io/repl#?browsers=defaults%2C%20not%20ie%2011%2C%20not%20ie_mob%2011&amp;build=&amp;builtIns=false&amp;corejs=3.21&amp;spec=false&amp;loose=false&amp;code_lz=PTAEEYDpQIQQwM4EsDGoUHsC2AHDA7AU3wBdQA3OAJyThKQIQChN8EyAzDDUAXlADeoAEbUAXKAAUASj4A-UPgCuAGxWgAvgG4WjMvABefKbN4LlanUxCgAKoXbpED0CR6ZcSFYTG62ZEgcSBGMBJlARJS8SAEl8CQAeABMkcjkACUI1DATgFLSAGnDFDBJ0kiwVAFFvLGISRIARVIyslRzgZsLiuqxhQioqgA8cKkSuDEhRKjkAdQwqFSTcianqOSKIlDgcJBI4FQAlQg5Ewzl7dlzzzcUgwiTx7jWqNYM5ADl75eBV6beNkxtEwgA&amp;debug=false&amp;forceAllTransforms=false&amp;modules=false&amp;shippedProposals=false&amp;evaluate=false&amp;fileSize=false&amp;timeTravel=false&amp;sourceType=module&amp;lineWrap=true&amp;presets=env%2Creact%2Cstage-2&amp;prettier=false&amp;targets=&amp;version=7.26.2&amp;externalPlugins=&amp;assumptions=%7B%7D">here</a>.</p>
<p>If you’d like to try it yourself, copy the test cases below and paste them into the Babel REPL. This setup will transform the JSX code into JavaScript, allowing you to observe the results first-hand:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> tests = {
  builtIn: &lt;div&gt;Hello&lt;/div&gt;,
  notHtmlElement: &lt;Div&gt;Hello&lt;/Div&gt;,
  memberExpr: &lt;foo.bar&gt;World&lt;/foo.bar&gt;,
  capitalRef: &lt;Baz&gt;Test&lt;/Baz&gt;,
  nested: &lt;foo.bar.baz&gt;Nested&lt;/foo.bar.baz&gt;,
};
</code></pre>
<p>When you run the code, Babel will produce output like this:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> tests = {
  builtIn: <span class="hljs-comment">/*#__PURE__*/</span> _jsx(<span class="hljs-string">"div"</span>, { children: <span class="hljs-string">"Hello"</span> }),
  notHtmlElement: <span class="hljs-comment">/*#__PURE__*/</span> _jsx(Div, { children: <span class="hljs-string">"Hello"</span> }),
  memberExpr: <span class="hljs-comment">/*#__PURE__*/</span> _jsx(foo.bar, { children: <span class="hljs-string">"World"</span> }),
  capitalRef: <span class="hljs-comment">/*#__PURE__*/</span> _jsx(Baz, { children: <span class="hljs-string">"Test"</span> }),
  nested: <span class="hljs-comment">/*#__PURE__*/</span> _jsx(foo.bar.baz, { children: <span class="hljs-string">"Nested"</span> }),
};
</code></pre>
<p><strong>Note:</strong> The <code>_jsx</code> function is a helper used by Babel when the "automatic" JSX runtime is enabled. It serves a similar purpose to <code>React.createElement</code>.</p>
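<p>To make the “plain objects” point concrete, here is a minimal, dependency-free sketch (not React’s actual implementation; the helper name is made up) of what a <code>_jsx</code>-style call evaluates to:</p>
<pre><code class="lang-typescript">// Minimal sketch (NOT React's real implementation) of a _jsx-style
// helper. It returns a plain JavaScript object that merely *describes*
// the element. No Fiber node, no DOM work, nothing rendered yet.
function jsxSketch(type: unknown, props: { children?: unknown }) {
  return { type, props };
}

const el = jsxSketch("div", { children: "Hello" });
// el is { type: "div", props: { children: "Hello" } }
</code></pre>
<p>Fiber nodes only come into existence later, at runtime, when React takes descriptions like this and renders them.</p>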
<h2 id="heading-breaking-down-babels-jsx-transformation">Breaking Down Babel’s JSX Transformation</h2>
<p>Let's dissect each transformed element:</p>
<ol>
<li><p><strong>builtIn (</strong><code>&lt;div&gt;Hello&lt;/div&gt;</code>):</p>
<ul>
<li><p><strong>Transformed to:</strong></p>
<pre><code class="lang-jsx">  <span class="hljs-comment">/*#__PURE__*/</span> _jsx(<span class="hljs-string">"div"</span>, { <span class="hljs-attr">children</span>: <span class="hljs-string">"Hello"</span> })
</code></pre>
</li>
<li><p><strong>Analysis:</strong></p>
<ul>
<li><p>The tag is a string <code>"div"</code>, indicating a built-in HTML element.</p>
</li>
<li><p>Babel treats it as a standard DOM element.</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>notHtmlElement (</strong><code>&lt;Div&gt;Hello&lt;/Div&gt;</code>):</p>
<ul>
<li><p><strong>Transformed to:</strong></p>
<pre><code class="lang-jsx">  <span class="hljs-comment">/*#__PURE__*/</span> _jsx(Div, { <span class="hljs-attr">children</span>: <span class="hljs-string">"Hello"</span> })
</code></pre>
</li>
<li><p><strong>Analysis:</strong></p>
<ul>
<li><p>The tag is <code>Div</code>, a variable starting with an uppercase letter.</p>
</li>
<li><p>Even though "Div" resembles the HTML element "div," Babel treats it as a custom component because of the capitalisation.</p>
</li>
<li><p>Babel does not check against a list of valid HTML tags; it relies purely on capitalisation.</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>memberExpr (</strong><code>&lt;foo.bar&gt;World&lt;/foo.bar&gt;</code>):</p>
<ul>
<li><p><strong>Transformed to:</strong></p>
<pre><code class="lang-jsx">  <span class="hljs-comment">/*#__PURE__*/</span> _jsx(foo.bar, { <span class="hljs-attr">children</span>: <span class="hljs-string">"World"</span> })
</code></pre>
</li>
<li><p><strong>Analysis:</strong></p>
<ul>
<li><p>The tag is <code>foo.bar</code>, a member expression.</p>
</li>
<li><p>Despite starting with a lowercase letter, Babel correctly references the variable.</p>
</li>
<li><p>Babel handles member expressions without applying the lowercase check.</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>capitalRef (</strong><code>&lt;Baz&gt;Test&lt;/Baz&gt;</code>):</p>
<ul>
<li><p><strong>Transformed to:</strong></p>
<pre><code class="lang-jsx">  <span class="hljs-comment">/*#__PURE__*/</span> _jsx(Baz, { <span class="hljs-attr">children</span>: <span class="hljs-string">"Test"</span> })
</code></pre>
</li>
<li><p><strong>Analysis:</strong></p>
<ul>
<li><p>The tag is <code>Baz</code>, a variable starting with an uppercase letter.</p>
</li>
<li><p>Babel treats it as a custom React component.</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>nested (</strong><code>&lt;foo.bar.baz&gt;Nested&lt;/foo.bar.baz&gt;</code>):</p>
<ul>
<li><p><strong>Transformed to:</strong></p>
<pre><code class="lang-jsx">  <span class="hljs-comment">/*#__PURE__*/</span> _jsx(foo.bar.baz, { <span class="hljs-attr">children</span>: <span class="hljs-string">"Nested"</span> })
</code></pre>
</li>
<li><p><strong>Analysis:</strong></p>
<ul>
<li><p>The tag is a nested member expression <code>foo.bar.baz</code>.</p>
</li>
<li><p>Babel maintains the full reference, handling complex expressions.</p>
</li>
</ul>
</li>
</ul>
</li>
</ol>
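<p>The pattern across all five cases can be summed up in a tiny decision function. This is a sketch of the rule the output demonstrates, not Babel’s actual code (Babel operates on AST nodes, not tag strings):</p>
<pre><code class="lang-typescript">// Sketch of the rule the five cases above demonstrate.
// (Not Babel's actual code: Babel operates on AST nodes, not strings.)
function jsxTagKind(tag: string): string {
  // Member expressions (foo.bar, foo.bar.baz) are always references,
  // whatever the capitalisation of the first identifier.
  if (tag.includes(".")) return "reference";
  // Simple identifiers: a lowercase first letter means a built-in tag.
  return /^[a-z]/.test(tag) ? "string-literal" : "reference";
}

jsxTagKind("div");         // "string-literal" -> _jsx("div", ...)
jsxTagKind("Div");         // "reference"      -> _jsx(Div, ...)
jsxTagKind("foo.bar");     // "reference"      -> _jsx(foo.bar, ...)
jsxTagKind("foo.bar.baz"); // "reference"      -> _jsx(foo.bar.baz, ...)
</code></pre>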
<h3 id="heading-key-takeaways-from-babels-transformation"><strong>Key Takeaways from Babel’s Transformation</strong></h3>
<ul>
<li><p><strong>Capitalisation and Syntax Determine Treatment:</strong></p>
<ul>
<li><p>For <strong>simple identifiers</strong> (like <code>Div</code> or <code>Baz</code>), Babel relies on the capitalisation of the tag name to decide whether it's a built-in element or a custom component.</p>
<ul>
<li><p><strong>Capitalised identifiers</strong> are treated as custom components.</p>
</li>
<li><p><strong>Lowercase identifiers</strong> are treated as strings representing HTML tags.</p>
</li>
</ul>
</li>
<li><p>For <strong>member expressions</strong> (like <code>foo.bar</code> or <code>foo.bar.baz</code>), Babel treats them as component references regardless of the capitalisation of the initial identifier.</p>
<ul>
<li>The structure of the expression takes precedence over the capitalisation of individual parts.</li>
</ul>
</li>
<li><p>Therefore, <strong>capitalisation is not the sole factor</strong>; the syntax of the tag (whether it's an identifier or a member expression) also influences how Babel transforms it.</p>
</li>
</ul>
</li>
<li><p><strong>No Internal HTML Tag List:</strong></p>
<ul>
<li><p>Babel does not check the tag name against a list of valid HTML elements.</p>
</li>
<li><p>The differentiation is based on syntactic rules rather than knowledge of HTML specifications.</p>
</li>
<li><p>This is evident in the <code>notHtmlElement</code> example, where <code>&lt;Div&gt;</code> is treated as a component simply because it's capitalised.</p>
</li>
</ul>
</li>
<li><p><strong>No React Fiber Objects Are Created:</strong></p>
<ul>
<li><p>The transformation results in calls to helper functions like <code>_jsx</code>, which generate JavaScript objects representing elements.</p>
</li>
<li><p>There is no evidence of React Fiber objects being created during this process.</p>
</li>
<li><p><strong>React Fiber</strong> is part of React's runtime internals, handling rendering and reconciliation, and is not involved in Babel's compile-time transformations.</p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-debunking-the-misconception-about-jsx-transformation">Debunking the Misconception About JSX Transformation</h3>
<p>This analysis further refutes the blog's claim:</p>
<ul>
<li><p><strong>Babel Does Not Create React Fiber Objects:</strong></p>
<ul>
<li><p>The transformation process outputs JavaScript function calls and object literals, not React Fiber structures.</p>
</li>
<li><p>React Fiber operates at runtime within the React library, not at the compile-time stage handled by Babel.</p>
</li>
</ul>
</li>
<li><p><strong>Capitalisation Is a Syntax-Based Decision:</strong></p>
<ul>
<li><p>Babel's treatment of tags is based on syntactic rules defined in the JSX specification.</p>
</li>
<li><p>It does not possess any knowledge of which tags are valid HTML elements or actual React components.</p>
</li>
<li><p>The differentiation between strings and variables in the transformed code is based on whether the tag is a lowercase string or an uppercase identifier.</p>
</li>
</ul>
</li>
</ul>
<h2 id="heading-how-babel-really-handles-jsx-a-source-code-dive"><strong>How Babel Really Handles JSX: A Source Code Dive</strong></h2>
<p>To <strong>drive the point home</strong>, let's look at the actual Babel source code responsible for transforming JSX. By doing so, we can see precisely how Babel differentiates between built-in HTML elements and custom React components, and confirm that it does not create React Fiber objects during this process.</p>
<h3 id="heading-locating-the-relevant-code"><strong>Locating the Relevant Code</strong></h3>
<p>The JSX transformation is handled by Babel's <code>@babel/plugin-transform-react-jsx</code> plugin. Specifically, the transformation logic is found in the file <a target="_blank" href="https://github.com/babel/babel/blob/master/packages/babel-plugin-transform-react-jsx/src/transform-automatic.js"><code>transform-automatic.js</code></a>.</p>
<h3 id="heading-the-iscompattag-function"><strong>The</strong> <code>isCompatTag</code> Function</h3>
<p>In the transformation process, Babel uses the <a target="_blank" href="https://github.com/babel/babel/blob/38d26cd5eeb66b697671cfb8c78f963f02992073/packages/babel-types/src/validators/react/isCompatTag.ts#L3"><code>isCompatTag</code></a> function to determine whether a JSX tag should be treated as a built-in element (like an HTML tag) or as a component reference. Here's the relevant part of the code:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> args = state.args;
<span class="hljs-keyword">if</span> (t.react.isCompatTag(tagName)) {
  <span class="hljs-comment">// Handle as built-in element</span>
  args.push(t.stringLiteral(tagName));
} <span class="hljs-keyword">else</span> {
  <span class="hljs-comment">// Handle as component reference</span>
  args.push(state.tagExpr);
}
</code></pre>
<p>The <code>isCompatTag</code> function is defined as follows:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">isCompatTag</span>(<span class="hljs-params">tagName?: <span class="hljs-built_in">string</span></span>): <span class="hljs-title">boolean</span> </span>{
  <span class="hljs-comment">// Must start with a lowercase ASCII letter</span>
  <span class="hljs-keyword">return</span> !!tagName &amp;&amp; <span class="hljs-regexp">/^[a-z]/</span>.test(tagName);
}
</code></pre>
<h3 id="heading-understanding-iscompattag"><strong>Understanding</strong> <code>isCompatTag</code></h3>
<p>Let's break down what this function does:</p>
<ul>
<li><p><strong>Logic</strong>:</p>
<ul>
<li><p><code>!!tagName</code> ensures that <code>tagName</code> is truthy, filtering out <code>undefined</code>, <code>null</code>, and the empty string.</p>
</li>
<li><p><code>/^[a-z]/.test(tagName)</code> checks if the first character of <code>tagName</code> is a lowercase ASCII letter (from 'a' to 'z').</p>
</li>
</ul>
</li>
<li><p><strong>Return Value</strong>:</p>
<ul>
<li><p><code>true</code>: If <code>tagName</code> starts with a lowercase letter, indicating a built-in element.</p>
</li>
<li><p><code>false</code>: If <code>tagName</code> does not start with a lowercase letter, indicating a custom component or a member expression.</p>
</li>
</ul>
</li>
</ul>
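<p>To see these rules in action, here is a small standalone sketch. It mirrors the <code>isCompatTag</code> logic shown above (restated with an equivalent early return rather than Babel's verbatim source), and the <code>my-element</code> case is my own addition, showing that custom-element tags also start lowercase:</p>
<pre><code class="lang-typescript">// Mirrors the isCompatTag logic shown above: a tag is a "compat" (built-in)
// tag only if it starts with a lowercase ASCII letter.
function isCompatTag(tagName) {
  if (!tagName) return false; // undefined, null, or empty string
  return /^[a-z]/.test(tagName);
}

console.log(isCompatTag('div'));        // true: compiled to the string "div"
console.log(isCompatTag('Div'));        // false: compiled to the identifier Div
console.log(isCompatTag('my-element')); // true: custom elements start lowercase too
console.log(isCompatTag(undefined));    // false: no tag name at all
</code></pre>
<p>Running this against your own tag names is a quick way to predict what Babel will emit before opening the REPL.</p>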
<h3 id="heading-implications-of-iscompattag"><strong>Implications of</strong> <code>isCompatTag</code></h3>
<ul>
<li><p><strong>Built-in Elements</strong>:</p>
<ul>
<li><p>Tags like <code>&lt;div&gt;</code>, <code>&lt;span&gt;</code>, <code>&lt;button&gt;</code>, which start with lowercase letters.</p>
</li>
<li><p>Babel transforms them into <code>React.createElement('div', ...)</code>, using the tag name as a string.</p>
</li>
</ul>
</li>
<li><p><strong>Custom Components</strong>:</p>
<ul>
<li><p>Tags like <code>&lt;MyComponent&gt;</code>, <code>&lt;App&gt;</code>, which start with uppercase letters.</p>
</li>
<li><p>Babel transforms them into <code>React.createElement(MyComponent, ...)</code>, using the component reference.</p>
</li>
</ul>
</li>
<li><p><strong>Member Expressions</strong>:</p>
<ul>
<li><p>Tags like <code>&lt;foo.bar&gt;</code>, <code>&lt;this.props.component&gt;</code>.</p>
</li>
<li><p>Since they are not simple identifiers, <code>isCompatTag</code> does not apply, and Babel treats them as component references regardless of the capitalisation of individual parts.</p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-demonstrating-with-code-examples"><strong>Demonstrating with Code Examples</strong></h3>
<p>Let's revisit the test cases from our earlier experiment to see how Babel uses <code>isCompatTag</code>:</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// 1. Basic component variations</span>
<span class="hljs-keyword">const</span> foo = { bar: <span class="hljs-function">() =&gt;</span> <span class="hljs-literal">null</span> };
<span class="hljs-keyword">const</span> Baz = <span class="hljs-function">() =&gt;</span> <span class="hljs-literal">null</span>;

<span class="hljs-comment">// Test cases to compile:</span>
<span class="hljs-keyword">const</span> tests = {
  builtIn: &lt;div&gt;Hello&lt;/div&gt;,
  notHtmlElement: &lt;Div&gt;Hello&lt;/Div&gt;,
  memberExpr: &lt;foo.bar&gt;World&lt;/foo.bar&gt;,
  capitalRef: &lt;Baz&gt;Test&lt;/Baz&gt;,
  nested: &lt;foo.bar.baz&gt;Nested&lt;/foo.bar.baz&gt;,
};
</code></pre>
<p><strong>Transformed Output:</strong></p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> tests = {
  builtIn: <span class="hljs-comment">/*#__PURE__*/</span> _jsx(<span class="hljs-string">"div"</span>, { children: <span class="hljs-string">"Hello"</span> }),
  notHtmlElement: <span class="hljs-comment">/*#__PURE__*/</span> _jsx(Div, { children: <span class="hljs-string">"Hello"</span> }),
  memberExpr: <span class="hljs-comment">/*#__PURE__*/</span> _jsx(foo.bar, { children: <span class="hljs-string">"World"</span> }),
  capitalRef: <span class="hljs-comment">/*#__PURE__*/</span> _jsx(Baz, { children: <span class="hljs-string">"Test"</span> }),
  nested: <span class="hljs-comment">/*#__PURE__*/</span> _jsx(foo.bar.baz, { children: <span class="hljs-string">"Nested"</span> }),
};
</code></pre>
<p>Let's analyse each case in the context of <code>isCompatTag</code>:</p>
<ol>
<li><p><code>builtIn</code>:</p>
<ul>
<li><p><strong>Tag</strong>: <code>&lt;div&gt;</code></p>
</li>
<li><p><strong>Tag Name</strong>: <code>'div'</code></p>
</li>
<li><p><code>isCompatTag('div')</code> returns <code>true</code>.</p>
</li>
<li><p><strong>Transformation</strong>: <code>_jsx("div", { ... })</code>, with the tag name as a string.</p>
</li>
<li><p><strong>Explanation</strong>: Since the tag name starts with a lowercase letter, Babel treats it as a built-in element.</p>
</li>
</ul>
</li>
<li><p><code>notHtmlElement</code>:</p>
<ul>
<li><p><strong>Tag</strong>: <code>&lt;Div&gt;</code></p>
</li>
<li><p><strong>Tag Name</strong>: <code>'Div'</code></p>
</li>
<li><p><code>isCompatTag('Div')</code> returns <code>false</code>.</p>
</li>
<li><p><strong>Transformation</strong>: <code>_jsx(Div, { ... })</code>, with the component reference.</p>
</li>
<li><p><strong>Explanation</strong>: Although "Div" resembles an HTML element, the capitalisation leads Babel to treat it as a custom component.</p>
</li>
</ul>
</li>
<li><p><code>memberExpr</code>:</p>
<ul>
<li><p><strong>Tag</strong>: <code>&lt;foo.bar&gt;</code></p>
</li>
<li><p><strong>Tag Name</strong>: <code>null</code> (since it's not a simple identifier)</p>
</li>
<li><p><code>isCompatTag</code> is <strong>not applicable</strong>.</p>
</li>
<li><p><strong>Transformation</strong>: <code>_jsx(foo.bar, { ... })</code>, with the member expression.</p>
</li>
<li><p><strong>Explanation</strong>: Member expressions are treated as component references, and <code>isCompatTag</code> is not used.</p>
</li>
</ul>
</li>
<li><p><code>capitalRef</code>:</p>
<ul>
<li><p><strong>Tag</strong>: <code>&lt;Baz&gt;</code></p>
</li>
<li><p><strong>Tag Name</strong>: <code>'Baz'</code></p>
</li>
<li><p><code>isCompatTag('Baz')</code> returns <code>false</code>.</p>
</li>
<li><p><strong>Transformation</strong>: <code>_jsx(Baz, { ... })</code>, with the component reference.</p>
</li>
<li><p><strong>Explanation</strong>: Starts with an uppercase letter, so Babel treats it as a custom component.</p>
</li>
</ul>
</li>
<li><p><code>nested</code>:</p>
<ul>
<li><p><strong>Tag</strong>: <code>&lt;foo.bar.baz&gt;</code></p>
</li>
<li><p><strong>Tag Name</strong>: <code>null</code> (it's a nested member expression)</p>
</li>
<li><p><code>isCompatTag</code> is <strong>not applicable</strong>.</p>
</li>
<li><p><strong>Transformation</strong>: <code>_jsx(foo.bar.baz, { ... })</code>, with the nested member expression.</p>
</li>
<li><p><strong>Explanation</strong>: Babel handles nested member expressions as component references.</p>
</li>
</ul>
</li>
</ol>
<h3 id="heading-confirming-babels-transformation-logic"><strong>Confirming Babel's Transformation Logic</strong></h3>
<p>By examining the <code>isCompatTag</code> function and the transformed output, we can conclude:</p>
<ol>
<li><p><strong>Babel Uses Simple Syntax Rules</strong>:</p>
<ul>
<li><p>The decision is based on whether the tag is a simple identifier starting with a lowercase letter.</p>
</li>
<li><p>There is no complex analysis or interaction with React's internals or HTML specifications.</p>
</li>
</ul>
</li>
<li><p><strong>No Creation of React Fiber Objects</strong>:</p>
<ul>
<li><p>The transformation results in calls to helper functions like <code>_jsx</code>, which generate JavaScript objects or function calls.</p>
</li>
<li><p>React Fiber is a runtime implementation detail within React and is not involved in Babel's compile-time transformation.</p>
</li>
</ul>
</li>
<li><p><strong>Member Expressions Are Treated as Component References</strong>:</p>
<ul>
<li><p>Babel treats member expressions as component references, regardless of the capitalisation of the individual identifiers within them.</p>
</li>
<li><p>The <code>isCompatTag</code> function does not apply to member expressions since they are not simple tag names.</p>
</li>
</ul>
</li>
</ol>
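<p>To make the compile-time/runtime boundary concrete, here is a deliberately simplified, hypothetical stand-in for the <code>_jsx</code> helper. It is <em>not</em> React's actual implementation (the real <code>react/jsx-runtime</code> also handles keys, dev-mode checks, and a <code>$$typeof</code> tag), but it shows the essential point: the compiled output evaluates to a plain JavaScript object describing an element, with no Fiber in sight:</p>
<pre><code class="lang-typescript">// Hypothetical sketch of a jsx-runtime helper: the result is a plain
// object literal, not a Fiber node.
function _jsx(type, props) {
  return { type, props };
}

// Roughly what the compiled form of &lt;div className="box"&gt;Hello&lt;/div&gt;
// evaluates to at runtime:
const element = _jsx('div', { className: 'box', children: 'Hello' });

console.log(element.type);   // 'div'
console.log(typeof element); // 'object'
// Fibers are created later, when React renders this element description.
</code></pre>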
<h3 id="heading-linking-back-to-the-misconception"><strong>Linking Back to the Misconception</strong></h3>
<p>The claim that "Babel converts capitalised names into React Fiber objects" is clearly refuted by the source code analysis:</p>
<ul>
<li><p><strong>Babel's Role</strong>:</p>
<ul>
<li><p><strong>Syntax Transformation</strong>: Babel's responsibility is to convert JSX syntax into standard JavaScript code, following syntactic rules.</p>
</li>
<li><p><strong>No Runtime Behaviour</strong>: Babel operates at compile-time and does not create or manipulate runtime objects like React Fiber nodes.</p>
</li>
</ul>
</li>
<li><p><strong>React Fiber</strong>:</p>
<ul>
<li><p><strong>Runtime Concern</strong>: React Fiber is part of React's internal rendering mechanism, dealing with reconciliation and rendering.</p>
</li>
<li><p><strong>Not Involved in Compilation</strong>: Babel has no awareness of React's runtime implementations and does not generate Fiber objects during transformation.</p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-key-takeaways"><strong>Key Takeaways</strong></h3>
<ul>
<li><p><strong>Capitalisation Matters for Simple Identifiers</strong>:</p>
<ul>
<li><p>For tags that are simple identifiers, Babel uses capitalisation to decide whether it's a built-in element or a custom component.</p>
</li>
<li><p>Lowercase starts indicate built-in elements, while uppercase starts indicate custom components.</p>
</li>
</ul>
</li>
<li><p><strong>Member Expressions Are Handled Differently</strong>:</p>
<ul>
<li><p>Member expressions and other complex tag expressions are treated as component references, bypassing the <code>isCompatTag</code> check.</p>
</li>
<li><p>This allows for the use of components accessed via objects or props, regardless of capitalisation.</p>
</li>
</ul>
</li>
<li><p><strong>Babel's Transformation Is Syntax-Based</strong>:</p>
<ul>
<li><p>Babel relies on syntactic cues rather than semantic understanding.</p>
</li>
<li><p>It does not verify tag names against a list of HTML elements or React components.</p>
</li>
</ul>
</li>
</ul>
<p>By examining Babel's source code and understanding the <code>isCompatTag</code> function, we've confirmed that Babel's transformation logic is based on simple syntactic rules, primarily the capitalisation of simple tag names. This analysis dispels the misconception that Babel creates React Fiber objects during JSX transformation.</p>
<p>Now that we've delved into both practical examples and the underlying source code, we have a comprehensive understanding of how Babel transforms JSX and how it distinguishes between built-in elements and custom components. This clarity reinforces the importance of syntax in JSX and the distinct roles that Babel and React play in the development process.</p>
<h2 id="heading-addressing-misconceptions-and-concluding-insights"><strong>Addressing Misconceptions and Concluding Insights</strong></h2>
<p>Through experiments and code analysis, we've clarified the key misconceptions presented in the original blog post. Understanding these nuances helps illuminate the distinct roles of Babel and React in JSX transformation and rendering processes.</p>
<h3 id="heading-misconception-1-babel-creates-react-fiber-objects"><strong>Misconception 1: Babel Creates React Fiber Objects</strong></h3>
<p><strong>Claim:</strong> <em>"When Babel encounters a name starting with a capital letter, it knows it's dealing with a React component and converts it into a React Fiber object."</em></p>
<p><strong>Clarification:</strong></p>
<ul>
<li><p><strong>Babel's Role</strong>: Babel is a compile-time tool that transforms JSX syntax into plain JavaScript function calls (e.g., <code>_jsx</code>, <code>React.createElement</code>). It doesn’t execute code or manage runtime objects like React Fiber.</p>
</li>
<li><p><strong>React Fiber</strong>: Fiber is part of React’s internal runtime system for efficient reconciliation and rendering, created only when React processes the component tree at runtime—not during Babel’s compile-time transformations.</p>
</li>
</ul>
<p><strong>Conclusion</strong>: This clear separation of compile-time and runtime responsibilities shows that Babel merely transforms code without affecting runtime data structures, such as React Fiber objects, which are managed internally by React during rendering.</p>
<h3 id="heading-misconception-2-capitalisation-rules-in-babels-jsx-transformation"><strong>Misconception 2: Capitalisation Rules in Babel's JSX Transformation</strong></h3>
<p><strong>Claim:</strong> <em>"When Babel encounters a name starting with a capital letter, it knows it's dealing with a React component..."</em></p>
<p><strong>Clarification:</strong></p>
<ul>
<li><p><strong>Syntax-Based Transformation</strong>: Babel follows capitalisation conventions for simple tags, treating lowercase tags as HTML elements and uppercase tags as components. For member expressions (e.g., <code>foo.bar</code>), Babel handles them as component references, regardless of capitalisation.</p>
</li>
<li><p><strong>Example</strong>:</p>
<pre><code class="lang-typescript">  <span class="hljs-keyword">const</span> foo = { bar: <span class="hljs-function">() =&gt;</span> &lt;div&gt;Bar&lt;/div&gt; };
  <span class="hljs-keyword">const</span> element = &lt;foo.bar /&gt;;
</code></pre>
<p><strong>Transformation</strong>:</p>
<pre><code class="lang-typescript">  <span class="hljs-keyword">const</span> element = <span class="hljs-comment">/*#__PURE__*/</span> _jsx(foo.bar, {});
</code></pre>
<p><strong>Explanation</strong>: Babel recognises <code>foo.bar</code> as a component reference, bypassing capitalisation checks for member expressions.</p>
</li>
</ul>
<p><strong>Conclusion</strong>: The capitalisation rule applies only to simple identifiers, not member expressions. This distinction prevents misinterpretation of Babel’s JSX transformation behaviour.</p>
<h2 id="heading-final-thoughts-and-best-practices"><strong>Final Thoughts and Best Practices</strong></h2>
<p>By clarifying these misconceptions, we’ve highlighted the following key distinctions:</p>
<ol>
<li><p><strong>Babel’s Responsibility</strong>:</p>
<ul>
<li><p><strong>Syntax Transformation</strong>: Converts JSX into JavaScript function calls based on syntactic rules.</p>
</li>
<li><p><strong>No Runtime Data Structures</strong>: Babel does not create runtime objects like React Fiber nodes; it transforms syntax only.</p>
</li>
</ul>
</li>
<li><p><strong>React’s Responsibility</strong>:</p>
<ul>
<li><strong>Runtime Rendering</strong>: Manages component lifecycle, state, and rendering using the Fiber architecture, instantiated at runtime.</li>
</ul>
</li>
<li><p><strong>Compile-Time vs. Runtime</strong>:</p>
<ul>
<li><p><strong>Compile-Time (Babel)</strong>: Focused on transforming syntax with no side effects.</p>
</li>
<li><p><strong>Runtime (React)</strong>: Handles component reconciliation and rendering tasks, creating and managing Fiber nodes internally.</p>
</li>
</ul>
</li>
<li><p><strong>Syntactic Conventions in JSX</strong>:</p>
<ul>
<li><p><strong>Capitalisation Rules</strong>: Capitalisation helps differentiate between HTML elements and custom components but isn’t an enforced rule for member expressions.</p>
</li>
<li><p><strong>Member Expressions</strong>: Treated as component references regardless of capitalisation, enhancing component organisation and flexibility.</p>
</li>
</ul>
</li>
</ol>
<h2 id="heading-moving-forward-a-developers-toolkit-for-truth"><strong>Moving Forward: A Developer's Toolkit for Truth</strong></h2>
<p>In our exploration of JSX transformation, we clarified that Babel’s role is purely syntactic and does not involve creating React Fiber objects. This distinction not only dispels common misconceptions but also underscores the importance of carefully verifying technical claims.</p>
<p>In a field where tools and best practices evolve rapidly, and where AI-generated content becomes increasingly common, the ability to verify and validate becomes as crucial as technical knowledge itself.</p>
<p><strong>Here’s a practical approach to navigating and understanding technical claims:</strong></p>
<ol>
<li><p><strong>Check the Source</strong>: Dive into implementations, like we did with Babel’s <code>isCompatTag</code>, consult official documentation, and explore commit histories.</p>
</li>
<li><p><strong>Test Assumptions</strong>: Use tools like Babel REPL, create test cases, and validate claims hands-on.</p>
</li>
<li><p><strong>Question the Narrative</strong>: Consider whether the claim accurately reflects how the technology functions in practice and aligns with its intended purpose, rather than assuming explanations are complete or universally applicable.</p>
</li>
</ol>
<h2 id="heading-final-thoughts"><strong>Final Thoughts</strong></h2>
<p>With these practices, the next time you encounter a technical explanation, you’ll be equipped to:</p>
<ul>
<li><p>Test it yourself,</p>
</li>
<li><p>Check the source code,</p>
</li>
<li><p>Question assumptions,</p>
</li>
<li><p>Share your findings.</p>
</li>
</ul>
<p>In the end, true understanding doesn’t come from accepting explanations; it comes from verifying them. This approach strengthens your skills and helps you confidently guide others in a complex, ever-evolving field.</p>
<hr />
<h3 id="heading-about-the-author"><strong>About the Author</strong></h3>
<p>If you’d like to explore more of my approach and connect with me, here's a bit about who I am:</p>
<p><strong>I'm Daniel Philip Johnson</strong>, a senior engineer specialising in frontend development and architecture. I’m passionate about simplifying complex challenges and building scalable, innovative solutions. To explore more of my work, visit my <a target="_blank" href="https://danielphilipjohnson.com/about/">personal website</a>, or connect with me on <a target="_blank" href="https://uk.linkedin.com/in/daniel-philip-johnson">LinkedIn</a> for insights on tech leadership and the future of frontend development.</p>
]]></content:encoded></item><item><title><![CDATA[From Linear to Systems Thinking: Solving Complex Tech Challenges]]></title><description><![CDATA[Introduction
Former President Barack Obama once remarked, “In my job, I wind up dealing with problems that are both messy and complicated. By the time a problem reaches my desk, it’s one that nobody else has been able to solve.” This quote highlights...]]></description><link>https://blog.danielphilipjohnson.co.uk/from-linear-to-systems-thinking-solving-complex-tech-challenges</link><guid isPermaLink="true">https://blog.danielphilipjohnson.co.uk/from-linear-to-systems-thinking-solving-complex-tech-challenges</guid><category><![CDATA[System Architecture]]></category><category><![CDATA[System Thinking]]></category><dc:creator><![CDATA[Daniel Philip Johnson]]></dc:creator><pubDate>Wed, 16 Oct 2024 18:00:12 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1729026416877/c12aa690-a47f-4f22-8500-b8f84a9418a0.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction"><strong>Introduction</strong></h1>
<p>Former President Barack Obama once remarked, <a target="_blank" href="https://open.spotify.com/episode/7webPHEy75rgM8hpb3LXfZ">“In my job, I wind up dealing with problems that are both messy and complicated. By the time a problem reaches my desk, it’s one that nobody else has been able to solve.”</a> This quote highlights a critical reality faced by many in technology today: the most challenging problems are often complex, lacking clear-cut solutions. As technology evolves, so too does the complexity of the challenges we face.</p>
<p>Traditional problem-solving methods, like linear thinking, often fall short when dealing with these intricate issues. Instead, systems thinking—a holistic approach—offers a more effective way to navigate and resolve them. In this blog, we’ll explore the journey from linear to systems thinking, focusing on why it’s crucial for anyone involved in the tech industry. By the end, you’ll see how embracing systems thinking can help you create better solutions, spark innovation, and improve team dynamics in your work.</p>
<h2 id="heading-1-understanding-linear-thinking"><strong>1. Understanding Linear Thinking</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729026445411/f6919ec3-81cb-4192-af65-51a6f6314060.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-a-what-is-linear-thinking"><strong>a. What is Linear Thinking?</strong></h3>
<p>Linear thinking is a simple, step-by-step way of solving problems. Think of it as following a recipe: you start at point A, follow the instructions in order, and arrive at point B. Each step builds logically on the previous one, and there’s a clear cause-and-effect relationship throughout the process.</p>
<h3 id="heading-b-examples-of-linear-thinking-in-tech"><strong>b. Examples of Linear Thinking in Tech</strong></h3>
<ul>
<li><p><strong>Coding Logic</strong>: In coding, developers often use “if-then” statements, where one action leads to a specific result. For example, “If this happens, then that will happen.”</p>
</li>
<li><p><strong>Project Planning</strong>: Traditional project management follows a linear sequence of tasks. One task is completed before moving on to the next, assuming each step must be done in a certain order.</p>
</li>
</ul>
<h3 id="heading-c-why-is-linear-thinking-common"><strong>c. Why is Linear Thinking Common?</strong></h3>
<p>From an early age, we are taught to break down problems into smaller, manageable parts—this method is called reductionism. It simplifies complex issues by focusing on individual components rather than the whole.</p>
<h3 id="heading-d-benefits-of-linear-thinking"><strong>d. Benefits of Linear Thinking</strong></h3>
<ul>
<li><p><strong>Predictability</strong>: Linear thinking gives a clear roadmap, making outcomes easier to predict.</p>
</li>
<li><p><strong>Simplicity</strong>: It’s easy to communicate and understand, which helps with planning and execution.</p>
</li>
<li><p><strong>Effective for Simple Problems</strong>: For straightforward issues, linear thinking works well.</p>
</li>
</ul>
<h2 id="heading-2-limitations-of-linear-thinking"><strong>2. Limitations of Linear Thinking</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729026468760/b15f03e8-845f-4e04-ad51-819e0471b7d1.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-a-missing-the-bigger-picture"><strong>a. Missing the Bigger Picture</strong></h3>
<p>When we focus only on individual components, we risk missing how everything fits together. This can hurt the overall system. For example, optimizing one feature of a product without considering the user experience as a whole can lead to a disjointed result.</p>
<h3 id="heading-b-reductionism-vs-holism"><strong>b. Reductionism vs. Holism</strong></h3>
<ul>
<li><p><strong>Reductionism</strong>: This approach breaks down complex systems into smaller parts, assuming that understanding each part leads to understanding the whole.</p>
</li>
<li><p><strong>The Challenge</strong>: In complex systems, reductionism can overlook how parts interact with each other. A system’s behavior often emerges from the relationships between its parts, which aren’t visible when looking at individual components.</p>
</li>
<li><p><strong>Holism</strong>: Holism, on the other hand, focuses on the entire system, understanding how different elements work together.</p>
</li>
</ul>
<h3 id="heading-c-impact-on-complex-systems"><strong>c. Impact on Complex Systems</strong></h3>
<ul>
<li><p><strong>Software Systems</strong>: Changing one part of a large software system might affect other areas in unexpected ways. Linear thinking might not catch these ripple effects.</p>
</li>
<li><p><strong>Organizational Impact</strong>: Implementing a new policy without considering how it affects different departments can lead to operational issues.</p>
</li>
<li><p><strong>Unpredictability</strong>: In systems with many interacting components, linear models often fail to predict outcomes accurately.</p>
</li>
</ul>
<h2 id="heading-3-introduction-to-systems-thinking"><strong>3. Introduction to Systems Thinking</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729026535021/2285612c-12da-417f-8c1b-c3b030583277.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-a-what-is-systems-thinking"><strong>a. What is Systems Thinking?</strong></h3>
<p>Systems thinking is a way of understanding a system by looking at how all the parts are connected. Instead of isolating individual components, it focuses on relationships, patterns, and interactions. This holistic view helps to grasp the complexity of the system as a whole.</p>
<h3 id="heading-b-linear-vs-nonlinear-thinking"><strong>b. Linear vs. Nonlinear Thinking</strong></h3>
<ul>
<li><p><strong>Linear Thinking</strong>: Think of it as following a straight road—each step leads logically to the next.</p>
</li>
<li><p><strong>Nonlinear Thinking</strong>: It’s more like navigating a maze, where small changes can lead to unexpected outcomes. Nonlinear systems are less predictable, and a minor adjustment in one area can cause significant effects elsewhere.</p>
</li>
</ul>
<h3 id="heading-c-why-its-important-in-technology"><strong>c. Why It’s Important in Technology</strong></h3>
<p>In tech, we often work with complex systems where different components interact in unpredictable ways. Systems thinking is essential because it helps to:</p>
<ul>
<li><p><strong>Understand Interactions</strong>: See how different elements influence each other.</p>
</li>
<li><p><strong>Encourage Innovation</strong>: Look beyond the obvious to find new solutions.</p>
</li>
<li><p><strong>Adapt to Change</strong>: Systems thinking helps anticipate and respond to changes in the system more effectively.</p>
</li>
</ul>
<h2 id="heading-4-nonlinear-thinking-in-practice"><strong>4. Nonlinear Thinking in Practice</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729026551483/17e07339-2ee8-404d-b78e-5a2d77c70e99.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-a-nonlinear-relationships"><strong>a. Nonlinear Relationships</strong></h3>
<p>In nonlinear systems, cause and effect don’t always match up in a predictable way. Small changes in one area can lead to major shifts elsewhere. Understanding these relationships is critical for problem-solving.</p>
<h3 id="heading-b-examples-in-technology"><strong>b. Examples in Technology</strong></h3>
<ul>
<li><p><strong>Software Development</strong>: Fixing a small bug in one part of the code can unexpectedly cause issues elsewhere in the system.</p>
</li>
<li><p><strong>Network Systems</strong>: An increase in users can create network congestion, slowing down the system or causing crashes.</p>
</li>
<li><p><strong>Cybersecurity</strong>: Patching one vulnerability might open up another if the overall system isn’t considered.</p>
</li>
<li><p><strong>Artificial Intelligence</strong>: AI models trained on specific data can behave unpredictably when exposed to new information.</p>
</li>
<li><p><strong>Social Media Algorithms</strong>: Tweaking an algorithm can unexpectedly change what content goes viral, altering user engagement and platform dynamics.</p>
</li>
</ul>
<h3 id="heading-c-the-ripple-effect"><strong>c. The Ripple Effect</strong></h3>
<p>Minor actions can have far-reaching consequences. For example, a small change to a website’s user interface might confuse users, leading to decreased engagement and lower revenue. Understanding these ripple effects is a key part of systems thinking.</p>
<h2 id="heading-5-key-principles-of-systems-thinking"><strong>5. Key Principles of Systems Thinking</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729026682438/681442e8-c26c-42f1-842b-465b927c90b7.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-a-seeing-the-whole"><strong>a. Seeing the Whole</strong></h3>
<p>It’s important to understand how each component fits within the system. This perspective ensures that changes benefit the entire system, rather than optimizing just one part.</p>
<h3 id="heading-b-interconnections"><strong>b. Interconnections</strong></h3>
<ul>
<li><p><strong>Exploring Relationships</strong>: Look at how different elements interact. For instance, a software update might improve performance but cause compatibility issues with other applications.</p>
</li>
<li><p><strong>Example</strong>: When a new feature is added, how does it affect the overall user experience? Thinking holistically helps avoid unintended consequences.</p>
</li>
</ul>
<h3 id="heading-c-patterns-over-time"><strong>c. Patterns Over Time</strong></h3>
<ul>
<li><p><strong>Identifying Trends</strong>: Look for patterns and recurring issues over time. These can highlight deeper problems that need to be addressed.</p>
</li>
<li><p><strong>Data Analytics</strong>: Monitoring system performance and user behavior can help predict future trends.</p>
</li>
</ul>
<h3 id="heading-d-feedback-loops"><strong>d. Feedback Loops</strong></h3>
<ul>
<li><p><strong>Positive Feedback Loops</strong>: These amplify changes. For example, more users can attract even more users due to network effects.</p>
</li>
<li><p><strong>Negative Feedback Loops</strong>: These help maintain stability. For example, load balancing prevents server overload.</p>
</li>
<li><p><strong>Example</strong>: User feedback leads to product improvements, which in turn boosts user satisfaction and generates more feedback.</p>
</li>
</ul>
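<p>As a developer-flavoured illustration of the two loop types, here is a toy simulation with made-up numbers (the growth rate, target load, and correction factor are all arbitrary):</p>
<pre><code class="lang-typescript">// Positive feedback: each round, existing users attract 10% more users,
// so growth compounds instead of rising linearly.
let users = 1000;
for (let week = 0; week !== 10; week++) {
  users = Math.round(users * 1.1);
}
console.log(users); // far above the linear guess of 1000 + 10 * 100

// Negative feedback: a load balancer repeatedly nudges the current load
// toward a target, so the system settles instead of running away.
const target = 70; // arbitrary target CPU %
let load = 100;
for (let tick = 0; tick !== 50; tick++) {
  load = load + 0.5 * (target - load); // move halfway toward the target
}
console.log(Math.round(load)); // settles at the target, 70
</code></pre>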
<h3 id="heading-e-embracing-complexity-and-uncertainty"><strong>e. Embracing Complexity and Uncertainty</strong></h3>
<ul>
<li><p><strong>Acceptance</strong>: Not all variables can be controlled or predicted.</p>
</li>
<li><p><strong>Preparation</strong>: Have flexible strategies to adapt to unexpected changes.</p>
</li>
</ul>
<h3 id="heading-f-collaborative-approach"><strong>f. Collaborative Approach</strong></h3>
<ul>
<li><p><strong>Cross-Functional Teams</strong>: Work with other departments to get diverse perspectives.</p>
</li>
<li><p><strong>Communication</strong>: Sharing insights helps teams understand the bigger picture.</p>
</li>
</ul>
<h3 id="heading-g-root-cause-analysis"><strong>g. Root Cause Analysis</strong></h3>
<ul>
<li><p><strong>Beyond Symptoms</strong>: Focus on finding the root causes of problems, rather than just treating symptoms.</p>
</li>
<li><p><strong>Tools</strong>: Techniques like the “5 Whys” can help uncover the deeper issues at play.</p>
</li>
</ul>
<h2 id="heading-6-the-importance-of-systems-thinking-in-tech"><strong>6. The Importance of Systems Thinking in Tech</strong></h2>
<h3 id="heading-a-managing-complex-software-systems"><strong>a. Managing Complex Software Systems</strong></h3>
<p>Systems thinking leads to better software architecture by considering how different parts interact. This approach reduces integration issues and makes the system more scalable.</p>
<h3 id="heading-b-anticipating-and-mitigating-risks"><strong>b. Anticipating and Mitigating Risks</strong></h3>
<p>By understanding the system’s interconnectedness, potential points of failure can be identified and addressed proactively, minimizing downtime.</p>
<h3 id="heading-c-encouraging-innovation"><strong>c. Encouraging Innovation</strong></h3>
<p>Looking at the bigger picture helps teams explore unconventional solutions. This can lead to breakthrough innovations and a competitive edge.</p>
<h3 id="heading-d-improving-team-dynamics"><strong>d. Improving Team Dynamics</strong></h3>
<p>When teams see how their work fits into the bigger picture, they collaborate more effectively. This leads to improved efficiency and morale.</p>
<h2 id="heading-7-transitioning-from-linear-to-systems-thinking"><strong>7. Transitioning from Linear to Systems Thinking</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729020873842/082f1381-3fd5-495d-bc8f-4bef5f153d7a.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-a-challenges-in-changing-mindsets"><strong>a. Challenges in Changing Mindsets</strong></h3>
<ul>
<li><p><strong>Comfort Zones</strong>: We’re used to linear thinking because it’s what we’ve learned.</p>
</li>
<li><p><strong>Complexity Aversion</strong>: Systems thinking can feel overwhelming because it’s more complex and doesn’t offer easy answers.</p>
</li>
</ul>
<h3 id="heading-b-developing-systems-thinking"><strong>b. Developing Systems Thinking</strong></h3>
<ul>
<li><p><strong>Continuous Learning</strong>: Explore books, courses, and seminars on systems thinking.</p>
</li>
<li><p><strong>Practical Application</strong>: Start small by applying systems thinking principles to everyday projects.</p>
</li>
<li><p><strong>Mentorship</strong>: Learn from people who already practice systems thinking.</p>
</li>
<li><p><strong>Reflection</strong>: Regularly review your decisions and their impact on the overall system.</p>
</li>
</ul>
<h3 id="heading-c-tools-for-systems-thinking"><strong>c. Tools for Systems Thinking</strong></h3>
<ul>
<li><p><strong>Systems Mapping</strong>: Create visual maps to understand the components and relationships within a system.</p>
</li>
<li><p><strong>Causal Loop Diagrams</strong>: Show how different elements of a system influence one another.</p>
</li>
<li><p><strong>Simulation Models</strong>: Use software to simulate system behavior under various scenarios.</p>
</li>
</ul>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>As President Obama pointed out, the hardest problems are often the most complex. While linear thinking is useful for straightforward problems, it often falls short in today’s interconnected world. Systems thinking provides the tools to tackle these challenges by seeing the bigger picture.</p>
<p>By adopting systems thinking, you’ll foster better collaboration, anticipate risks, and drive innovation within your team. As modern tech challenges grow more complex, systems thinking becomes not just a tool, but a necessity.</p>
<p>Embrace systems thinking and see how it can revolutionize the way you approach complex problems in tech. Start by observing the connections in your current projects, applying a holistic view, and encouraging your team to think beyond isolated solutions. By taking a step back to see the bigger picture, you’ll drive innovation, improve your decision-making, and help your team thrive in today’s fast-paced, interconnected world.</p>
<hr />
<h3 id="heading-about-the-author"><strong>About the Author</strong></h3>
<p>As someone who applies these principles in my work, I’ve experienced how impactful they can be in building scalable solutions. If you’d like to explore more of my approach and connect with me, here's a bit about who I am:</p>
<p><strong>I'm Daniel Philip Johnson</strong>, a senior frontend engineer specializing in frontend development and architecture. I’m passionate about simplifying complex challenges and building scalable, innovative solutions. To explore more of my work, visit my <a target="_blank" href="https://danielphilipjohnson.com/about/">personal website</a>, or connect with me on <a target="_blank" href="https://uk.linkedin.com/in/daniel-philip-johnson">LinkedIn</a> for insights on tech leadership and the future of frontend development.</p>
]]></content:encoded></item><item><title><![CDATA[Fundamentals of Document Databases]]></title><description><![CDATA[Scary word alert
NoSQL = "Not only SQL," represents a category of database systems that deviate from traditional relational databases
Introduction
In this blog, we will delve into the fundamentals of document databases, a type of NoSQL database. By c...]]></description><link>https://blog.danielphilipjohnson.co.uk/fundamentals-of-document-databases</link><guid isPermaLink="true">https://blog.danielphilipjohnson.co.uk/fundamentals-of-document-databases</guid><category><![CDATA[documentdb]]></category><dc:creator><![CDATA[Daniel Philip Johnson]]></dc:creator><pubDate>Sun, 18 Jun 2023 17:36:24 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1687109614902/85ac54c3-f943-4c9c-b737-ffe279209f4b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Scary word alert</strong></p>
<p>NoSQL, short for "Not only SQL," represents a category of database systems that deviate from traditional relational databases.</p>
<h2 id="heading-introduction">Introduction</h2>
<p>In this blog, we will delve into the fundamentals of document databases, a type of NoSQL database. By comparing document databases to a house with various rooms, we'll explore their document-oriented structure, primary and standard fields, and the key terminology associated with them.</p>
<h2 id="heading-document-database-the-house-of-data">Document Database: The House of Data</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687023058022/b51e8f65-cb6a-4171-95d7-83d29af9b9e8.jpeg" alt="House that has data stored in rooms" class="image--center mx-auto" /></p>
<p>Imagine a document database as a house, acting as a container that accommodates different rooms, represented by different document collections. We can define separate room documents for essential areas like "Bedrooms," "Bathrooms," "Living Room," and "Kitchen."</p>
<h2 id="heading-document-oriented-structure-organizing-data">Document-Oriented Structure: Organizing Data</h2>
<p>A document-oriented structure serves as the data model in document databases, allowing the organization and storage of data in the form of self-contained documents. Each document represents a unique entity or object, encompassing all the relevant data associated with it, with fields like roomName, size, flooring, and furniture encapsulated within a single document. To illustrate this concept, let's look at a room:</p>
<pre><code class="lang-jsx">room {
   <span class="hljs-attr">_id</span>: ObjectId(<span class="hljs-string">"1234567890abcdef12345678"</span>),
   <span class="hljs-attr">roomName</span>: <span class="hljs-string">"bedroom"</span>,
   <span class="hljs-attr">size</span>: <span class="hljs-string">"12 feet"</span>,
   <span class="hljs-attr">flooring</span>: <span class="hljs-string">"wood"</span>,
   <span class="hljs-attr">furniture</span>: [<span class="hljs-string">"wardrobe"</span>, <span class="hljs-string">"bed"</span>]
}
</code></pre>
<p>In this example, we have a room document that possesses a unique identifier (_id) assigned by the database. It includes descriptive attributes such as the room's name ("bedroom"), size ("12 feet"), flooring type ("wood"), and an array of furniture items, including a wardrobe and a bed. This self-contained structure allows for efficient storage and retrieval of room-specific data within the document database.</p>
<h2 id="heading-primary-fields-and-standard-fields-defining-document-structure">Primary Fields and Standard Fields: Defining Document Structure</h2>
<p>In our pursuit of an efficient document structure, let's delve into the essential fields that compose a room document.</p>
<p>Firstly, we have the primary key, a vital component that plays a pivotal role in document identification and retrieval. This unique identifier holds great significance, especially when accessing documents through APIs such as <strong>api/room/1234567890abcdef12345678</strong>. Additionally, the primary key enforces uniqueness, ensuring that no duplicate documents are created within the collection. Moreover, it enables the creation of indexes, facilitating swift and direct retrieval of specific documents based on their distinctive identifiers. Fortunately, MongoDB automates the generation of primary keys, relieving you of the burden of creating them manually.</p>
<p>However, you also have the flexibility to define your own primary key if you possess a field guaranteed to be unique. For instance, in the case of books, the International Standard Book Number (ISBN) serves as an excellent candidate. Consider the following example:</p>
<pre><code class="lang-jsx">book {
   <span class="hljs-attr">_id</span>: <span class="hljs-string">"978-0-123456-78-9"</span>,
   <span class="hljs-attr">author</span>: <span class="hljs-string">"author"</span>,
   <span class="hljs-attr">title</span>: <span class="hljs-string">"the title"</span>
}
</code></pre>
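<p>To make the primary key's two jobs concrete, here is a small, self-contained sketch. Note that this is a plain JavaScript stand-in, not the real MongoDB driver API: a toy collection that enforces _id uniqueness on insert and uses a Map as a primary-key index for direct lookups.</p>

```javascript
// A toy in-memory "collection" illustrating two jobs of the primary key:
// enforcing uniqueness and enabling fast, direct retrieval.
class Collection {
  constructor() {
    this.index = new Map(); // _id -> document (acts like a primary-key index)
  }

  insert(doc) {
    if (this.index.has(doc._id)) {
      throw new Error(`Duplicate _id: ${doc._id}`);
    }
    this.index.set(doc._id, doc);
  }

  // Direct lookup by primary key, e.g. the kind of access backing
  // a route such as api/room/<id>
  findById(id) {
    return this.index.get(id) ?? null;
  }
}

const books = new Collection();
books.insert({ _id: "978-0-123456-78-9", author: "author", title: "the title" });

console.log(books.findById("978-0-123456-78-9").title); // "the title"

try {
  books.insert({ _id: "978-0-123456-78-9", title: "a duplicate" });
} catch (e) {
  console.log(e.message); // the duplicate insert is rejected
}
```

<p>A real database does the same bookkeeping for you, which is why duplicate primary keys are rejected and lookups by _id are fast.</p>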
<p>Alongside the primary key, we incorporate several standard fields, which are made up of <strong>attributes</strong> and <strong>values</strong>, to comprehensively define our room document structure. These fields include:</p>
<ul>
<li><p><strong>roomName</strong>: Designates the name of the room, allowing easy identification.</p>
</li>
<li><p><strong>size</strong>: Specifies the size of the room, providing relevant information about its dimensions.</p>
</li>
<li><p><strong>flooring</strong>: Describes the type of flooring used in the room, adding to its aesthetic and functional attributes.</p>
</li>
<li><p><strong>furniture</strong>: Represents an array that captures the furniture items present in the room, facilitating an inventory-like perspective.</p>
</li>
</ul>
<p>By thoughtfully incorporating these primary and standard fields into our document structure, we establish a foundation for efficient storage, retrieval, and organization of room-specific data within our MongoDB document database.</p>
<h2 id="heading-value-types-diverse-data-representation">Value Types: Diverse Data Representation</h2>
<p>As you explore the fields in our collection, you might have noticed the presence of strings and arrays. However, it's important to understand that document databases like MongoDB offer a wide range of value types to cater to diverse data requirements. Typically, these databases support popular data formats such as JSON or BSON.</p>
<p>In MongoDB, you have the flexibility to utilize various value types for your fields. Let's explore some of the commonly used value types:</p>
<ul>
<li><p><strong>String</strong>: Represents a sequence of characters, allowing you to store textual information.</p>
</li>
<li><p><strong>Number</strong>: This can be either an integer or a floating-point number, enabling you to store numerical data.</p>
</li>
<li><p><strong>Boolean</strong>: Represents true or false values, providing a binary data type for logical operations.</p>
</li>
<li><p><strong>Date</strong>: Stores date and time values, facilitating the storage of temporal information.</p>
</li>
<li><p><strong>Array</strong>: Allows you to store an ordered list of values, providing a convenient way to group related data elements together.</p>
</li>
<li><p><strong>Object</strong>: Represents a collection of key-value pairs, enabling you to store complex and structured data.</p>
</li>
<li><p><strong>Null</strong>: This represents the absence of a value, allowing you to indicate the lack of data in a field.</p>
</li>
<li><p><strong>ObjectId</strong>: A special data type commonly used as a unique identifier for documents, providing an automatically generated identifier within MongoDB.</p>
</li>
<li><p><strong>Timestamp</strong>: A BSON data type that represents a 64-bit timestamp, facilitating the tracking of time-related information.</p>
</li>
<li><p><strong>Binary Data</strong>: Enables the storage of binary information, such as images or files, within the database.</p>
</li>
</ul>
<p>By leveraging these diverse value types, document databases empower you to effectively store and manipulate data with flexibility and precision. Whether you need to capture textual, numerical, temporal, or structural information, MongoDB offers a comprehensive set of value types to accommodate your data needs.</p>
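<p>As a quick illustration, here is a hypothetical room document written as a plain JavaScript object that exercises most of the value types listed above. ObjectId, Timestamp, and binary data are BSON-specific, so the _id below is shown as a plain string stand-in:</p>

```javascript
// A hypothetical "room" document exercising the common value types.
const roomDocument = {
  _id: "1234567890abcdef12345678", // ObjectId stand-in (plain string here)
  roomName: "bedroom",             // String
  size: 12,                        // Number
  isOccupied: false,               // Boolean
  lastCleaned: new Date("2023-06-01T09:00:00Z"), // Date
  furniture: ["wardrobe", "bed"],  // Array
  dimensions: { width: 12, length: 10, unit: "feet" }, // Object (embedded)
  notes: null,                     // Null: the field exists but holds no value
};

console.log(typeof roomDocument.roomName);             // "string"
console.log(Array.isArray(roomDocument.furniture));    // true
console.log(roomDocument.lastCleaned instanceof Date); // true
```

<p>Because documents are just structured key-value data, mixing these types in a single document is perfectly normal, which is a big part of the flexibility document databases offer.</p>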
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685559632061/3e8ab0b7-7bee-425b-b210-e556a650648d.png" alt class="image--center mx-auto" /></p>
<p>Now we should have a much clearer understanding of all the pieces that make up a document DB.</p>
<h2 id="heading-key-terminology-understanding-the-basics">Key Terminology: Understanding the Basics</h2>
<p>To wrap up, let's review key terminology:</p>
<ul>
<li><p><strong>Document database</strong>: A NoSQL database that stores and manages data in self-contained documents.</p>
</li>
<li><p><strong>Document collection</strong>: A grouping of related documents, acting as a logical unit for organization and management.</p>
</li>
<li><p><strong>Document-oriented structure:</strong> The data model used in document databases, organizing data into self-contained documents.</p>
</li>
<li><p><strong>Primary field</strong>: The unique identifier or primary key for a document within a collection, facilitating identification and retrieval.</p>
</li>
<li><p><strong>Standard field</strong>: Predefined fields within a document structure that capture specific attributes or properties.</p>
</li>
<li><p><strong>Attribute</strong>: A characteristic or property of an entity, defining the fields or properties within a document.</p>
</li>
<li><p><strong>Value</strong>: The actual data stored within an attribute or field of a document.</p>
</li>
</ul>
<p>By understanding these key concepts, you'll gain a solid foundation in the fundamentals of document databases and be better equipped to work with them effectively.</p>
]]></content:encoded></item><item><title><![CDATA[Monthly reflection - April 2021]]></title><description><![CDATA[Introduction
Every month as a frontend engineer, I reflect on what I have achieved. Let's talk about April 2021. In summary, I took a significant risk and left my second job. It's something I felt I had to do. My mindset was off and I wanted to focus...]]></description><link>https://blog.danielphilipjohnson.co.uk/monthly-reflection-april-2021</link><guid isPermaLink="true">https://blog.danielphilipjohnson.co.uk/monthly-reflection-april-2021</guid><dc:creator><![CDATA[Daniel Philip Johnson]]></dc:creator><pubDate>Sun, 12 Sep 2021 18:31:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1619861709382/YjQ0TX__c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="introduction"><strong>Introduction</strong></h1>
<p>Every month as a frontend engineer, I reflect on what I have achieved. Let's talk about April 2021. In summary, I took a significant risk and left my second job. It's something I felt I had to do. My mindset was off, and I wanted to focus on writing and improving at front-end development. I regathered my thoughts, made a plan of action to further my skills, and finally committed to a content creation schedule.</p>
<h1 id="here-is-what-i-have-done"><strong>Here is what I have done  👨‍💻</strong></h1>
<h2 id="read-news-articles"><strong>Read News Articles</strong> 📰</h2>
<p>Here are the top 5 articles I read this month. My main focus as a frontend engineer is React and JavaScript, so the articles I have included are based on these topics.</p>
<ul>
<li><a target="_blank" href="https://blog.sentry.io/2021/04/12/slow-and-steady-converting-sentrys-entire-frontend-to-typescript">Slow and Steady: Converting Sentry’s Entire Frontend to TypeScript | Product Blog • Sentry</a></li>
<li><a target="_blank" href="https://css-tricks.com/comparing-the-new-generation-of-build-tools/">Comparing the New Generation of Build Tools | CSS-Tricks (css-tricks.com)</a></li>
<li><a target="_blank" href="https://blog.asayer.io/15-devtool-secrets-for-javascript-developers">15 DevTool Secrets for JavaScript Developers (asayer.io)</a></li>
<li><a target="_blank" href="https://nextjs.org/blog/next-10-1">Blog - Next.js 10.1 | Next.js</a></li>
<li><a target="_blank" href="https://exploringjs.com/impatient-js/toc.html">JavaScript for impatient programmers (ES2021 edition) (exploringjs.com)</a></li>
</ul>
<p><strong>Recently, I started to co-author a book for a publisher.</strong></p>
<p>Writing a book is something I'm excited to be doing. I wish I could tell everyone what it is I'm writing, but for now, it has to be a secret 😏. Hint: it's front-end related. Surprisingly, I have enjoyed co-writing two chapters, and I learnt something valuable: by writing about a topic, you are testing your knowledge and identifying possible gaps. I have improved immensely, and my recall is so much better. For this reason, I have started to pursue writing articles and helpful blogs. If you haven't written a blog article, I encourage you to do it!</p>
<p><strong>Courses I am taking 👨🏻‍🎓🎓</strong></p>
<p>In this section, I will list the courses I am currently taking. I will also include courses I have finished and write personal reviews listing what I have learnt.</p>
<ul>
<li>Mastering the Coding Interview: Data Structures + Algorithms (ongoing)</li>
<li>Understanding TypeScript - 2021 Edition: (ongoing)</li>
</ul>
<p><strong>Videos I watched 🎥</strong></p>
<p>The focus this month was slightly geared towards encryption. In pursuit of bettering my skills, I started to use SSH. While learning, I documented the process by writing an article on how to use SSH and how it works.</p>
<ul>
<li>Secret Key Exchange (Diffie-Hellman) - Computerphile</li>
<li>Diffie-Hellman - the Mathematics bit - Computerphile</li>
<li>Key Exchange Problems - Computerphile</li>
<li>Elliptic Curves - Computerphile</li>
<li>Rendering performance inside out - Martin Splitt</li>
</ul>
<p><strong>Amazon project 💼</strong></p>
<p>In summary, my Amazon clone is close to having an alpha release. My main focus this month is to implement search functionality.</p>
<p>Here are the features I added this month:</p>
<ul>
<li>I integrated Stripe payments with a custom Strapi controller. The reason for this was to calculate the cart total on the backend.</li>
<li>Currently, a user can put an item in a cart and check out.</li>
<li>Adding the same item to the cart multiple times updates its quantity</li>
<li>Made a React portal to display error messages or success messages</li>
<li>All pages were styled to mirror the Amazon website.</li>
</ul>
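<p>The server-side cart calculation mentioned above can be sketched roughly as follows. This is a hypothetical simplification of the idea rather than my actual Strapi controller: the client submits only product IDs and quantities, and the server looks up trusted prices before creating the charge, so prices can't be tampered with in the browser.</p>

```javascript
// Hypothetical sketch: compute a cart total on the backend from trusted
// prices, rather than trusting amounts sent by the client.
const PRODUCTS = {
  // In a real controller these would come from the database.
  "sku-1": { name: "Headphones", pricePence: 4999 },
  "sku-2": { name: "USB cable", pricePence: 799 },
};

function calculateCartTotal(cartItems) {
  return cartItems.reduce((total, item) => {
    const product = PRODUCTS[item.id];
    if (!product) throw new Error(`Unknown product: ${item.id}`);
    return total + product.pricePence * item.quantity;
  }, 0);
}

// The client only ever sends ids and quantities:
const total = calculateCartTotal([
  { id: "sku-1", quantity: 1 },
  { id: "sku-2", quantity: 2 },
]);
console.log(total); // 6597 (pence), passed on to the payment provider
```

<p>Keeping the price lookup on the server is the whole point of moving the calculation into a custom controller instead of summing amounts in the React app.</p>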
<p>From this point onwards I plan to make a blog article each month related to my project progress and how I implemented features.</p>
<h1 id="what-i-did-wrong"><strong>What I did wrong 🙄</strong></h1>
<p>I allowed leaving my job to affect my focus. I became torn about the situation but eventually realised that sometimes you need to decide for yourself. I discovered that being nice doesn't mean you have to say yes; always consider your feelings too. Sometimes you need to put yourself first. Now that it is all done, I'm ready for my next chapter 😀.</p>
<h1 id="what-i-accomplished"><strong>What I accomplished ⏳</strong></h1>
<p>In April, I feel like I did not accomplish as much as I had intended. There were a few moments when I felt slightly confused and wondered what would become of me. Not long after, I started to plan my next chapter. I reflected on my frontend abilities and soft skills, and I decided on what I need to improve and how to achieve it.</p>
<h1 id="thank-you-for-reading"><strong>Thank you for reading 👋</strong></h1>
<p>For now, I am just publishing my blogs to <a target="_blank" href="https://dev.to/danielphilipjohnson/">dev.to</a>. In the future, I will look to expand to more blogging platforms and will start to host my blogs on my website. If you enjoy my blogs, you might also be interested in my tweets on my <a target="_blank" href="https://twitter.com/danielp_johnson">Twitter</a>. Please feel free to share a tweet with #monthlyreflection, tag me and tell me about your month.</p>
]]></content:encoded></item></channel></rss>