A Gentle Introduction to Concurrency — Through the Lens of Office Work
PJ: This post is guest-written by Claude Opus 4.6. It came from a thought that occurred to me while thinking about work in the office: what if concurrency problems manifest in the office? What if, in fact, most of the problems we see in the office are some form of concurrency bug?
So I asked Claude, and that conversation led to this article. You can see the prompts I used at the bottom of this article. Enjoy!
What Is Concurrency?
Concurrency is a concept from computer science that deals with how multiple things happen at the same time. When a computer runs several tasks simultaneously — downloading a file, playing music, and responding to your clicks — it’s doing concurrent work. Software engineers spend a lot of time thinking about how to split work across multiple “workers” (called threads or processes) so that things run faster.
But here’s the catch: not all work can be safely split up. Some tasks have hidden connections — they share resources, depend on each other’s results, or need to happen in a specific order. When programmers ignore these connections and treat dependent work as if it were independent, a well-known set of problems emerges.
These same problems show up every day in offices, teams, and organizations. You don’t need to write code to run into them. Understanding the vocabulary gives you a surprisingly useful framework for diagnosing why teamwork breaks down.
Embarrassingly Parallel vs. Not
In computing, some tasks are called embarrassingly parallel — a cheeky term for work that’s so easy to split up, it’s almost embarrassing. If you need to resize 1,000 images, you can hand 500 to one processor and 500 to another. They don’t interact. They don’t share anything. Each piece is fully independent.
The office equivalent: calling 100 customers. Give 50 to Alice and 50 to Bob. No coordination needed.
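To make "fully independent" concrete, here's a minimal Python sketch. The `resize` function is a stand-in for real image processing (purely illustrative, not a real imaging library): because each item is independent, a thread pool can split the work with zero coordination.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for real image processing — each "image" is fully independent.
def resize(image_id):
    return f"resized-{image_id}"

with ThreadPoolExecutor(max_workers=2) as pool:
    # map preserves input order even though the work runs concurrently
    results = list(pool.map(resize, range(4)))

print(results)
```

No locks, no meetings, no check-ins: the embarrassingly parallel case needs none of them.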
The trouble starts when you think work is embarrassingly parallel but it isn’t. When you split up tasks that actually have hidden dependencies, you get the problems below.
Race Conditions
A race condition occurs when the outcome of work depends on the timing or order in which people (or threads) do things — and that timing isn’t controlled.
In code, the classic example is two threads both reading a counter as 5, each adding 1, and both writing back 6. The answer should be 7, but because they “raced” each other, an update was lost.
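Real races are timing-dependent, so here is a deterministic sketch that simulates the bad interleaving by hand — both workers read the counter before either writes back:

```python
# Simulate the lost-update interleaving step by step.
counter = 5

# Step 1: both workers read the current value.
read_by_a = counter  # worker A reads 5
read_by_b = counter  # worker B reads 5

# Step 2: each computes its update independently.
new_a = read_by_a + 1  # 6
new_b = read_by_b + 1  # 6

# Step 3: both write back; B's write lands last and overwrites A's.
counter = new_a
counter = new_b

print(counter)  # 6 — one increment vanished; the intended answer was 7
```

In a real program the three steps of each worker interleave unpredictably, which is exactly why races are so hard to reproduce and debug.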
In the office: Sarah updates the project budget spreadsheet on her laptop. At the same time, James opens the same spreadsheet and makes different changes. Sarah saves. James saves. James’s version overwrites Sarah’s work entirely. Neither did anything wrong individually — the problem was that they accessed a shared resource concurrently without coordination.
This also shows up when two people reply to the same customer email with different answers, or when two managers independently promise the same budget to different projects.
Data Corruption
When race conditions go undetected, they lead to data corruption — the shared state of your system becomes internally inconsistent.
In software, this might mean a data structure is half-updated: one part reflects the new state, another part still reflects the old state.
In the office: You split a project plan into three sections and assign each to a different person. Person A assumes a March launch. Person B writes their section around April. Person C removes a feature that Person A’s entire timeline depends on. Nobody reconciles the pieces. The plan looks complete but contradicts itself. This is corrupted shared state — it happened because interconnected work was treated as independent.
Ordering and Visibility Issues
Modern CPUs aggressively reorder instructions and cache data locally for performance. This means one thread might not immediately “see” what another thread wrote to memory. Programmers use memory barriers (also called fences) to force visibility — ensuring that when Thread A writes something, Thread B actually sees it before proceeding.
In the office: You make a decision in Monday’s meeting to change direction on a feature. Tom wasn’t in the meeting. Nobody tells Tom directly. He spends the entire week building the old version because his “local cache” (his understanding of the plan) was never updated.
Email is essentially a weak memory model — there’s no guarantee about when someone will read your message. Slack is slightly better but still unreliable. The office equivalent of a memory barrier is a synchronous check-in: “Don’t proceed until you’ve confirmed you have the latest context.” Organizations that rely entirely on async communication without deliberate sync points are running on a relaxed memory model, and will eventually see baffling coordination failures.
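A sketch of that synchronous check-in in Python, using `threading.Event` as the "don't proceed until confirmed" barrier (the scenario names are illustrative):

```python
import threading

plan = {"direction": "old"}
plan_updated = threading.Event()  # our "memory barrier": a deliberate sync point

seen = []

def tom():
    # Tom does not proceed until he has confirmed he has the latest context.
    plan_updated.wait()
    seen.append(plan["direction"])

t = threading.Thread(target=tom)
t.start()

plan["direction"] = "new"  # Monday's decision changes the plan
plan_updated.set()         # publish: anyone who waits past this point sees it

t.join()
print(seen)  # ['new'] — Tom acted on the current plan, not his stale cache
```

Without the `wait()`/`set()` pair, Tom might read the plan at any moment — including before the update — which is the code version of building the old feature all week.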
Lost Updates
A lost update occurs when two concurrent operations both read the same value, modify it independently, and write back — causing one modification to silently disappear.
In the office: Two managers both submit budget requests that assume the same pool of money. Two recruiters contact the same candidate with different offers. Two designers independently redesign the same screen. The work isn’t truly destroyed, but effort is wasted and someone’s contribution gets discarded — which is demoralizing on top of being inefficient.
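The software fix for lost updates is to make the read-modify-write step atomic. A minimal sketch with a lock (budget-themed names are illustrative):

```python
import threading

budget_requests = 0
lock = threading.Lock()

def submit(times):
    global budget_requests
    for _ in range(times):
        with lock:  # the whole read-modify-write happens as one unit
            current = budget_requests
            budget_requests = current + 1

a = threading.Thread(target=submit, args=(100_000,))
b = threading.Thread(target=submit, args=(100_000,))
a.start(); b.start()
a.join(); b.join()

print(budget_requests)  # 200000 — every update survives
```

The office analog of the lock is a single owner or a single queue for the shared resource: one person who approves budget requests, one recruiter assigned per candidate.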
Deadlocks
A deadlock is when two or more workers are each waiting for the other to finish, and neither can proceed. The system freezes completely.
Computer science identifies four necessary conditions for deadlock: mutual exclusion (the resource can’t be shared), hold and wait (you hold one resource while waiting for another), no preemption (nobody can force you to release what you’re holding), and circular wait (A waits for B, B waits for A).
In the office: Engineering says, “We can’t build until product gives us the spec.” Product says, “We can’t finalize the spec until engineering tells us what’s feasible.” Both teams sit and wait. Nothing happens. All four conditions are present: each deliverable can only come from one team (mutual exclusion), each team holds its piece while waiting for the other’s (hold and wait), neither will release its position (no preemption), and each is waiting on the other (circular wait).
The fix in both computing and office life is the same: break one of the four conditions. Establish an ordering (one team always goes first with a rough draft), or allow preemption (a manager steps in and makes a decision to unblock everyone).
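Here's a sketch of the ordering fix in Python. A live deadlock demo would hang forever, so this shows the cure instead: both workers agree on a global acquisition order (spec first, then feasibility), which makes circular wait impossible. The names are illustrative.

```python
import threading

spec_lock = threading.Lock()         # "product's" resource
feasibility_lock = threading.Lock()  # "engineering's" resource

finished = []

# The fix: a global ordering — everyone acquires spec_lock first.
# If the two workers grabbed the locks in opposite orders, they could
# each hold one lock and wait forever for the other (deadlock).
def work(team):
    with spec_lock:
        with feasibility_lock:
            finished.append(team)

t1 = threading.Thread(target=work, args=("engineering",))
t2 = threading.Thread(target=work, args=("product",))
t1.start(); t2.start()
t1.join(); t2.join()

print(sorted(finished))  # ['engineering', 'product'] — both completed
```

"One team always goes first with a rough draft" is exactly this: an agreed ordering that breaks the cycle before it can form.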
Livelocks
A livelock is similar to a deadlock, but instead of everyone frozen, everyone is actively moving — yet still making no progress. The threads (or people) keep reacting to each other in a way that cancels out.
The classic computing analogy: two people in a hallway who keep stepping aside in the same direction, endlessly blocking each other while both trying to be polite.
In the office: Two teams keep deferring to each other. “We’ll adjust our timeline once you adjust yours.” “Great, we’ll adjust ours once yours is set.” Emails fly. Meetings happen. Everyone looks busy. Nothing converges.
Another form: a document goes through round after round of review. Each round of feedback triggers changes that prompt new feedback from someone else. The system is churning but never settling.
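The hallway dance can be sketched deterministically: both walkers always step aside in the same direction at the same time, so they mirror each other forever. The simulation is capped at ten rounds to show that activity continues but progress never happens.

```python
# Both walkers start on the same side and always dodge simultaneously.
a_side, b_side = "left", "left"
rounds = 0

while a_side == b_side and rounds < 10:
    # Each politely steps to the other side — at exactly the same time.
    a_side = "right" if a_side == "left" else "left"
    b_side = "right" if b_side == "left" else "left"
    rounds += 1

print(rounds, a_side == b_side)  # 10 rounds later, still blocking each other
```

The standard cure, in code and in hallways, is randomness or asymmetry: have one side wait a random beat, or designate one party to hold still.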
Priority Inversion
Priority inversion happens when a high-priority task is blocked because a low-priority task is holding a shared resource. In operating systems, this can cause real-time systems to miss critical deadlines. (A famous example: the Mars Pathfinder spacecraft experienced priority inversion in 1997, causing system resets on the surface of Mars.)
In the office: The CEO needs a five-minute legal review to close a major deal, but the only available lawyer is buried in a weeks-long, low-stakes compliance audit. The entire deal stalls because a minor task is consuming the bottleneck resource.
Or: your most important product launch can’t ship because the deploy pipeline is occupied by a routine bug fix that’s taking forever.
The solution — called priority inheritance in computing — maps directly: when you discover that something unimportant is blocking something important, temporarily elevate the blocker’s priority. Assign more people, escalate it, or pause it to free the resource.
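A toy sketch of priority inheritance (the task names and numbers are invented for illustration, not from any real scheduler): when the resource holder is blocking a high-priority task, its effective priority is raised to match.

```python
tasks = {"compliance_audit": 1, "close_deal": 10}  # task -> base priority
resource_holder = "compliance_audit"               # the lawyer is busy with this

def effective_priority(task, blocked_high_priority_task=None):
    base = tasks[task]
    if task == resource_holder and blocked_high_priority_task:
        # Priority inheritance: temporarily run at the blocked task's priority
        # so the resource gets freed as fast as the important work needs it.
        return max(base, tasks[blocked_high_priority_task])
    return base

print(effective_priority("compliance_audit"))                # 1: business as usual
print(effective_priority("compliance_audit", "close_deal"))  # 10: inherited urgency
```

The office translation: the moment the audit is discovered to be blocking the deal, it gets deal-level urgency — extra hands, escalation, or a pause — until the lawyer is free.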
Performance Degradation from Over-Synchronization
In software, the naive response to concurrency bugs is to add locks everywhere — mechanisms that force one thread to wait while another finishes. But excessive locking causes lock contention: threads spend more time waiting for each other than doing useful work. Paradoxically, the “parallel” version becomes slower than just doing everything sequentially.
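A small sketch of that failure mode: four workers doing completely independent work, but all forced through one coarse lock. The `max_active` meter proves that despite having four threads, at most one is ever working.

```python
import threading
import time

big_lock = threading.Lock()   # the over-broad "approval gate"
meter = threading.Lock()      # protects the bookkeeping counters
active = 0
max_active = 0

def worker():
    global active, max_active
    with big_lock:            # over-synchronization: independent work, locked anyway
        with meter:
            active += 1
            max_active = max(max_active, active)
        time.sleep(0.01)      # the actual "useful work"
        with meter:
            active -= 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(max_active)  # 1 — four workers, never more than one working at a time
```

Drop `big_lock` and the same code runs all four workers at once — which is the point: the lock bought safety that this independent work never needed, at the cost of all its parallelism.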
In the office: After a few coordination failures, the organization reacts by adding more meetings, more approval gates, and more mandatory cc’s on every email. Now nothing moves because everyone is waiting for the next sync meeting, the next sign-off, or the next committee review. You’ve gone from chaos to gridlock.
“We need to have a meeting before we can have the meeting” is essentially mutex contention in human form.
Teams tend to oscillate between these two failure modes — too little coordination (race conditions and lost updates everywhere) and too much (everything blocked by process overhead).
The Deeper Lesson
Software engineers have learned — often painfully — that the solution isn’t to serialize everything (which kills throughput) or to naively parallelize everything (which causes all the problems above). It’s to carefully identify which work is truly independent and which has hidden dependencies, then design coordination mechanisms that match.
Interestingly, some programming languages address this at a fundamental level. Clojure, for example, makes data immutable by default — once a value is created, it can never be changed in place. If you want a new version, you create it alongside the old one. This eliminates most concurrency problems at the root, because there’s no shared mutable state to fight over.
The office analog to immutability: working with snapshots, append-only records, and versioned documents rather than a single shared Google Sheet that everyone edits in place. Decision records, dated memos, and versioned specs are the organizational equivalent of immutable values — they give everyone a stable reference point that won’t shift under their feet.
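The immutable-value idea translates to most languages, not just Clojure. A sketch in Python using a frozen dataclass: you can't edit a spec in place, so every change produces a new, versioned value while the old one stays intact.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)  # instances cannot be modified after creation
class Spec:
    version: int
    launch_month: str

v1 = Spec(version=1, launch_month="March")
# "Editing" means creating a successor; v1 is untouched and still citable.
v2 = replace(v1, version=2, launch_month="April")

print(v1.launch_month, v2.launch_month)  # March April
```

Anyone holding `v1` keeps a stable reference point — exactly what a dated memo or versioned spec gives a team.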
The art of running a team is the same as the art of concurrent programming: figure out where the real dependencies are, coordinate around those, and let everything else flow freely.
Prompts Used to Generate This Article
- Prompt 1: “What happens when you confuse embarrassingly parallel tasks With tasks that are actually not embarrassingly parallel? what types of problems occur in this type of concurrency?”
- Prompt 2: “can you share the references for each of these problems?”
- Prompt 3: “how would you extend these issues arising from confusing embarrassingly parallel tasks with non-embarrassingly parallel tasks, into office work?”
- Prompt 4: “note down your observations into a MD file, capturing the analogies for office work, and at the end, list down all the prompts used to generate the observations including this prompt.”
- Prompt 5: “write another version of the observations in md, that is for a complete lay person who might not understand concurrency.”
- Prompt 6: “ok this version is too extreme. i want the article to be an intro to concurrency for the lay reader, and how concurrency concepts can apply to office work, not to remove all concurrency lingo altogether. and also include the list of prompts that generated the article at the end, including this prompt.”