The most common failure mode in complex systems is not component failure. It is coordination failure. The individual parts work. The bridges between them do not. This pattern is so pervasive that it should be treated as a law rather than an observation: as the number of competent participants in any system increases, the probability of coordination failure increases faster than the probability of component failure decreases.
This is observable at every scale. A surgical team with excellent individual skills loses a patient because the handoff between anesthesia and recovery was ambiguous. A multinational enforcement operation identifies a criminal network across four jurisdictions but cannot execute simultaneously because each agency operates on its own timeline, its own evidence standard, and its own authorization chain. A family with abundant resources fails to preserve wealth across three generations because the legal structures, the investment vehicles, and the governance mechanisms were designed by different advisors who never spoke to one another.
The common thread is not incompetence. It is the absence of a shared protocol — a set of rules that each participant can execute independently while producing outputs that are structurally compatible with every other participant's outputs.
The instinct when coordination fails is to build a bigger coordinator — a central authority with the power to compel alignment. This instinct is almost always wrong. Central coordinators become bottlenecks. They introduce latency at precisely the moments when speed matters most. They create single points of failure that adversaries can target and allies can resent. And they require political capital to establish that is rarely available when it is most needed.
The history of institutional design is littered with central coordinators that began as solutions and became problems. The committee that was formed to streamline decision-making becomes the primary obstacle to decisions. The oversight body that was created to ensure quality becomes the reason quality cannot be delivered on time. The holding company that was established to unify strategy becomes the bureaucratic layer that prevents any strategy from being executed.
The alternative is protocol. Not coordination by authority, but coordination by design. A shared set of rules — explicit, testable, enforceable by structure rather than by hierarchy — that each participant can execute independently, without requiring permission from a central authority, while producing outputs that interoperate by default.
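What "enforceable by structure rather than by hierarchy" can mean in practice: each participant validates incoming outputs against a shared schema, so compatibility is checked mechanically rather than adjudicated by an authority. The sketch below is illustrative only; the field names and the idea of JSON messages are assumptions, not part of any real standard:

```python
import json

# Hypothetical shared schema: field name -> required Python type.
# Every participant carries the same table; nothing else is coordinated.
HANDOFF_SCHEMA = {
    "case_id": str,
    "originator": str,
    "evidence_refs": list,
    "deadline_utc": str,
}

def validate(raw: str) -> dict:
    """Accept a message only if it matches the shared schema.

    Any participant can run this check locally, with no call to a
    central authority and no call to the sender.
    """
    msg = json.loads(raw)
    for field_name, field_type in HANDOFF_SCHEMA.items():
        if not isinstance(msg.get(field_name), field_type):
            raise ValueError(f"schema violation: {field_name!r}")
    return msg

# A conforming message from any participant is ingestible by any other.
ok = validate(json.dumps({
    "case_id": "C-104",
    "originator": "agency-a",
    "evidence_refs": ["doc-1", "doc-2"],
    "deadline_utc": "2025-06-01T12:00:00Z",
}))
```

The point of the sketch is the enforcement mechanism: a non-conforming output is rejected at the boundary, so interoperability does not depend on goodwill or on anyone's organizational chart.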
The internet works this way. IP routing does not depend on a central coordinator: each node follows the protocol and forwards packets based on its own local state. The result is a system that coordinates billions of participants without any single point of authority. Financial settlement works this way. SWIFT messages follow a format that any participating bank can process without calling the other bank for clarification. Air traffic control works this way. Pilots and controllers follow procedures that produce safe separation without requiring either party to understand the other's complete situation.
The domains where coordination still fails catastrophically — cross-border law enforcement, multi-party asset recovery, humanitarian response, complex estate administration — are domains that have not yet been given a protocol. They operate on relationships, phone calls, and memoranda of understanding that have no enforcement mechanism beyond goodwill. Each participant brings their own format, their own timeline, their own evidentiary standard, and their own definition of success. The result is not collaboration. It is parallel effort with occasional intersection.
Goodwill does not scale. Protocol does.
The design of effective coordination protocols requires three properties that most institutional designers neglect. First, the protocol must be format-native — it must define not just what information is shared but how it is structured, so that outputs from one participant can be ingested by another without manual translation. Second, the protocol must be temporally explicit — it must define not just what happens but when, so that participants operating in different time zones, legal regimes, and organizational cultures can synchronize without real-time communication. Third, the protocol must be failure-aware — it must define what happens when a participant cannot perform, so that the system degrades gracefully rather than halting entirely.
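The three properties can be made concrete in a single message type: structure is defined rather than implied (format-native), deadlines are absolute rather than relative to a conversation (temporally explicit), and the fallback behavior is part of the message itself (failure-aware). This is a minimal sketch under assumed names, not a proposed standard:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class ProtocolStep:
    # Format-native: the structure of the output is defined,
    # not just its informal content.
    step_id: str
    payload: dict
    # Temporally explicit: an absolute UTC deadline, not "as soon
    # as possible" in someone's local time zone.
    deadline: datetime
    # Failure-aware: what the receiver does if this step is late,
    # agreed in advance (hypothetical default for illustration).
    on_timeout: str = "proceed_with_last_confirmed_state"

def resolve(step: Optional[ProtocolStep], default_action: str,
            now: datetime) -> str:
    """Each participant decides locally, with no real-time contact."""
    if step is None:
        return default_action          # sender never performed: degrade
    if now > step.deadline:
        return step.on_timeout         # late: follow the agreed fallback
    return "execute:" + step.step_id   # on time: proceed normally

now = datetime(2025, 1, 2, tzinfo=timezone.utc)
step = ProtocolStep("freeze-assets", {"account": "X"},
                    deadline=datetime(2025, 1, 1, tzinfo=timezone.utc))
action = resolve(step, "hold_position", now)  # deadline has passed
```

Note that `resolve` never blocks and never asks anyone anything: whether the step arrived on time, arrived late, or never arrived, every participant reaches a defined action, which is what lets the system degrade gracefully rather than halt.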
The organizations that will dominate the next era of complex operations are not those with the most talented individuals or the largest budgets. They are those that design the best protocols — systems of coordination that allow talented individuals to operate at their full capacity without being constrained by the limitations of every other participant in the chain.
The question is not whether these domains will be protocolized. The question is whether the protocols will be designed by the participants who understand the work — who have operated in the field, who have felt the friction, who know where the handoffs break — or imposed by institutions that have never coordinated anything more complex than their own internal meetings.
The difference between these two outcomes is the difference between a protocol that works and a protocol that is merely documented. The former changes the operating environment. The latter occupies shelf space.
The work of protocol design is not glamorous. It is taxonomic, procedural, and deeply specific. It requires understanding not just what each participant does, but what each participant needs from every other participant, and in what form, and by when. It is the architectural equivalent of plumbing — invisible when it works, catastrophic when it does not.
The builders who understand this will build the infrastructure that others operate on. The builders who do not will spend their careers coordinating by phone.