The grid was built for a different era. Massive, centralised plants and long transmission lines made sense when generation was expensive and demand was concentrated. That architecture has a cost: one break in the chain and millions go dark. It is not bad engineering; it is a design that scaled well past its limits.

At Blueprint Power, the strategy was simple: push generation and load as close together as possible. Move to the edge. The shorter the distance between where power is made and where it is used, the less you depend on the transmission backbone. The networking mindset was the means, not the message.

The Networking Lineage

Digital networking solved this problem first. ARPANET, BGP, and OSPF were built for distributed resilience decades before anyone spoke of a smart grid. They proved you could route around failure without a central command.
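
That routing logic is small enough to demonstrate. OSPF floods link state and lets every router recompute shortest paths on its own when a link dies; in the sketch below, networkx stands in for the link-state database, and the topology and costs are invented for illustration.

```python
import networkx as nx

# Toy link-state database: nodes are routers, edge weights are link costs.
g = nx.Graph()
g.add_edge("sea", "chi", weight=2)
g.add_edge("chi", "nyc", weight=2)
g.add_edge("sea", "den", weight=3)
g.add_edge("den", "nyc", weight=4)

print(nx.shortest_path(g, "sea", "nyc", weight="weight"))  # ['sea', 'chi', 'nyc']

# A link fails. Every router recomputes locally; no central command needed.
g.remove_edge("chi", "nyc")
print(nx.shortest_path(g, "sea", "nyc", weight="weight"))  # ['sea', 'den', 'nyc']
```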

A historical map of ARPANET nodes across the United States

That same logic applies to the home. Solar, batteries, and Home Assistant turn a house into a grid node, not just a drain. You are generating and routing electrons locally. Modern tools extend the lineage: Matrix federates messaging back to the user, and Meshtastic bypasses the ISP backbone entirely over LoRa. The lesson is clear: centralisation is a vulnerability. Whether it is a packet or a kilowatt, the only way to survive a blackout is to generate, store, and route it at the edge. Stop relying on the mainframe. Build the mesh.
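
A minimal sketch of that local routing, using Home Assistant's stock REST API. The entity IDs, thresholds, and token are placeholders, not a prescription:

```python
import requests

HA = "http://homeassistant.local:8123/api"             # assumed local instance
HEADERS = {"Authorization": "Bearer <long-lived-token>"}

def state(entity_id: str) -> float:
    # GET /api/states/<entity_id> is Home Assistant's standard REST endpoint.
    r = requests.get(f"{HA}/states/{entity_id}", headers=HEADERS, timeout=5)
    r.raise_for_status()
    return float(r.json()["state"])

# Hypothetical entity IDs; yours will differ.
solar_w = state("sensor.solar_power")
battery_pct = state("sensor.battery_soc")

# Route the surplus locally: feed the EV charger instead of exporting.
if solar_w > 3000 and battery_pct > 80:
    requests.post(f"{HA}/services/switch/turn_on", headers=HEADERS,
                  json={"entity_id": "switch.ev_charger"}, timeout=5)
```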

Orchestration as the Logic Layer

A mesh only works if you can coordinate the load. Orchestration is the logic layer that makes this possible. With Kubernetes, we can treat electrons like packets, shifting workloads to follow the price of an electron in real time. High-compute tasks, like training AI models, fire only when power is cheap or the sun is at its zenith.
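
One way to wire that up, sketched with the official Kubernetes Python client. The CronJob name, namespace, threshold, and price feed are placeholders:

```python
from kubernetes import client, config

PRICE_CEILING = 0.05  # $/kWh; arbitrary threshold for illustration

def spot_price() -> float:
    # Stub. Wire this to a real tariff feed (ERCOT, Nord Pool, Octopus Agile...).
    return 0.03

def follow_the_price() -> None:
    config.load_kube_config()
    batch = client.BatchV1Api()
    # Suspend the training CronJob when power is expensive, resume when cheap.
    batch.patch_namespaced_cron_job(
        name="model-training",   # hypothetical CronJob
        namespace="ml",
        body={"spec": {"suspend": spot_price() > PRICE_CEILING}},
    )

if __name__ == "__main__":
    follow_the_price()
```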

This vision of a “data centre in a box” isn’t a new fantasy. In the mid-2000s, Sun Microsystems looked at this seriously with Project Blackbox (the Sun Modular Datacenter). They realised then that a standard 20-foot shipping container was the perfect form factor for a portable, rapidly deployable node.

The Sun Modular Datacenter at the Internet Archive

It was ahead of its time: a solution looking for a problem that had not yet reached its breaking point.

Solving the Scaling Crisis

Twenty years later, the problem is here. The era of the gigawatt-scale data centre may be hitting a wall. These monolithic sites create massive grid bottlenecks and thermal nightmares. Smaller, modular data centres at the edge solve this by distributing the burden:

  • Thermal Efficiency: Cooling a 200 kW edge site is a manageable task. Cooling a 500 MW site requires a dedicated water utility.
  • Near-Zero Line Loss: By placing nodes next to solar farms, you consume the electron where it is harvested, bypassing the 5% to 10% typically lost in long-distance transmission and distribution. On a 100 MWh day, that is 5 to 10 MWh that never reaches the rack.
  • Data Federation: With SD-WANs and tools like Ceph, you park compute and data where the energy is cleanest. Distributed storage ensures that if one node falls, the data survives; a sketch of the idea follows this list.
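
Ceph handles replication natively (multisite RADOS Gateway, CRUSH placement rules), but a naive dual write against two S3-compatible RGW endpoints makes the survival property concrete. Endpoints, credentials, bucket, and key below are all invented:

```python
import boto3

# Two hypothetical edge sites, each fronted by a Ceph RADOS Gateway
# (RGW speaks the S3 API, so boto3 works against it unchanged).
SITES = ["http://solar-farm-a:7480", "http://solar-farm-b:7480"]

def s3(endpoint: str):
    return boto3.client(
        "s3",
        endpoint_url=endpoint,
        aws_access_key_id="ACCESS",       # placeholder credentials
        aws_secret_access_key="SECRET",
    )

# Write every object to both sites: lose a node, keep the data.
with open("weights.bin", "rb") as f:
    payload = f.read()
for endpoint in SITES:
    s3(endpoint).put_object(Bucket="models", Key="weights.bin", Body=payload)
```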

Following the Fuel

This isn’t just moving bits; it is shifting load to match the heartbeat of the grid. By routing AI workloads to the edge, we turn a massive energy drain into a load-bearing, responsive node.
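
The scheduling rule itself is almost trivially small; what matters is the telemetry feeding it. A sketch, with site names and carbon-intensity numbers invented:

```python
# Hypothetical telemetry: grid carbon intensity per edge site, in gCO2/kWh.
readings = {"solar-farm-a": 40.0, "wind-site-b": 25.0, "city-colo-c": 310.0}

def greenest(sites: dict[str, float]) -> str:
    """Pick the node whose power is cleanest right now."""
    return min(sites, key=sites.get)

print(f"route the next training batch to {greenest(readings)}")  # wind-site-b
```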

The future of compute should not sit in a central cloud. We must mimic what worked so well for networking: generate power locally and route the work to follow the input “fuel”. The goal is a system that is federated, resilient, and sovereign.