Facebook's fb303 and Prometheus share no direct lineage. Although both implement pull-based metrics collection and superficially resemble each other in architecture, extensive research across creator interviews, GitHub histories, conference talks, and technical documentation reveals no acknowledged influence between the two systems. Prometheus descends exclusively from Google's Borgmon, built by ex-Google SREs who never worked at Facebook. fb303 emerged independently at Facebook three years after Borgmon, solving the same problem with different technology. The deeper story is one of convergent evolution: three of the world's largest engineering organizations independently arrived at nearly identical monitoring architectures, connected only by the flow of engineers between Google and Facebook who carried shared intuitions about infrastructure design.
Prometheus's genealogy is exceptionally well-documented. Matt T. Proud (Google SRE 2006–2012) and Julius Volz (also ex-Google SRE) joined SoundCloud in Berlin in 2012 and began building Prometheus as a side project. Both had direct, hands-on experience with Borgmon, Google's internal monitoring system created around 2003 alongside the Borg cluster manager. The first Prometheus commit landed on GitHub on November 24, 2012.
The creators have been unambiguous about the inspiration. Volz stated in multiple interviews: "Prometheus was inspired by Google's Borgmon monitoring system" and described PromQL as "admittedly similar to the query language of the Borgmon monitoring system at Google where I had just come from." Björn Rabenstein, another ex-Google SRE who joined the Prometheus team in October 2013, coined the definitive framing: "Kubernetes is Borg for mere mortals, while Prometheus is Borgmon for mere mortals."
The design parallels between Borgmon and Prometheus are precise and intentional. Both use HTTP-based pull collection (Borgmon scraped /varz, Prometheus scrapes /metrics). Both employ multi-dimensional label-based data models. Both feed into a time-series database with a rich query language. Both connect to a separate Alertmanager component for notification routing. The Google SRE book's Chapter 10, written by Jamie Wilkinson, explicitly confirms: "Prometheus shares many similarities with Borgmon, especially when you compare the two rule languages."
Critically, no Prometheus creator has ever cited fb303 or Facebook's monitoring infrastructure as an influence—not in conference talks, blog posts, podcasts, documentation, or the 2022 "Inside Prometheus" documentary. The attribution runs entirely to Borgmon.
fb303—named after the Roland TB-303 bass synthesizer ("bass is what lies underneath any strong tune")—was created by Mark Slee at Facebook in 2006 as part of the Thrift framework. It serves as the base class for all Facebook Thrift services, providing a standardized FacebookService interface with methods like getCounters(), getStatus(), and getExportedValues(). Every Facebook service inherits from this base, giving monitoring systems a uniform way to inspect any service's health and performance.
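To make the shape of that interface concrete, here is a rough Go sketch of an fb303-style management surface. The real definition is Thrift IDL, not Go; the names, types, and status values below are illustrative approximations rather than the actual generated API.

```go
// Illustrative approximation of the fb303-style management surface that
// every Thrift service inherits. The real interface is defined in Thrift
// IDL; the method set shown mirrors the calls named above.
package fb303sketch

// Status is a coarse health signal; the specific values are illustrative.
type Status int

const (
	StatusDead Status = iota
	StatusStarting
	StatusAlive
	StatusStopping
	StatusStopped
	StatusWarning
)

// BaseService is what a monitoring agent can assume about any service.
type BaseService interface {
	GetStatus() Status                    // coarse health signal
	GetCounters() map[string]int64        // all registered counters
	GetExportedValues() map[string]string // free-form exported strings
}
```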
The collection model works as follows: services register counters in their fb303 ServiceData singleton, an agent called FBAgent periodically connects via Thrift RPC to call getCounters(), and FBAgent forwards the results to ODS (Operational Data Store), Facebook's time-series monitoring backend. By 2015, ODS stored 2 billion unique time series with 12 million data points ingested per second. Facebook later built Gorilla (published at VLDB 2015) as an in-memory caching layer that reduced query latency by 73x, and open-sourced it as Beringei in 2017.
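A minimal sketch of that collection loop is shown below, with `fetchCounters` and `forwardToTSDB` as hypothetical stand-ins for the generated Thrift client call and the ODS write path; the target addresses and polling interval are arbitrary.

```go
// Minimal sketch of an FBAgent-style collection loop: poll each service's
// getCounters() on an interval and forward the snapshot to a time-series
// backend. fetchCounters and forwardToTSDB are hypothetical stand-ins for
// the real Thrift client and the ODS write path.
package main

import (
	"log"
	"time"
)

func fetchCounters(addr string) (map[string]int64, error) {
	// Stand-in for: open a Thrift connection to addr and call getCounters().
	return map[string]int64{"requests": 42, "errors": 1}, nil
}

func forwardToTSDB(service string, ts time.Time, counters map[string]int64) {
	// Stand-in for: write (service, counter, timestamp, value) rows to ODS.
	for name, value := range counters {
		log.Printf("%s %s{service=%q} %d", ts.Format(time.RFC3339), name, service, value)
	}
}

func main() {
	targets := []string{"web-frontend:9090", "newsfeed:9090"}
	for range time.Tick(60 * time.Second) {
		for _, addr := range targets {
			counters, err := fetchCounters(addr)
			if err != nil {
				continue // a failed poll is itself a health signal
			}
			forwardToTSDB(addr, time.Now(), counters)
		}
	}
}
```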
The timeline confirms independent development. Borgmon existed at Google by ~2003. fb303 appeared at Facebook in ~2006. Prometheus arrived at SoundCloud in 2012. There is no evidence that Facebook's monitoring approach influenced Google's Borgmon (which predates fb303 by three years), or that Borgmon influenced fb303. These systems addressed identical problems at companies operating at similar scale, which explains their architectural convergence.
A thorough search found zero overlap in engineering personnel between fb303 and Prometheus. The core Prometheus team—Proud, Volz, Rabenstein, and Brian Brazil (another ex-Google SRE of seven years)—all came from Google. Mark Slee, fb303's creator, stayed in the Facebook ecosystem. No Prometheus contributor appears to have Facebook experience, and no fb303 contributor appears in Prometheus's history.
The one genuine personnel bridge between Facebook and Google's infrastructure worlds exists at the serialization layer. Thrift was created at Facebook by engineers who, according to multiple sources, had prior knowledge of Google's Protocol Buffers (which predates Thrift by approximately five years internally at Google). As one technical analysis noted: "Thrift was originally written by some ex-Googlers who left for Facebook, so naturally both systems have a lot in common." This shared DNA shows in the remarkable similarity between Thrift and Protocol Buffers: both use integer field tags, IDL-based code generation, and similar binary encoding approaches.
The most striking finding is how fb303, Borgmon, and Prometheus converged on essentially the same architecture without direct cross-pollination:
| Feature | fb303 (~2006) | Borgmon (~2003) | Prometheus (2012) |
|---|---|---|---|
| Collection model | Pull via Thrift RPC | Pull via HTTP /varz | Pull via HTTP /metrics |
| Data format | Binary `map<string, i64>` | Plain text `key=value` | Text or protobuf |
| Dimensionality | Flat keys (dimensions encoded in name) | Labels added later | First-class labels from start |
| Query language | None (ODS has its own) | Borgmon rule language | PromQL |
| Service discovery | Internal registry | Borg Name Service | Kubernetes, Consul, DNS, etc. |
All three implement the same fundamental pattern: services passively expose their internal state through a standardized interface, and a central system actively polls them. This pull model has roots stretching back to SNMP in 1988 and was well-established in network management long before any of these systems existed. The convergence likely reflects the inherent advantages of pull-based collection in large service-oriented architectures: the monitoring system controls collection rate, failed scrapes double as health signals, and services need no knowledge of the monitoring topology.
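As a concrete illustration of the "expose passively, poll centrally" pattern, here is a minimal instrumentation sketch using the Prometheus Go client (prometheus/client_golang). The metric name, labels, handler, and port are arbitrary choices for the example, not anything prescribed by the systems discussed above.

```go
// Minimal example of the "passively expose" half of the pull pattern,
// using the Prometheus Go client. The service only registers a counter
// and serves /metrics; it knows nothing about who scrapes it or how often.
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var requestsTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "http_requests_total",
		Help: "Total HTTP requests handled, by method and status.",
	},
	[]string{"method", "status"},
)

func main() {
	prometheus.MustRegister(requestsTotal)

	http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
		requestsTotal.WithLabelValues(r.Method, "200").Inc()
		w.Write([]byte("hello\n"))
	})

	// The Prometheus server (or anything else) pulls current state from here.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```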
The key architectural divergence is transport. fb303 chose Thrift RPC—a binary protocol requiring a generated client—while Borgmon and Prometheus chose plain HTTP, making metrics human-readable and debuggable with curl. Matt Proud explicitly evaluated and rejected Thrift for Prometheus, noting "numerous problems" including specification inconsistencies in enum wire encoding across language implementations. He chose Protocol Buffers instead, driven by his Google experience and protobuf's superior cross-language fidelity.
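The practical payoff of plain HTTP plus a text format is that an endpoint can be inspected with nothing but curl. Against a service instrumented like the sketch above, the output would look roughly like this (values illustrative):

```text
$ curl -s localhost:8080/metrics | grep http_requests_total
# HELP http_requests_total Total HTTP requests handled, by method and status.
# TYPE http_requests_total counter
http_requests_total{method="GET",status="200"} 1027
```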
While fb303 and Prometheus share no direct monitoring lineage, they are connected through the Thrift–Protocol Buffers genealogy. Protocol Buffers was developed at Google around 2001 and open-sourced in July 2008. Thrift was developed at Facebook in 2006 by engineers with Google backgrounds and open-sourced in April 2007—actually beating protobuf to open source by over a year.
fb303 is built entirely on Thrift. Prometheus originally used Protocol Buffers for its exposition format (io.prometheus.client.MetricFamily), deprecated protobuf in version 2.0 (2017) in favor of text, then revived it in version 2.40+ (2022) for native histograms. gRPC, Google's open-source successor to the internal Stubby RPC system, uses protobuf and competes directly with Thrift as an RPC framework—but Prometheus chose raw HTTP over gRPC because gRPC was "not widely adopted" and "challenging to expose behind load balancers" at the time of its Remote Write protocol design.
The serialization connection is real but indirect: the flow of engineers from Google to Facebook that produced Thrift (and thus enabled fb303) mirrors the later flow from Google to SoundCloud that produced Prometheus. Google's infrastructure DNA dispersed through its alumni into multiple companies, producing architecturally similar but independently implemented systems.
The fb303-Prometheus relationship is a case study in convergent evolution driven by shared engineering culture rather than direct technical inheritance. Three facts define the relationship: Prometheus descends from Borgmon through direct knowledge transfer by ex-Google SREs, with no acknowledged connection to fb303. fb303 predates Prometheus by six years but postdates Borgmon by three, with no evidence of cross-pollination in either direction. The deeper link is sociological—Google's engineering culture, carried by alumni to Facebook and SoundCloud alike, independently produced pull-based white-box monitoring systems that look remarkably alike despite having no shared code, engineers, or design documents. The true shared ancestor is not a system but a set of ideas about how large-scale infrastructure should be observed, ideas that proved so natural that three world-class engineering teams converged on them independently.