In the world of microservices and distributed systems, how our internal components talk to each other is a huge factor in overall performance. We often rely on standards like REST over HTTP or tools like gRPC. These are fantastic for many things, especially external APIs and broad interoperability. But what happens when you're deep inside your own infrastructure, connecting services you control, and every bit of latency or overhead feels like friction you wish wasn't there?
This is a common challenge, and it led me to develop my own internal solution: xSMB (Excerion Sun Messaging Backbone). It's a high-performance messaging library, written in Rust, born out of the need to push the boundaries of speed for my specific internal service-to-service communication needs. Think of it as a specialized tool built for when standard protocols, while capable, just aren't quite fast enough for the job at hand.
Why Not Just Stick with gRPC or HTTP? The Performance Angle
gRPC, built on HTTP/2 and Protobuf, is already a significant improvement over traditional REST/JSON. But even it carries the inherent structure and negotiation overhead of the HTTP stack. For internal communication where we own both ends and can agree on message formats, we saw an opportunity to streamline things further.
xSMB takes a more direct route, prioritizing raw throughput and low latency by leveraging two core technologies:
ZeroMQ (zmq): We use ZeroMQ as the transport layer. It provides highly efficient, asynchronous "smart sockets" over raw TCP. It expertly handles message queuing, connection patterns (like request-reply or pub-sub), and eliminates much of the handshaking and framing overhead found in higher-level protocols. It's a proven performer in demanding environments.
MsgPack (rmp-serde): For serialization, we opted for MsgPack. It's a binary format that's significantly more compact and faster to encode/decode than JSON, and comparable in size to Protobuf for typical data structures, especially when paired tightly with Rust's serde framework (a quick size comparison against JSON follows below). Less data means faster network transfer and less CPU spent on (de)serialization.
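To make that compactness claim concrete, here's a minimal sketch (not xSMB code) that encodes the same struct with rmp-serde and with serde_json and compares the byte counts. The Sample type and its values are invented purely for illustration.

```rust
use serde::Serialize;

// Hypothetical payload used only to compare encoded sizes.
#[derive(Serialize)]
struct Sample {
    id: u64,
    name: String,
    values: Vec<f64>,
}

fn main() {
    let sample = Sample {
        id: 42,
        name: "sensor-7".into(),
        values: vec![1.5, 2.25, 3.0],
    };

    // Same struct, two encodings: MsgPack produces a compact binary blob,
    // while JSON repeats field names and encodes numbers as text.
    let packed = rmp_serde::to_vec(&sample).unwrap();
    let json = serde_json::to_vec(&sample).unwrap();
    println!("msgpack: {} bytes, json: {} bytes", packed.len(), json.len());
}
```

With rmp-serde's default compact struct encoding (fields as a positional array, no names on the wire), the MsgPack blob typically comes out noticeably smaller than the JSON equivalent.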
By combining these, xSMB effectively creates a lean path for sending strongly-typed Rust data structures as compact binary blobs over efficient ZMQ sockets. Fewer layers, less interpretation, lower overhead.
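Here's a minimal sketch of that lean path, assuming the zmq and rmp-serde crates: a typed struct round-trips as a compact MsgPack blob over a ZMQ REQ/REP pair. The Ping type and the endpoint are invented for the example, and this shows the raw synchronous mechanism xSMB builds on, not its actual async API.

```rust
use serde::{Deserialize, Serialize};

// Hypothetical message type for illustration; xSMB's real message
// definitions and its XSMBMsg trait are not shown here.
#[derive(Serialize, Deserialize, Debug)]
struct Ping {
    seq: u64,
    note: String,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let ctx = zmq::Context::new();

    // "Server": a REP socket that echoes back whatever blob it receives.
    let rep = ctx.socket(zmq::REP)?;
    rep.bind("tcp://127.0.0.1:5555")?;
    std::thread::spawn(move || {
        let bytes = rep.recv_bytes(0).expect("recv");
        rep.send(bytes, 0).expect("send");
    });

    // "Client": a REQ socket sending the typed struct as a MsgPack blob.
    let req = ctx.socket(zmq::REQ)?;
    req.connect("tcp://127.0.0.1:5555")?;
    let msg = Ping { seq: 1, note: "hello".into() };
    req.send(rmp_serde::to_vec(&msg)?, 0)?;

    // Decode the reply straight back into the typed struct.
    let reply: Ping = rmp_serde::from_slice(&req.recv_bytes(0)?)?;
    println!("round-tripped: {:?}", reply);
    Ok(())
}
```

No HTTP negotiation, no headers, no text parsing: just a framed binary message each way.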
What Patterns Does It Offer?
While lean, xSMB isn't just about raw sockets. It provides familiar messaging patterns, optimized for this ZMQ/MsgPack foundation:
Optimized RPC:
Unary: Fast, asynchronous request/reply for quick interactions.
Server-to-Client Streaming (S2C): Efficiently pushing sequences of messages from a server to a client (e.g., data feeds, large result sets).
Client-to-Server Streaming (C2S): Allowing a client to stream potentially large amounts of data to a server after an initial handshake/authorization (e.g., log uploads, event batches).
Optimized Pub/Sub: Decoupled event distribution using ZMQ's native pub-sub capabilities (a raw ZMQ sketch of this pattern follows this list).
Async & Typed: Built entirely around tokio for non-blocking operations and uses Rust's type system (serde + a simple XSMBMsg trait) for compile-time safety and automatic serialization.
Configurable: Offers controls for tuning aspects like worker threads, queue depths, and message bursting behavior.
RPC Niceties: Supports propagating XSMBMetaData (for things like tracing context), request deadlines, stream control signals (like cancellation), and hooks for intercepting messages (Listeners).
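For a sense of what the optimized Pub/Sub wraps, here's a minimal sketch of ZMQ's native pub-sub pattern carrying a MsgPack-encoded payload. The Event type, topic name, and endpoint are invented for the example; xSMB layers its typed, tokio-based API on top of this kind of socket pairing.

```rust
use serde::{Deserialize, Serialize};

// Hypothetical event type used only for this illustration.
#[derive(Serialize, Deserialize, Debug)]
struct Event {
    kind: String,
    value: f64,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let ctx = zmq::Context::new();

    // Publisher: binds and pushes topic-prefixed, MsgPack-encoded events.
    let publisher = ctx.socket(zmq::PUB)?;
    publisher.bind("tcp://127.0.0.1:5556")?;

    // Subscriber: connects and filters on the "metrics" topic prefix.
    let subscriber = ctx.socket(zmq::SUB)?;
    subscriber.connect("tcp://127.0.0.1:5556")?;
    subscriber.set_subscribe(b"metrics")?;

    // Demo-only workaround for ZMQ's slow-joiner behaviour: give the
    // subscription a moment to propagate before publishing.
    std::thread::sleep(std::time::Duration::from_millis(100));

    let event = Event { kind: "cpu".into(), value: 0.73 };
    publisher.send_multipart([b"metrics".to_vec(), rmp_serde::to_vec(&event)?], 0)?;

    // Receive [topic, payload] and decode the payload into the typed event.
    let parts = subscriber.recv_multipart(0)?;
    let decoded: Event = rmp_serde::from_slice(&parts[1])?;
    println!("got {:?} on topic {}", decoded, String::from_utf8_lossy(&parts[0]));
    Ok(())
}
```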
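And to show what the "typed" half buys at compile time, here's a hypothetical sketch of a message type. The post only names a simple XSMBMsg trait, so this models it as a marker trait over serde types; the real trait in xSMB may look different.

```rust
use serde::{de::DeserializeOwned, Deserialize, Serialize};

// Assumption: XSMBMsg is modelled here as a plain marker trait over serde
// types; xSMB's actual trait definition is not shown in this post.
trait XSMBMsg: Serialize + DeserializeOwned + Send + 'static {}

// A hypothetical message: derive serde, opt in to the marker trait, and the
// compiler guarantees anything handed to the library can be (de)serialized.
#[derive(Serialize, Deserialize, Debug)]
struct OrderPlaced {
    order_id: u64,
    amount_cents: i64,
}

impl XSMBMsg for OrderPlaced {}

// Generic helpers can then bound on the trait for compile-time safety.
fn encode<M: XSMBMsg>(msg: &M) -> Result<Vec<u8>, rmp_serde::encode::Error> {
    rmp_serde::to_vec(msg)
}

fn main() {
    let bytes = encode(&OrderPlaced { order_id: 7, amount_cents: 1299 }).unwrap();
    println!("{} bytes on the wire", bytes.len());
}
```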
The Trade-Offs: Speed vs. Standardization
It's important to understand where xSMB fits. It consciously trades the broad interoperability and standardization of gRPC/HTTP for maximum performance in a controlled internal environment.
You gain significant speed and reduce overhead.
You lose the easy out-of-the-box interoperability with diverse languages and standard HTTP tooling (browsers, generic load balancers, and the like).
For internal Rust services talking to each other in performance-critical loops, this trade-off makes a lot of sense. For services needing broader reach or standard integration, gRPC or REST remain the go-to choices.
In Essence
xSMB represents an engineering decision to optimize heavily for a specific context – high-throughput, low-latency internal communication, primarily between Rust, Java, and Node.js services. By leveraging the strengths of ZeroMQ and MsgPack and cutting out intermediate layers, it provides a valuable performance option within my internal toolkit. It's a fun example of how tailoring communication protocols to specific needs can yield significant performance benefits.