System Design Interview: Stop Using Kafka for Everything (And What to Say Instead)
Stop overengineering your architecture diagrams. Discover why technical interviewers prefer simple task queues over heavy distributed systems.
Modern software architecture faces a recurring challenge: handling heavy computational loads without stalling everything else.
When an application receives thousands of network requests at once, the primary servers can grind to a halt.
That happens because the request-handling threads sit blocked, waiting for slow background computations to finish. To prevent a full outage, engineers must separate the fast incoming requests from the slow background processing.
They achieve this separation by placing an intermediary software layer between the web servers and the processing servers. Choosing the specific technology for that layer is a critical architectural decision.
Many candidates reflexively reach for a heavyweight event streaming platform like Kafka during technical evaluations. They assume that proposing the most complex distributed system will impress the interview panel.
In practice, defaulting to heavy infrastructure often signals a lack of practical engineering maturity.
Understanding exactly when a simple message broker is technically superior to a full streaming platform is a crucial skill. Mastering this trade-off is essential for passing senior system design evaluations.
The Core Problem of Synchronous Communication
In modern software architectures, an application is split into dozens of small, independent backend services. These services must communicate constantly to fulfill a single user network request.
Traditionally, these discrete components communicated synchronously over the network.
That means one server sends a request to another and then blocks, doing no other work until a reply arrives.
Synchronous communication works perfectly fine for simple database lookups or fast internal calculations. However, it fails drastically when a system handles slow workloads or unpredictable network latency.
If a primary web server waits for a background server to process a massive file, the primary server cannot handle new traffic. Incoming user requests pile up, time out, and start to drop.
This tight dependency creates a dangerous cascade of failures across the entire server architecture.
To resolve this severe bottleneck, engineers use asynchronous communication.
In an asynchronous design, the main server hands off the raw data payload and immediately moves on to its next task.
The main server never waits for the heavy computation to finish.
To make this asynchronous handoff work safely, the architecture requires a highly reliable software middleman.
This middleman sits securely between the fast web servers and the slow processing servers. It holds the pending data safely in system memory until a background server has enough computing resources to process it.
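To make the idea concrete, here is a minimal in-process sketch of that handoff using Python's standard library. A real system would use a dedicated broker process; the queue, the handler, and the payload below are all illustrative stand-ins.

```python
import threading
import time
from queue import Queue

jobs = Queue()  # stands in for the broker's queue

def handle_request(payload):
    """Fast path: hand off the heavy work and return immediately."""
    jobs.put(payload)
    return "202 Accepted"  # the caller is never kept waiting

def worker():
    """Slow path: a background consumer drains the queue at its own pace."""
    while True:
        payload = jobs.get()
        time.sleep(0.01)  # placeholder for the heavy computation
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

print(handle_request({"video_id": 42}))  # prints "202 Accepted" instantly
jobs.join()  # block here only so the demo exits after the work finishes
```

The key property is that `handle_request` returns before the computation runs; the worker catches up whenever it has capacity.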
What is a Simple Message Broker?
A message broker is a lightweight software component dedicated to routing temporary data between different backend systems. It acts as a highly efficient middleman for safe task delegation.
The software service generating the raw data is called the producer.
The background software service doing the actual heavy computation is called the consumer.
The producer creates a structured packet of data called a message.
The producer sends this message directly to the broker over the internal network. The broker receives the data and places it into a specific memory structure known as a queue.
A standard queue operates first in, first out (FIFO): the oldest message is always delivered first.
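That FIFO ordering is easy to demonstrate with Python's standard `queue.Queue` (the message names here are made up for illustration):

```python
from queue import Queue

q = Queue()
for msg in ["order-1", "order-2", "order-3"]:
    q.put(msg)  # the producer enqueues messages in arrival order

# The consumer dequeues them oldest-first, in the same order.
assert [q.get() for _ in range(3)] == ["order-1", "order-2", "order-3"]
```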
Message brokers generally operate on a push-based software architecture.
The broker monitors its connected consumer applications to see which ones are currently idle. It then pushes the next message directly to an idle consumer.
This entire routing process happens entirely within the fast system memory of the broker.
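A rough sketch of that push-based dispatch, with a hypothetical idle-consumer list standing in for the broker's bookkeeping (real brokers track this over network connections; every name here is invented):

```python
from queue import Queue

messages = Queue()
for m in ["resize-a.jpg", "resize-b.jpg", "resize-c.jpg"]:
    messages.put(m)

# The broker's view of which consumers are currently idle.
idle_consumers = ["consumer-1", "consumer-2"]
assignments = []

while not messages.empty() and idle_consumers:
    consumer = idle_consumers.pop(0)                 # pick an idle consumer
    assignments.append((consumer, messages.get()))   # push the next message to it

# consumer-1 gets the first message, consumer-2 the second;
# the third waits in the queue until a consumer becomes idle again.
print(assignments)
```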
The Destructive Read and Acknowledgments
The absolute defining characteristic of a simple message broker is its strict data deletion policy. When a consumer finishes processing a message successfully, it must send a digital signal back to the broker.
This highly specific network signal is called an acknowledgment.
Once the broker receives this acknowledgment, it permanently deletes the message from its local memory.
This immediate deletion mechanism is called a destructive read. It makes message brokers incredibly fast and hardware-efficient.
The broker does not need to store massive amounts of historical data on expensive hard drives. Because it constantly clears out completed tasks, it requires very little server infrastructure to operate at lightning speed.
If a consumer application crashes while processing a message, it fails to send the acknowledgment signal.
The broker notices this missing signal and automatically places the message back into the queue safely.
If a message fails processing repeatedly, the broker moves it to a safe isolation area called a dead letter queue.
This built-in error handling makes traditional brokers remarkably resilient.
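The whole acknowledgment cycle can be sketched in a few lines of Python. The consumer below always "crashes" so the redelivery and dead-letter logic kicks in; the message name, retry limit, and exception are all invented for the example:

```python
from collections import deque

MAX_ATTEMPTS = 3
queue = deque([("msg-1", 0)])  # pairs of (message, delivery attempts so far)
dead_letter_queue = []

def consume(message):
    """Stand-in consumer: raises to simulate a crash before sending an ack."""
    raise RuntimeError("consumer crashed")

def deliver():
    while queue:
        message, attempts = queue.popleft()
        try:
            consume(message)
            # Ack received: destructive read, the message is gone for good.
        except RuntimeError:
            if attempts + 1 >= MAX_ATTEMPTS:
                dead_letter_queue.append(message)      # isolate the poison message
            else:
                queue.append((message, attempts + 1))  # no ack, so redeliver later

deliver()
print(dead_letter_queue)  # after 3 failed attempts, msg-1 lands in the DLQ
```

On a successful `consume`, the message would simply be dropped after the loop iteration, which is the destructive read described above.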
Subscribe to System Design Nuggets to keep reading this post and get 7 days of free access to the full post archives.


