9 open source tools for building a fault-tolerant system

Maximize uptime and minimize problems with these open source tools.

I've always been interested in web development and software architecture because I like to see the broader picture of a working system. Whether you are building a mobile app or a web application, it has to be connected to the internet to exchange data among different modules, which means you need a web service.

If you use a cloud system as your application's backend, you can take advantage of greater computing power, as the backend service will scale horizontally and vertically and orchestrate different services. But whether or not you use a cloud backend, it's important to build a fault-tolerant system—one that is resilient, stable, fast, and safe.

To understand fault-tolerant systems, let's use Facebook, Amazon, Google, and Netflix as examples. Millions of users access these platforms simultaneously while transmitting enormous amounts of data via peer-to-peer and user-to-server connections, and you can be sure there are also malicious users with bad intentions, like hacking or denial-of-service (DoS) attacks. Even so, these platforms operate 24 hours a day, 365 days a year, without downtime.

Although machine learning and smart algorithms are the backbone of these systems, the fact that they achieve consistent service without a single minute of downtime is praiseworthy. Their expensive hardware and gigantic datacenters certainly matter, but the elegant software designs supporting the services are equally important. And fault tolerance is one of the principles behind such elegant designs.

Two behaviors that cause problems in production

Here's another way to think of a fault-tolerant system. When you run your application service locally, everything seems to be fine. Great! But when you promote your service to the production environment, all hell breaks loose. In a situation like this, a fault-tolerant system helps by addressing two problems: fail-stop behavior and Byzantine behavior.

Fail-stop behavior

Fail-stop behavior is when a running system suddenly halts or a few parts of the system fail. Server downtime and database inaccessibility fall under this category. For example, in the diagram below, Service 1 can't communicate with Service 2 because Service 2 is inaccessible:

Fail-stop behavior due to Service 2 downtime


But the problem can also occur if there is a network problem between the services, like this:

Fail-stop behavior due to network failure

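One common defense against fail-stop behavior is to bound how long the caller waits and retry a few times before giving up. Here is a minimal, illustrative Python sketch; `call_service` is a hypothetical stand-in for any remote call that raises an exception on timeout or connection failure:

```python
import time

def call_with_retry(call_service, retries=3, delay=0.1):
    """Retry a remote call a few times, treating any exception
    (timeout, connection refused) as a fail-stop symptom."""
    last_error = None
    for attempt in range(retries):
        try:
            return call_service()
        except Exception as err:  # hypothetical transport error
            last_error = err
            time.sleep(delay * (attempt + 1))  # simple linear backoff
    raise RuntimeError("service unavailable") from last_error
```

Retries only mask transient fail-stop faults; if the downstream service stays down, the caller still needs a fallback strategy, which is where the circuit-breaker pattern below comes in.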

Byzantine behavior

Byzantine behavior is when the system continuously runs but doesn't produce the expected behavior (e.g., wrong data or an invalid value).

Byzantine failure can happen if Service 2 has corrupted data or values, even though the service looks to be operating just fine, like in this example:

Byzantine failure due to corrupted service


Or, there can be a malicious middleman intercepting between the services and injecting unwanted data:

Byzantine failure due to malicious middleman

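A classic way to mask Byzantine behavior is to query several replicas and accept a value only if a majority agrees, so a minority of corrupted or malicious responders cannot poison the result. Here is a minimal, illustrative Python sketch (real Byzantine fault-tolerant protocols such as PBFT are far more involved):

```python
from collections import Counter

def majority_vote(replies):
    """Return the value reported by a strict majority of replicas,
    masking a minority of Byzantine (wrong-value) responders."""
    value, count = Counter(replies).most_common(1)[0]
    if count <= len(replies) // 2:
        raise ValueError("no majority: too many conflicting replies")
    return value
```

With replies like `["42", "42", "7"]`, one corrupted replica is outvoted; if no value wins a majority, the caller knows it cannot trust any of them.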

Neither fail-stop nor Byzantine behavior is a desirable situation, so we need ways to prevent or fix them. That's where fault-tolerant systems come into play. Following are nine open source tools that can help you address these problems.

Tools for building a fault-tolerant system

Although building a truly fault-tolerant system touches on in-depth distributed computing theory and complex computer science principles, there are many software tools, many of them open source like the following, that can help you mitigate these undesirable outcomes.

Circuit-breaker pattern: Hystrix and Resilience4j

The circuit-breaker pattern is a technique that stops calling a failing service and instead returns a prepared dummy or fallback response:

Circuit breaker pattern

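The core of the pattern is a small state machine: after a run of failures the circuit "opens" and callers get the fallback immediately, and after a cooldown the circuit lets one call through to probe whether the service has recovered. The following is a minimal, illustrative Python sketch (not how Hystrix or Resilience4j actually implement it); `call` and `fallback` are hypothetical stand-ins for a remote service call and its dummy response:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after `max_failures` consecutive
    failures the circuit opens, and calls return the fallback
    until `reset_timeout` seconds have passed."""
    def __init__(self, call, fallback, max_failures=3, reset_timeout=30.0):
        self.call = call
        self.fallback = fallback
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None

    def __call__(self, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                return self.fallback(*args)   # open: fail fast
            self.opened_at = None             # half-open: probe the service
            self.failures = 0
        try:
            result = self.call(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            return self.fallback(*args)
        self.failures = 0                     # success resets the count
        return result
```

The key benefit is that while the circuit is open, the failing service gets no traffic at all, which gives it room to recover instead of being hammered by retries.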

Netflix's open source Hystrix is the most popular implementation of the circuit-breaker pattern.

Many companies where I've worked previously leverage this wonderful tool. Surprisingly, Netflix announced that it will no longer update Hystrix. (Yeah, I know.) Instead, Netflix recommends using an alternative solution like Resilience4j, which supports Java 8 and functional programming, or an alternative practice like Adaptive Concurrency Limits.

Load balancing: Nginx and HAProxy

Load balancing is one of the most fundamental concepts in a distributed system and must be present to have a production-quality environment. To understand load balancers, we first need to understand the concept of redundancy. Every production-quality web service has multiple servers that provide redundancy to take over and maintain services when servers go down.

Load balancer

Think about modern airplanes: their dual engines provide redundancy that allows them to land safely even if an engine catches fire. (It also helps that most commercial airplanes have state-of-the-art, automated systems.) But having multiple engines (or servers) means there must be some kind of scheduling mechanism to route requests effectively when something fails.

A load balancer is a device or software that optimizes heavy traffic transactions by balancing multiple server nodes. For instance, when thousands of requests come in, the load balancer acts as the middle layer to route and evenly distribute traffic across different servers. If a server goes down, the load balancer forwards requests to the other servers that are running well.
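The routing behavior described above can be sketched as a round-robin pool that skips servers marked unhealthy. This is an illustrative toy in Python, not how Nginx or HAProxy are implemented; the server names are hypothetical:

```python
import itertools

class LoadBalancer:
    """Round-robin across a server pool, skipping unhealthy nodes,
    like a simplified Nginx/HAProxy upstream."""
    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(self.servers)
        self._cycle = itertools.cycle(self.servers)

    def mark_down(self, server):
        """Health check failed: stop routing to this server."""
        self.healthy.discard(server)

    def mark_up(self, server):
        """Server recovered: resume routing to it."""
        self.healthy.add(server)

    def next_server(self):
        """Return the next healthy server in round-robin order."""
        for _ in range(len(self.servers)):
            server = next(self._cycle)
            if server in self.healthy:
                return server
        raise RuntimeError("no healthy servers")
```

Real load balancers add active health checks, weighting, and connection draining on top of this basic idea, but the core is the same: keep routing around the dead node.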

There are many load balancers available, but the two best-known ones are Nginx and HAProxy.

Nginx is more than a load balancer. It is an HTTP and reverse proxy server, a mail proxy server, and a generic TCP/UDP proxy server. Companies like Groupon, Capital One, Adobe, and NASA use it.

HAProxy is also popular, as it is a free, very fast, and reliable solution offering high availability, load balancing, and proxying for TCP- and HTTP-based applications. Many large internet companies, including GitHub, Reddit, Twitter, and Stack Overflow, use HAProxy. Oh, and yes, Red Hat Enterprise Linux also supports HAProxy configuration.

Actor model: Akka

The actor model is a concurrency design pattern that delegates responsibility when an actor, the primitive unit of computation, receives a message. An actor can create more actors and delegate messages to them.
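The idea can be illustrated with a toy actor in Python: each actor owns a mailbox and a single processing thread, so it handles one message at a time and its internal state needs no locks. (Akka's real actors are far richer; this is only a sketch, and the `handler` callback is a hypothetical stand-in for an actor's behavior.)

```python
import queue
import threading

class Actor:
    """A minimal actor: one mailbox, one worker thread, and a
    handler that processes messages strictly one at a time."""
    def __init__(self, handler):
        self.mailbox = queue.Queue()
        self.handler = handler
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, message):
        """Asynchronously deliver a message to the actor's mailbox."""
        self.mailbox.put(message)

    def _run(self):
        while True:
            message = self.mailbox.get()
            if message is None:  # poison pill stops the actor
                break
            self.handler(message)
```

Because all communication goes through `send`, a crashed or restarted actor can be replaced without touching its callers, which is what makes the model attractive for fault tolerance.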

Akka is one of the most well-known implementations of the actor model. The framework supports Java and Scala, both of which run on the JVM.

Asynchronous, non-blocking I/O using a messaging queue: Kafka and RabbitMQ

Multithreaded development was popular in the past, but this practice has since been discouraged in favor of asynchronous, non-blocking I/O patterns. For Java, this is explicitly stated in the Enterprise JavaBeans (EJB) specification:

"An enterprise bean must not use thread synchronization primitives to synchronize execution of multiple instances.

"The enterprise bean must not attempt to manage threads. The enterprise bean must not attempt to start, stop, suspend, or resume a thread, or to change a thread's priority or name. The enterprise bean must not attempt to manage thread groups."

Now there are other practices, like stream APIs and actor models. But messaging queues like Kafka and RabbitMQ offer out-of-the-box support for asynchronous, non-blocking I/O, and they are powerful open source tools that can replace thread management for handling concurrent processes.
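The decoupling a message queue provides can be illustrated with Python's standard-library `queue` standing in for a broker like Kafka or RabbitMQ: the producer enqueues messages without waiting for the consumer, which drains them asynchronously on its own thread. This is an illustrative sketch, not client code for either tool:

```python
import queue
import threading

def run_pipeline(messages):
    """Producer puts messages on a queue; a consumer thread drains
    them asynchronously, standing in for a message broker."""
    broker = queue.Queue()
    processed = []

    def consumer():
        while True:
            msg = broker.get()
            if msg is None:          # sentinel: no more messages
                break
            processed.append(msg.upper())

    worker = threading.Thread(target=consumer)
    worker.start()
    for msg in messages:             # producer never blocks on the consumer
        broker.put(msg)
    broker.put(None)
    worker.join()
    return processed
```

With a real broker, the queue also persists messages, so a crashed consumer can restart and pick up where it left off, which is exactly the fault-tolerance property threads alone don't give you.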

Other options: Eureka and Chaos Monkey

Other useful tools for fault-tolerant systems include service-discovery tools, such as Netflix's Eureka, and chaos-engineering tools, like Chaos Monkey. They aim to discover issues early by testing in lower environments, like integration (INT), quality assurance (QA), and user acceptance testing (UAT), to catch problems before they reach production.


What open source tools are you using for building a fault-tolerant system? Please share your favorites in the comments.

Bryant Jimin Son is an Octocat, which is not an official title, but he likes to be called that, at GitHub, a company widely known for hosting most of the world's open source projects. At work, he is exploring different Git technologies, GitHub Actions, GitHub security, etc. Previously, he was a Senior Consultant at Red Hat, a technology company known for its Linux server and open source contributions.

6 Comments

What happens when the load balancer fails? This box looks like a single point of failure. Linux-HA, or quorum-based designs, might be a solution, but none of the tools listed seem to provide this.

If the data flows through one single point, then a single failure can cause a systemic failure. Increasing availability requires using some form of redundancy for everything: data, servers, network equipment, cabling, power, cooling systems, etc.

As a side note, distributed system design by itself increases failure rates. This is because the global MTBF decreases with the number of physical machines: if a machine fails once a year, having 365 machines causes an average of one failure a day. Fault tolerance is based on redundancy, which is a slightly different concept than distribution. Even if many distributed systems do include redundancy in their design, fault tolerance is not intrinsic to distributed design patterns, and one should not assume it is present by default.

That is an excellent point, Pascal. It would be great if you could contribute an article about it! I will also look for a way to extend this topic. Thank you.

In reply to pascal martin (not verified)

Hi Pascal, indeed, just by introducing multiple solutions/hardware you are increasing the likelihood of failure. But, as with any solution, the new component should be carefully analyzed and weighed before actually going along with it. In the above example, yes, introducing a load balancer could introduce a single point of failure, but as any engineer should do, it should also be configured in HA mode (VRRP, VIP, etc.). Also, more hardware does not necessarily give more redundancy. Multiple members in a cluster, distributed on different hardware, does however :)

In reply to pascal martin (not verified)

I agree. The pfsense software, for example, has such capability. EMS tools can support redundancy as well (e.g. qpid).

It seems that the article views the term "fault tolerance" more in the context of software quality: design for scale, prefer EMS over threads, test well, and monitor constantly.

This is all very good advice, but fault tolerance is not about avoiding faults as much as it is about keeping the system functioning, and the data safe, when a fault (hardware or software) eventually happens.

In reply to MariusP

One solution we use is to employ two servers running HAProxy and keepalived. Works pretty well.

In reply to pascal martin (not verified)

Creative Commons LicenseThis work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.