NGINX: An open source platform for high-performance web architectures

NGINX (pronounced "engine x") is an open source, high-performance HTTP server and reverse proxy server.
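To make the reverse proxy role concrete, here is a minimal sketch of an nginx.conf fragment that forwards requests to an application server. The backend address 127.0.0.1:8080 and the server name are placeholders, not anything prescribed by the project:

    http {
        server {
            listen 80;
            server_name example.com;

            location / {
                # Hand each incoming request off to a backend application server
                proxy_pass http://127.0.0.1:8080;

                # Pass along the original host and client address
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }
    }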

Since its public launch in 2004, NGINX has focused on high performance, high concurrency, and low memory usage. In 2011, NGINX, Inc. was formed to help develop and maintain the open source distribution, and to provide commercial subscriptions and services. In this article, I'll provide an introduction to NGINX Open Source and NGINX Plus, and tell you how to get involved with the community.

In an interview with Opensource.com earlier this year, NGINX community leader Sarah Novotny said, "In the web performance community, knowing about NGINX became practically a secret handshake—NGINX is the secret heart of the modern web." She also explained that in addition to technical benefits, the project has a growing international community.

So what's different about NGINX? NGINX uses a scalable, event-driven architecture instead of the more traditional process-driven architecture. This results in a lower memory footprint and makes memory usage more predictable as the number of concurrent connections scales up.

In a traditional web server architecture, each client connection is handled as a separate process or thread, and as the popularity of a website grows and the number of concurrent connections increases, the web server slows down, delaying responses to users.

From a technical standpoint, spawning a separate process/thread requires switching the CPU to a new task and creating a new runtime context, which consumes additional memory and CPU time, negatively impacting performance.

NGINX was developed with the goals of achieving 10 times the performance of traditional process-per-connection servers and optimizing the use of server resources, while also being able to scale and support the dynamic growth of a website. As a result, NGINX became one of the best-known modular, event-driven, asynchronous, single-threaded web servers and web proxies.

Why use NGINX?

These days, applications rule the world. They aren't just tools that run people's workplaces—they now run people's lives. Demand for immediate response, flawless behavior, and ever more features is unprecedented. And, of course, people expect applications to work equally well across different types of devices, especially on mobile. How fast an application performs is just as important as what it does.

NGINX's core features, including a massively scalable event-driven architecture, high-performance HTTP and reverse proxy serving, access and bandwidth control, and efficient integration with a variety of applications, have helped make it a platform of choice for websites and services that require performance, scalability, and reliability.
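Access and bandwidth control, for instance, are expressed through a handful of configuration directives. The following is a rough sketch rather than a production recipe; the zone name, rates, network range, and paths are made up for illustration:

    http {
        # Allow each client IP roughly 10 requests per second (state kept in a 10 MB zone)
        limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;

        server {
            location /downloads/ {
                # Only a trusted network may reach this path
                allow 192.168.1.0/24;
                deny all;

                # Throttle each response to about 100 KB/s
                limit_rate 100k;
            }

            location /api/ {
                # Apply the request-rate limit, absorbing short bursts
                limit_req zone=per_ip burst=20;
            }
        }
    }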

About core features

Event-driven means that tasks are handled as events: an incoming connection is an event, a disk read is an event, and so on. The idea is not to spend server resources unless there is an event to handle. Modern operating systems can notify the web server when a task is initiated or completed, which in turn enables NGINX workers to use the appropriate resources in an organized way. Server resources can be allocated and released dynamically, on demand, resulting in optimized usage of network, memory, and CPU.

Client connections are processed in highly efficient run loops inside a limited number of single-threaded processes called workers. Each worker can handle thousands of concurrent connections and requests per second.

NGINX does not create a new process or thread for every connection. Instead, a worker process accepts new requests from a shared listen queue and executes a highly efficient run loop over them, processing thousands of connections per worker. The worker is notified of events by mechanisms in the operating system kernel. When NGINX starts, an initial set of listening sockets is created; workers then accept, read from, and write to those sockets while processing HTTP requests and responses.
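In configuration terms, the worker model is governed by a few top-level directives. A typical setup on a Linux host might look like this sketch (the numbers are illustrative, not recommendations):

    # Start one single-threaded worker per CPU core
    worker_processes auto;

    events {
        # Maximum concurrent connections each worker's run loop will juggle
        worker_connections 1024;

        # Rely on the kernel's epoll notification mechanism on Linux
        use epoll;
    }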

Asynchronous means the run loop doesn't get stuck on any particular event. It registers with the operating system to be alerted about particular events and keeps monitoring the event queue. When an alert is raised, the run loop triggers the appropriate actions. Those actions, in turn, use non-blocking interfaces to the operating system wherever possible, so the worker doesn't stall while handling a single event. This way, NGINX workers can use the available shared resources concurrently in the most efficient manner.
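Disk I/O is one place where this shows up in everyday configuration. Directives such as sendfile and aio let a worker hand file transfers to the kernel or a thread pool instead of blocking its run loop; this sketch assumes an NGINX build with thread-pool support:

    http {
        # Let the kernel copy file data straight to the socket
        sendfile on;
        tcp_nopush on;

        # Offload blocking file reads to a thread pool
        aio threads;
    }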

Single-threaded means that many user connections can be handled by a single worker process, which in turn helps avoid excessive context switching and leads to more efficient use of memory and CPU.

According to the project site, NGINX powers 40 percent of the Internet's 10,000 busiest sites and more than 20 percent of all websites, including Dropbox, GitHub, and Zappos.

The NGINX community edition has all the features and capabilities needed to build websites and services that require performance, scalability, and reliability. NGINX Plus takes this to the next level, turning the high-performance, trusted web server into an application delivery solution by adding enterprise-ready features such as load balancing, session persistence, health checks, monitoring, and advanced management. The company site includes a page that explains what's included in the different editions.
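Basic load balancing is already part of NGINX Open Source, while capabilities such as session persistence and active health checks belong to NGINX Plus. As a sketch, with placeholder backend hostnames, an open source load-balancing setup looks roughly like this:

    http {
        upstream app_backend {
            # Send each request to the server with the fewest active connections
            least_conn;
            server app1.internal:8080;
            server app2.internal:8080;
            # NGINX Plus adds session persistence (sticky) and active health checks
        }

        server {
            listen 80;
            location / {
                proxy_pass http://app_backend;
            }
        }
    }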

Want to learn more? Check out the NGINX blog to stay up-to-date on the project.

A version of this article originally appeared on the Ashnik blog.

Sandeep Khuperkar is the Director and CTO at Ashnik. Sandeep brings more than 20 years of industry experience, with 13+ years in open source and in building open source and Linux business models. He is also a visiting lecturer at a few engineering colleges, where he works to enable students on open source technologies, and he is a member of OSI and the Linux Foundation.


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.