Appendix C. Deploying web apps – Ruby in Practice

In chapter 8, we looked at tools to deploy Ruby (both web and off-the-web) applications. Here, we’ll look at specifics and the architecture for deploying web applications. There is certainly no shortage of options. So which deployment method is right for you?

We’ll start by briefly reviewing the available options, then narrow it down to one particular architecture that works best across the board, from local use during development to deploying for production on a server farm. We’ll show you how to deploy for this architecture using Apache 2 and Nginx as frontend web servers and Thin and Mongrel as backend application servers.

C.1. An overview of deployment options

The original model for deploying applications behind web servers was CGI. When using CGI, the web server starts a new process to handle each incoming request. That may be good enough for simple and oft-used scripts, but if you’re using a web framework or opening database connections, the cost of setting these up for each request will quickly bring your server to its knees. Since most web applications fall into the latter category, we’ll turn our attention to better-performing options.

To work around the limitations of CGI, modern web servers start the application once and keep that process alive, dispatching incoming requests as they come along. One approach that emerged early on has the web server and application running in separate processes, using protocols like FastCGI or SCGI to connect the two. A fair number of web servers support FastCGI, SCGI, or both, either natively or through add-ons, and on the Ruby side you’ll find both the ruby-fcgi and scgi gems. In spite of their availability, we do not recommend either option: FastCGI and SCGI have fallen behind, are harder to set up and administer, and offer lackluster performance compared to the alternatives.

The second approach involves a web server that can run Ruby code in the same process. Most web servers support this model through modules, plugins, or components, although with limited availability—the choice of language dictates the choice of web servers able to run the code, and few servers are able to run Ruby code in the same process. As we’ll see in a minute, this is not necessarily a problem.

Apache uses modules for running applications in the same process, the most popular being mod_php. You won’t hear much about mod_ruby because its development stagnated, in no small part due to its inability to deal with Rails applications. Java web servers use components that are packaged and deployed as WAR files. These must be written in Java and use the Servlet API, but fortunately they do support Rails applications, using JRuby to run Ruby code on the JVM and Warbler to package Rails applications as WAR files (as discussed in appendix B).

There are three web servers designed specifically for running Ruby applications. WEBrick is a pure-Ruby implementation bundled as part of the core Ruby library. When you’re running Rails in development mode, you’ll notice that it uses WEBrick by default. WEBrick’s best feature is being available everywhere Ruby is, but don’t expect much in terms of performance or scalability.

Mongrel is a lightweight web server that incorporates a native library—C code on most platforms, and Java code when running on JRuby—to handle the CPU-intensive portion of HTTP processing, offering simple setup and configuration with real-world performance. Thin is another lightweight web server that uses the Mongrel HTTP processing library in combination with Event Machine, a high-performance I/O network library, offering better throughput and scalability.

Mongrel and Thin are both viable options, and while Thin has the edge on performance and scalability, Mongrel has been around for longer and has better tooling support and more extensive documentation.

It was difficult for us to choose one server to cover. Fortunately, they’re similar enough in principles and basic usage, so we decided to base our examples on Thin, and to highlight the differences in sidebars.

Although Thin and Mongrel are excellent choices for handling the dynamic portion of an application, and they support a variety of frameworks (Rails, Merb, Camping, to name but a few), they do not offer the same capabilities you would expect from a full-fledged web server. Neither one is a web server we would expose directly to the internet. Rather, we’re going to delegate that task to a more capable frontend web server, and configure it to reverse proxy into our Thin/Mongrel application servers. We’ll explain the basics of this architecture next.

C.2. Reverse proxying

Proxy servers are commonly used as outbound gateways; they handle traffic emanating from clients on the local network, directed at servers on the internet. Reverse proxy servers act as gateways for inbound traffic, responding to requests coming from clients on the internet and dispatching them to applications deployed on the local network.

Reverse proxies are compelling for several reasons. To clients, they look like a single web server and provide a central location for handling encryption, access control, logging, virtual hosting, URL rewriting, static-content caching, throttling, load balancing, and everything else a web server is tasked with. That leaves the backend web servers to deal exclusively with the business logic, simplifying management and configuration when you have many different applications deployed throughout the network. The two communicate with each other using the HTTP protocol.

One obvious benefit of a reverse proxy architecture is the ease of scaling, from a single instance used during development all the way to a large-scale server farm. You can start small, deploying the frontend web server and a handful of application servers on the same machine. Since the intensive portions of the workload are handled by these application servers, scaling is a matter of adding more machines and distributing the application servers across them. A single frontend web server can handle massive traffic before it reaches its scalability limits. Beyond that, you can start looking at load-balancing proxy servers like Varnish, Pound, or PenBalance, or even go down the route of dedicated hardware appliances.

Another benefit is the variety of options available and the ability to mix and match them to create a best-of-breed configuration. Standardizing on the HTTP protocol allows you to pick from any number of frontend web servers and load balancers, and just as easily mix in different backend web applications, from the simplest ones all the way to mainframes. Because we’re exercising management and control through the frontend web server, we don’t have to standardize on a single provider for our web applications, and can easily mix languages and platforms, running Ruby side by side with PHP, Python, J2EE, and .Net.

The last benefit is application isolation. Since all backend applications run independent of each other—the shared-nothing architecture—we can add new applications without impacting existing ones. For example, we can roll out a new Rails 2.0 application and run it alongside an older Rails 1.1 application, without having to migrate the older but fully functional code, just because we’re using the newer framework for future development.

The ability to scale with ease, mix and match best-of-breed solutions, and roll out new applications alongside legacy ones makes reverse proxy our favorite deployment model. So let’s look at actual deployment. We’ll start by showing you how to set up Thin and Mongrel for the application servers, and then proceed to cover Apache and Nginx for the frontend web servers.

C.3. Setting up Thin

True to its name, Thin is a light web server that’s incredibly easy to set up. We’re going to start from an existing Rails application and work our way to having an operating system service that manages multiple application instances.
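Before wiring Thin to Rails, it helps to see how little a served application has to be. Thin speaks the Rack calling convention, so even a bare lambda qualifies; here we invoke it directly rather than booting Thin (the /demo path is made up for illustration):

```ruby
# The smallest Rack-compatible application: takes an env hash and
# returns [status, headers, body]. Thin can serve something like this,
# and serves Rails through its Rack adapter the same way.
app = lambda do |env|
  [200, { "Content-Type" => "text/plain" }, ["Hello from #{env["PATH_INFO"]}"]]
end

status, _headers, body = app.call("PATH_INFO" => "/demo")
puts body.join  # => Hello from /demo
```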

If you don’t already have Thin, now is the time to install it by running gem install thin. We’ll be conservative and start out by testing a single instance of the application. From the root directory of your Rails application, issue the following command:

$ thin start -e production --stats /stats

After you start Thin, you should see some output about a server starting up and serving on port 3000. Thin is quick enough that you’ll want to use it in development instead of WEBrick, and it knows that, so it defaults to running in development mode.

Performance benefits of a frontend web server

A well-tuned frontend web server does more than just shield your web applications from the internet and provide a central point for management and control. Here are five ways in which it can speed up your web applications:

  • It can serve static content directly, freeing your application to deal with dynamic content.
  • It can compress responses before sending them to the client, cutting down bandwidth and improving response time.
  • It can buffer responses on behalf of slow clients. Slow clients keep the application busy, waiting to transmit the full response; buffering frees the application to cater to the next incoming request.
  • Acting as a proxy server, it adds another layer of caching between client and server. Make sure to mark responses that are publicly cacheable by using the right Cache-Control directive.
  • By keeping connections open without tying up processing threads, it can take advantage of HTTP keep-alives and scale to a larger number of concurrent connections.
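For the caching point above, marking a response publicly cacheable is a one-header change. Here’s a minimal Rack-style sketch, using a plain lambda so nothing needs to be installed:

```ruby
# A Rack-style endpoint that marks its response publicly cacheable,
# allowing the frontend proxy (and any shared cache) to store it.
app = lambda do |env|
  headers = {
    "Content-Type"  => "text/html",
    # Shared caches may keep this response for up to an hour.
    "Cache-Control" => "public, max-age=3600"
  }
  [200, headers, ["<h1>Hello</h1>"]]
end

status, headers, body = app.call("REQUEST_METHOD" => "GET")
puts headers["Cache-Control"]  # => public, max-age=3600
```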

Here we’re dealing with deployment, so we need to force Thin to run in production mode just to make sure we got the configuration right. It’s easier to check for issues now than later on, when we run Thin as a background service.

Next, point your web browser to http://localhost:3000. We added the --stats option, so you can also navigate to http://localhost:3000/stats and investigate HTTP requests and responses. A common issue with reverse proxy configuration is forgetting to forward an essential request header, and the --stats option helps you spot these problems.

Once you have verified that everything works as expected, it’s time to create a configuration file. Now let’s generate a simple configuration that will run three instances of Thin that listen on ports 8000 through 8002:

$ thin config -s 3 -a -p 8000 -e production -C myapp.yml

In this configuration, the frontend web server and Thin all run on the same machine, and since we don’t want to expose Thin to clients directly, we told it to only accept requests on the loopback address ( If you’re running Thin on a separate machine from the frontend web server, use an IP address the frontend web server can reach directly. The default ( will work if you don’t care which network interface receives incoming requests.

We now have a configuration file called myapp.yml. Let’s see what it looks like:

pid: tmp/pids/
log: log/thin.log
timeout: 30
port: 8000
max_conns: 1024
chdir: /var/www/myapp
environment: production
max_persistent_conns: 512
daemonize: true
servers: 3

If we wanted to add more servers, we could change the port, or IP assignment, or any other configuration option; we could run the command with different options; or we could edit this YAML file in a text editor.

Later, we’re going to run Thin as a service. We’re going to have one Thin service per machine, and that service may run any number of applications, so each configuration file must point to the root directory of the application it runs. If you move the application to a different location, make sure to change the value of the chdir configuration property.

Next, let’s start Thin using this configuration:

$ thin start -C myapp.yml

To check that it’s running, point your browser to each of the ports, http://localhost:8000 through 8002, or use the lsof command:

$ lsof -i tcp@ -P
ruby 3321 assaf 3u IPv4 0x4e74270 0t0 TCP localhost:8000 (LISTEN)
ruby 3324 assaf 3u IPv4 0xaf1e66c 0t0 TCP localhost:8001 (LISTEN)
ruby 3327 assaf 3u IPv4 0xb0a0e64 0t0 TCP localhost:8002 (LISTEN)

How you get Thin to run as a service depends on the operating system you’re using. On Linux, you can do that with a single command:

$ sudo thin install

This command creates a new directory for storing the configuration files (/etc/thin) and adds a script in /etc/init.d to control the Thin service. When you start the Thin service, it enumerates all the configurations it finds in the /etc/thin directory and starts a server based on each. If you have more than one application, make sure they all use different port ranges. If you run the same configuration on multiple machines, you could keep the configuration file in the application and link to it from the /etc/thin directory, like this:

$ sudo ln -s /var/www/myapp/myapp.yml /etc/thin/myapp.yml

Finally, we’re going to set up the service to start and stop automatically, and then get it started. On Red Hat/CentOS, use these commands:

$ sudo /sbin/chkconfig --level 345 thin on
$ thin start --all /etc/thin

On Debian/Ubuntu, use these commands:

$ sudo /usr/sbin/update-rc.d thin defaults
$ thin start --all /etc/thin

Now that we’ve got Thin up and running, let’s configure the frontend web server using either Apache or Nginx.

Setting up Mongrel

Mongrel and Thin are similar enough that we can follow the same workflow and just highlight where they differ. To run a cluster (several instances) of Mongrel, you’ll want to install the mongrel and mongrel_cluster gems with this command:

$ gem install mongrel mongrel_cluster

We’ll first start a single instance in production mode:

$ mongrel_rails start -e production

To create a configuration that runs three instances on ports 8000 through 8002, use this command:

$ mongrel_rails cluster::configure -e production \
-N 3 -p 8000 -a -c $PWD

You need to point Mongrel to the root of your Rails application explicitly, which we did here using the -c $PWD argument. This command will create a new configuration in config/mongrel_cluster.yml.

When run as a service, Mongrel picks up all the configuration files it finds in the /etc/mongrel_cluster directory, so either move your configuration file there, or create a symbolic link, like this:

$ sudo mkdir /etc/mongrel_cluster
$ sudo ln -s $PWD/config/mongrel_cluster.yml /etc/mongrel_cluster/myapp.yml

To start and stop the service from the command line, use these commands:

$ mongrel_cluster_ctl start
$ mongrel_cluster_ctl stop

To deploy Mongrel as a service on Linux, locate the resources/mongrel_cluster file in the Mongrel gem directory (gem contents mongrel_cluster will list it), and copy it to the /etc/init.d directory. Make it executable with chmod +x, and configure it to start and stop at the appropriate run levels. On Red Hat/CentOS, use this command:

/sbin/chkconfig --level 345 mongrel_cluster on

On Debian/Ubuntu, use this command:

sudo /usr/sbin/update-rc.d mongrel_cluster defaults

On Windows, start with this command:

gem install mongrel_service

Then install a service by running the following command, which takes the same command-line arguments or configuration file:

mongrel_rails service::install

C.4. Setting up Apache load balancing

With Thin up and running, we can turn our attention to the frontend web server. To set up Apache for reverse proxy and load balancing, you need Apache 2.1 or later, loaded with mod_proxy, mod_headers, and mod_proxy_balancer. These modules are included and activated in the default Apache configuration.

We’re going to create a new configuration file for our application, myapp.conf, and place it in the Apache configuration directory. We’ll start by defining the proxy balancer to point at the three Thin instances:

<Proxy balancer://myapp>
  BalancerMember
  BalancerMember
  BalancerMember
</Proxy>

Put this at the head of the configuration file. If you add more instances (ports), move Thin to a different server (IP address), or if you need to fine-tune how the workload is distributed (assigning a different load factor to each instance), this will be the place to make these changes.

Next, and still in the same configuration file, we’re going to create a virtual host that handles incoming requests using the proxy load-balancer:

<VirtualHost *:80>
  DocumentRoot /var/www/myapp/public

  # Redirect all non-static requests to application,
  # serve all static content directly.
  RewriteEngine On
  RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-f
  RewriteRule ^/(.*)$ balancer://myapp%{REQUEST_URI} [P,QSA,L]
</VirtualHost>

Our application serves dynamic content, but also includes static content like images, CSS stylesheets, JavaScript, and so forth. Apache can handle these efficiently and without burdening the backend server, so we tell Apache to serve any file it finds in the document root directory, and to dispatch all other requests to the backend server. Rails puts all the static content under the public directory, so point DocumentRoot there. Never point it to the root directory of the Rails application unless you want people to download your application’s source code and configuration files.
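The rewrite rule boils down to a simple decision per request: if the path names a real file under public, serve it; otherwise proxy to the backend. Sketched in Ruby (the helper name and file names are made up for illustration):

```ruby
require "tmpdir"
require "fileutils"

# Decide whether a request path maps to a static file under the
# document root, or should be proxied to the backend cluster.
def route(doc_root, path)
  file = File.join(doc_root, path)
  File.file?(file) ? [:static, file] : [:proxy, "balancer://myapp#{path}"]
end

Dir.mktmpdir do |root|
  FileUtils.mkdir_p(File.join(root, "images"))
  File.write(File.join(root, "images/logo.png"), "png bytes")

  puts route(root, "/images/logo.png").first  # => static
  puts route(root, "/orders/recent").first    # => proxy
end
```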

The reverse proxy forwards requests to the backend application using HTTP, so when configuring a virtual host to handle HTTPS requests, the proxy must inform Rails that the original request came over HTTPS by setting the forwarded protocol name. You can do that by adding this line to the virtual host configuration:

RequestHeader set X_FORWARDED_PROTO 'https'
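On the application side, the forwarded protocol header is what a Rack-based framework consults when deciding whether a request arrived over HTTPS. A simplified sketch of that check (env keys follow Rack’s CGI-style convention; the real framework logic has more cases):

```ruby
# Simplified version of the "is this request HTTPS?" check a
# Rack-based framework performs when running behind a reverse proxy.
def ssl?(env)
  env["HTTPS"] == "on" ||
    env["HTTP_X_FORWARDED_PROTO"].to_s.split(",").first == "https"
end

puts ssl?("HTTP_X_FORWARDED_PROTO" => "https")  # => true
puts ssl?({})                                   # => false
```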

Now we’re ready to restart Apache (sudo apachectl -k restart), open the browser to http://localhost, and watch the application in action.

Before we finish off, here’s another trick to add to your arsenal. Apache includes a Load Balancer Manager that you can use to monitor the state of your workers and change various load-balancing settings without restarting Apache. The following configuration will enable local access to the Load Balancer Manager on port 8088:

Listen 8088
<VirtualHost *:8088>
  <Location />
    SetHandler balancer-manager
    Order deny,allow
    Deny from all
    Allow from localhost
  </Location>
</VirtualHost>

Setting up Nginx

Nginx is an up-and-coming web server. It isn’t as popular as Apache and unfortunately isn’t as well documented, but it offers better performance and scalability. At its core, Nginx has a smaller memory and CPU footprint and uses nonblocking I/O libraries, so it can scale to larger workloads and maintain a higher number of concurrent connections. We recommend giving Nginx a try, and we’ll show you how to configure it as we did for Apache.

We’re going to create a new configuration file for our application, myapp.conf, and place it in the Nginx configuration directory. Some setups of Nginx will include all the configuration files placed in a certain directory (typically /etc/nginx/enabled), but if not, make sure to include it from within the http section of the main configuration file.

We’ll start by defining the upstream web server that points to our three Thin instances:

upstream myapp {
  server;
  server;
  server;
}

Next, and still in the same configuration file, we’re going to define a virtual host that handles incoming requests by dispatching them to the upstream server:

server {
  listen 80;
  gzip on;
  proxy_buffering on;

  location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;

    root /var/www/myapp/public;
    if (!-f $request_filename) {
      proxy_pass http://myapp;
    }
  }
}

When dealing with HTTPS requests, add the following line to the list of forwarded headers:

proxy_set_header X-Forwarded-Proto 'https';

If you’re running Nginx and Thin on the same machine, you can take advantage of Unix sockets by configuring the upstream server to use them:

upstream myapp {
  server unix:/tmp/thin.0.sock;
  server unix:/tmp/thin.1.sock;
  server unix:/tmp/thin.2.sock;
}

You will also need to configure Thin to use sockets instead of IP ports:

$ sudo thin config -s 3 -S /tmp/thin.sock -e production -C /etc/

C.5. Summary

In this appendix, we covered options for deploying Ruby-based web applications, focusing specifically on the reverse-proxy architecture. The benefits of this architecture are the ability to scale from a single-server deployment to a cluster of application servers, and the ease of mixing different web applications and languages. We showed you how to set up a cluster using two different lightweight Ruby servers, Thin and Mongrel. We also showed you how to set up a sturdy frontend server using either the popular Apache or the blazing fast Nginx.