uWSGI is one of those interesting projects that keeps adding features with every new release without becoming totally bloated, slow, and/or unstable. In this post, we'll look at some of its lesser used features and how you might use them to simplify your Python web service.
Let's start by looking at a common Python web project's deployment stack.
- Nginx: Static file serving, SSL termination, reverse proxy
- Memcached: Caching
- Celery: Background task runner
- Redis or RabbitMQ: Queue for Celery
- uWSGI: Python WSGI server
Five services. That's a lot of machinery to run for a basic site. Let's see how uWSGI can help you simplify things:
Static File Serving
uWSGI can serve static files quite efficiently. It can even do so without tying up the same worker/thread pool your application uses, thanks to its offloading subsystem. There are a bunch of configuration options around static files, but the common ones we use are:
- `offload-threads`: the number of threads to dedicate to serving static files
- `check-static`: works like Nginx's `try_files` directive, checking for the existence of a static file before hitting the Python application
- `static-map`: does the same, but only when a URL pattern is matched
Other options exist to allow you to control gzipping and expires headers among other things. An ini configuration for basic static file serving might look like this:
```ini
offload-threads = 4
static-map = /static=/var/www/project/static
static-map = /media=/var/www/project/media
static-expires = /var/www/project/static/* 2592000
```
More information on static file handling is available on a topic page in the uWSGI docs. When placed behind a CDN, this setup is sufficient for even high-traffic sites.
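As one example of the gzip handling mentioned above, uWSGI can serve a pre-compressed variant of a file when one exists on disk. A minimal sketch, assuming you generate the `.gz` files yourself at deploy time:

```ini
; if the client accepts gzip and /static/app.css.gz exists,
; serve it in place of /static/app.css
static-gzip-all = true
```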
SSL Termination
uWSGI can handle SSL connections and even the SPDY protocol. Here's an example configuration which will use HTTPS and optionally SPDY, as well as redirect HTTP requests to HTTPS:
```ini
https2 = addr=0.0.0.0:8443,cert=domain.crt,key=domain.key,spdy=1
http-to-https = 0.0.0.0:8000
```
HTTP Routing
uWSGI speaks HTTP and can efficiently route requests across multiple workers. Here's an example that will start an HTTP listener on port 80:
```ini
master = true
http = 80
# http://uwsgi-docs.readthedocs.org/en/latest/articles/SerializingAccept.html
thunder-lock = true
uid = www-data
gid = www-data
```
In this scenario, you'll need to start `uwsgi` as the root user to access port 80, but it will drop privileges to an unprivileged account via the `uid` and `gid` options.
You can also do routes and redirects (see the docs for more complex examples):
```ini
route = ^/favicon\.ico$ permanent-redirect:/static/favicon.ico
```
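Regex capture groups are available to the routing action as `$1`, `$2`, and so on, which makes pattern-based redirects easy. A sketch with hypothetical paths:

```ini
; permanently redirect a retired URL prefix (illustrative paths)
route = ^/old-blog/(.*)$ permanent-redirect:/blog/$1
```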
Note: It is unclear to me whether uWSGI's HTTP server is vulnerable to DoS attacks such as Slowloris. Please leave a comment if you have any more information here.
Caching
Did you know uWSGI includes a fast in-memory caching framework? The configuration for it looks like this:
```ini
cache2 = name=default,items=5000,purge_lru=1,store=/tmp/uwsgi_cache
```
This will configure a cache named `default`, capable of holding up to 5000 items and purging the least recently used keys in the event of an overflow. The cache is periodically flushed to disk asynchronously (`/tmp/uwsgi_cache`), so the uWSGI process can be restarted without also dropping the entire cache.
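From application code, you talk to this cache through the `uwsgi` module's cache API. A minimal sketch, with our own `cache_get`/`cache_set` helper names: since the `uwsgi` module only exists inside a running uWSGI process, this falls back to a plain dict so the code also works in local development. The cache name `"default"` matches the `cache2` configuration above.

```python
try:
    import uwsgi  # only importable when running under uWSGI

    def cache_get(key):
        # uwsgi.cache_get returns bytes, or None on a miss
        value = uwsgi.cache_get(key, "default")
        return value.decode() if value is not None else None

    def cache_set(key, value, expires=300):
        # cache_update sets or overwrites a key with an expiry in seconds
        uwsgi.cache_update(key, value.encode(), expires, "default")

except ImportError:
    # Plain-dict fallback for local development and tests
    _local_cache = {}

    def cache_get(key):
        return _local_cache.get(key)

    def cache_set(key, value, expires=300):
        _local_cache[key] = value
```
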
Task Queue
Yes, that's right, uWSGI includes a task queue too. The uWSGI spooler can not only queue tasks for immediate execution, but also provide cron-like functionality to schedule tasks to run at some point in the future. It is configured simply by providing a directory to store the queue and the number of workers to run:
```ini
spooler = /tmp/uwsgi_spooler
spooler-processes = 4
```
The `uwsgi` Python package provides a `uwsgidecorators` module that can be used to place jobs on the queue for execution. A simple example:
```python
from uwsgidecorators import cron, spool

@cron(0, 0, -1, -1, -1)
def cronjob_task():
    # This will run every day at midnight
    ...

@spool
def queued_task(**kwargs):
    # Something slow you want to queue
    ...

# Put a task into the queue
queued_task.spool(foo=1, bar='test')
```
As you can see, uWSGI really is a Swiss Army knife for serving Python web services. Actually, it's not even limited to Python: you can use it for Ruby and Perl sites as well. We've used many of these features on production sites with great success. While specialized services are certainly going to be more robust for high-volume workloads, they are simply overkill for the majority of sites.
Distributed microservice architectures may be all the rage, but the reality is that most sites can run on a single server. Reducing the number of services and dependencies makes deployment easier and removes points of failure in your system. Before you jump to add more tools to your stack, it's worth checking if you can make do with what you already have.