Over the past year, I’ve been busy at work and haven’t had much time to focus on personal web projects. During that time, performance on my VPS gradually eroded as I added a few domains and generally ignored system administration. Given that none of them get much traffic, or pay the bills for that matter, I was okay with dismissing the upkeep. Over the past couple of months, though, it became unbearable. I ended up rebuilding the server from scratch, and the result has been more than satisfying: response times improved by close to an order of magnitude. Here’s how I did it (and so can you!).
First off, I kept the software running on the server to a minimum. This may sound obvious, but during the first build I’d overestimated the headroom I had in terms of CPU and I/O. For example, why not run a mail server? Of course, on Linux a mail server is no simple matter – you’ve got Sendmail, Dovecot, SpamAssassin, and MySQL or BerkeleyDB. All these processes add up to a giant leech on system resources; just don’t do it. Point an MX record at a third-party mail service and save yourself the trouble. Only install what you need. Again, it may sound obvious, but you will be tempted at some point to disobey this best practice.
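For what it’s worth, the delegation itself is a one-liner per mail host. Here’s a sketch of BIND-style zone entries – the domain and mail hosts are placeholders for whatever your provider tells you to use:

; lower preference number = tried first
example.com.    IN  MX  10  mx1.mailprovider.example.
example.com.    IN  MX  20  mx2.mailprovider.example.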
Secondly, if you’re using Apache, be smart about it. I had been using mod_wsgi in embedded mode, which is fine if you’re dealing with a fair amount of RAM. However, as the number of virtual hosts grew, performance decreased – and not in a manner consistent with traffic. The rebuild uses a single WSGIDaemonProcess/WSGIProcessGroup, which reduced Apache’s memory footprint quite a bit. On top of that, I’ve set up an nginx front end that proxies to Apache/mod_wsgi and serves static content itself, so nine out of ten requests never touch the Apache machinery at all. Proxying https through nginx seems to have helped as well, and is dead simple to configure. Remember that when running an nginx proxy, you don’t need to gzip/expire/etc. twice – you can shut those features off in Apache and let nginx handle them.
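To make that concrete, here’s roughly what the two halves look like. First the Apache side: a sketch of a VirtualHost in daemon mode, where the process group name, port, and paths are placeholders and processes/threads should be tuned to your RAM:

<VirtualHost *:8080>
    ServerName example.com
    # One shared daemon process group instead of an embedded
    # interpreter in every Apache worker.
    WSGIDaemonProcess sites processes=2 threads=15
    WSGIProcessGroup sites
    WSGIScriptAlias / /path/to/django.wsgi
</VirtualHost>

And a sketch of the nginx front end that owns port 80, serves static files itself, and proxies everything else back to Apache (again, hostnames and paths are stand-ins):

server {
    listen 80;
    server_name example.com;

    # nginx handles compression and expiry; turn both off in Apache.
    gzip on;

    # Static content never touches Apache.
    location /static/ {
        root /path/to/site;
        expires 30d;
    }

    # Everything else goes to Apache/mod_wsgi on the local port.
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}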
I initially built my VPS as 64-bit. That way I can scale up to a larger VPS easily, I thought. Stupid. Unless you have more than 4 GB of memory or an application that truly benefits from 64-bit, just use 32-bit and bank yourself some extra headroom. Let’s face it: if you ever need to scale up to more memory, that’s a happy problem, and one that will only take a day to solve.
Beware of cron jobs, especially ones that balloon in CPU/memory and stick around for a few minutes at a time, or that run frequently. I set each and every cron job to use ionice, which sets the process’s I/O scheduling priority. Prefixing the command with “ionice -c 3” runs it in the idle class, so it only gets disk time when nothing else wants it. This may not be desirable in all cases, but it should be the default unless a higher priority is what you really need.
5 */2 * * * ionice -c 3 /usr/bin/python /path/to/cron.py
On the Django side, I’m not going to touch on caching – my old build was caching fine, so it wasn’t an issue. If you’re not caching (cacheable) pages, you’ll want to look at that. What I did do was audit my indexed fields. When I initially built out the server, some of my databases were much smaller than they are now. With a small database, it’s easy to overlook indexing fields. Basically, you want to look at all your queries and determine which fields you’re looking results up by. The big offenders tend to be DateTimeFields and SlugFields. In the field definition, just pop in a “db_index=True” and rebuild the table, then load the fixture back in. If the table is too big to work with as a fixture, simply manage.py sqlreset the app, copy out the index statements, and paste them in via the dbshell.
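As a sketch, the change is a one-word affair in models.py – the Entry model and field names below are made up, but the pattern is the same for any field you filter or order on:

from django.db import models

class Entry(models.Model):
    title = models.CharField(max_length=200)
    # Slugs and dates are the usual lookup targets (URL routing,
    # archive views), so they benefit the most from an index.
    slug = models.SlugField(db_index=True)
    published = models.DateTimeField(db_index=True)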
Just to give you an idea of what these changes applied to, my VPS was running 8 small sites (+6 https VirtualHost clones).