The Blimp Tech Stack: Backend

By José Padilla 16 Jan 2013

USS Shenandoah (ZR-1)

When we started working on Blimp back in August 2011, it was just a Flask application using MongoDB as the main database. Back then we thought that building a RESTful API and a client-side JavaScript app would be way easier than building a traditional web app. We spent a few weeks hacking away at MongoDB documents and API endpoints for each one. Then we had to build authentication, input validation, and a ton of other things that made us reinvent the wheel over and over. We had too many relations between objects, which we had to maintain and enforce ourselves instead of letting an ORM do the work; we also had to build an authentication and authorization system, plus validate and sanitize data by hand.

After giving it some thought, we knew we were wasting time going down that road, so we quickly turned around. We knew that everything we were doing was probably already built into Django, which we had used many times before and which we knew was battle-tested by companies we look up to, like Disqus and Instagram. With this move we had the option of keeping MongoDB as our main database, but decided to go with PostgreSQL and take full advantage of the Django ORM.

Our system is constantly evolving into something we are all very proud of and this is a glimpse of it.

**Application**

The core application is built on Django and runs on Heroku. We chose Heroku for various reasons, but mainly because of how incredibly easy it is to scale resources to handle and accommodate more requests and load. Heroku also lets us easily use Gunicorn as our WSGI server. We use Fabric to run all of our deployments and other sysadmin tasks. I recently released a Django project template with a collection of settings and Fabric tasks especially useful for projects hosted on Heroku.
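To give an idea of what that looks like, a deploy with Fabric boils down to a few local commands. This is just an illustrative sketch, not our actual fabfile; the Heroku remote name and the management commands it runs are assumptions.

```python
# fabfile.py -- an illustrative deployment task, assuming a git remote
# named "heroku" and a standard Django management setup.
from fabric.api import local, task


@task
def deploy():
    """Run the tests, push to Heroku, and sync the database."""
    local("python manage.py test")
    local("git push heroku master")
    local("heroku run python manage.py syncdb")
```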

**Data Storage**

Since we are already on Heroku, we use Heroku Postgres, their database-as-a-service offering, which is used and managed like any other Heroku add-on, making it easy to scale when it's time. Our core application's main database lives entirely on Postgres.
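On Heroku the database connection is exposed through the DATABASE_URL environment variable. A minimal settings sketch, assuming the dj-database-url helper, looks roughly like this (the local fallback URL is made up):

```python
# settings.py -- a sketch using dj-database-url to read Heroku's
# DATABASE_URL environment variable; the fallback URL is a placeholder.
import dj_database_url

DATABASES = {
    "default": dj_database_url.config(default="postgres://localhost/blimp"),
}
```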

We recently provisioned an Amazon EC2 instance with Memcached and Redis. Memcached is our main cache backend and Redis is used as the message broker for Celery and to cache application-wide stats. We managed to automate the provisioning and setup of new cache and worker servers on Amazon EC2, which allows us to spawn new instances ready for production in a matter of minutes.
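Wiring that instance into Django and Celery is mostly configuration. A rough sketch, with a placeholder hostname standing in for our EC2 box:

```python
# settings.py -- illustrative cache and Celery broker settings; the
# hostname is a placeholder for the EC2 instance described above.
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.MemcachedCache",
        "LOCATION": "cache.internal.example.com:11211",
    }
}

# Redis as the Celery message broker (Celery 3.x style setting).
BROKER_URL = "redis://cache.internal.example.com:6379/0"
```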

Static files and user uploads are hosted on Amazon S3. We also use Amazon Route 53 for our DNS.
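Pointing Django at S3 is a couple of settings away. A sketch assuming the django-storages package with its boto backend; the bucket name and key handling are made up for the example:

```python
# settings.py -- a sketch assuming django-storages' S3/boto backend.
import os

DEFAULT_FILE_STORAGE = "storages.backends.s3boto.S3BotoStorage"
STATICFILES_STORAGE = "storages.backends.s3boto.S3BotoStorage"
AWS_STORAGE_BUCKET_NAME = "blimp-assets"  # hypothetical bucket name
AWS_ACCESS_KEY_ID = os.environ.get("AWS_ACCESS_KEY_ID")
AWS_SECRET_ACCESS_KEY = os.environ.get("AWS_SECRET_ACCESS_KEY")
```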

**Task Queue**

For most user actions we generate async events using Celery. Some events trigger emails, some trigger internal notifications, and others trigger event logging. Every time an event is created we push a task onto a queue, which Celery consumes. All expensive jobs are executed asynchronously in the background.
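In practice that means plain functions decorated as Celery tasks and queued with .delay() from the request cycle instead of being run inline. The task name and e-mail contents below are hypothetical, just to show the shape of it:

```python
# tasks.py -- an illustrative Celery task; the task name and e-mail
# contents are made up, not our actual code.
from celery import task
from django.core.mail import send_mail


@task
def send_event_email(recipient, subject, body):
    """Deliver a notification e-mail in the background."""
    send_mail(subject, body, "notifications@example.com", [recipient])


# From a view or signal handler, the work is queued instead of run inline:
# send_event_email.delay(user.email, "New comment", "Someone commented...")
```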

**Monitoring and Error Logging**

One of the cool things about hosting on Heroku is the long list of awesome add-ons that easily integrate with your app, one of them being New Relic. The add-on provides seamless integration with the Heroku platform, giving us the capability of monitoring and fine-tuning our application's performance. New Relic gives us 24/7 snapshots of the app's health and availability, back-end and front-end performance, web transactions, database calls, and more.

For error logging we have our own Sentry server instance. Sentry is an open-source Django app written by the Disqus guys. It is an event logging system that works great for application error reporting, aggregating events together with data such as web requests, exceptions, and full tracebacks. The system and all of its components are open source on GitHub, allowing you to host it yourself. If you don't feel like maintaining your own hosted version, you can sign up for a subscription on getsentry.com or use the Sentry Heroku add-on. With Sentry we are able to log all errors and exceptions on the backend as well as JavaScript errors on the client side.
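Hooking Django up to Sentry goes through the Raven client. A minimal sketch, assuming the raven package's Django integration and a DSN kept in the environment:

```python
# settings.py -- a sketch of the Raven (Sentry client) Django integration;
# the DSN is read from the environment rather than hard-coded.
import os

INSTALLED_APPS += ("raven.contrib.django.raven_compat",)

RAVEN_CONFIG = {
    "dsn": os.environ.get("SENTRY_DSN"),
}
```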

For application-wide stats, like how many users we have and how many projects each company has on average, we use Hasselblad, an app we made in-house to monitor KPIs. We might open source it in the future. Right now it's very rough, but it works great.

**Transactional Emails**

Handling transactional emails has always been a pain. We use Postmark, another great product from Wildbit, on all of our projects because it simply removes that pain. We can monitor our volume, bounces, spam complaints, and send activity, get insights into whether our messages are being received, and have peace of mind. Wildbit has great products, an awesome team, and fantastic support!
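There is more than one way to plug Postmark into Django; one of the simplest is pointing Django's built-in SMTP backend at Postmark's SMTP endpoint, roughly as in the sketch below. The environment variable name is an assumption.

```python
# settings.py -- a minimal sketch sending through Postmark over SMTP with
# Django's built-in backend; the token env var name is an assumption.
import os

EMAIL_BACKEND = "django.core.mail.backends.smtp.EmailBackend"
EMAIL_HOST = "smtp.postmarkapp.com"
EMAIL_PORT = 587
EMAIL_USE_TLS = True
# Postmark uses the server API token as both username and password.
EMAIL_HOST_USER = os.environ.get("POSTMARK_SERVER_TOKEN")
EMAIL_HOST_PASSWORD = os.environ.get("POSTMARK_SERVER_TOKEN")
```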

**Billing and Payments**

We are using Stripe. We've been talking with the guys over at Stripe since they were in beta, and they have given us a lot of help. Online payment solutions are a pain to get started with: you need a business bank account, a merchant account, payment gateways, fees, etc. Stripe removes almost all of that pain, and we are very happy.
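The basic flow is to tokenize the card in the browser with Stripe.js and then create a customer on the server. A rough sketch with the Stripe Python library of that era; the plan id, function name, and key handling are made up for the example:

```python
# billing.py -- an illustrative subscription signup with the Stripe Python
# library; the plan id and environment variable name are hypothetical.
import os
import stripe

stripe.api_key = os.environ.get("STRIPE_SECRET_KEY")


def subscribe(card_token, email):
    """Create a Stripe customer subscribed to a monthly plan."""
    return stripe.Customer.create(
        card=card_token,       # token created client-side with Stripe.js
        email=email,
        plan="blimp-monthly",  # hypothetical plan id
    )
```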

That’s basically it. We plan to write more posts like this one. We are working on one about the client-side tech stack, which should be published pretty soon. Feel free to ask further questions about our tech stack in the comments below.

