From FutureAdLabs to Adludio: how things evolved (devops)
Adludio (formerly known as Future Ad Labs) is the world's leading sensory advertising platform: mobile advertising that uses touch and haptic sensory feedback technologies to create ads proven to be 10x more memorable.
What we do at Adludio is build HTML5 rich-media creatives and deliver them to our publisher network using our own ad-server solution, and programmatically through other ad networks.
The production setup when I first joined Adludio last year was composed of three main services: the ad server, the ad-unit generator and the validation service. Getting everything set up on my machine was relatively simple, just a couple of git clones after installing all the necessary software: node, mysql, redis, git and my favourite editor!
This served us very well for a long time, but as time went on we started having trouble scaling the system to meet our growing traffic. We decided to build an entirely new system from the ground up using Node.js. We call it Playwat: "play" because every ad unit we make is a small game, and "wat" because there is no limit to the ad-unit placement formats (playcaptcha, playinview, playroll, playunlock). Node is a technology that seems almost tailor-made for Adludio: it allows us to handle a huge volume of requests very quickly and to distribute the load across many machines efficiently.
So we started restructuring everything, and the first thing we did was split the three services into smaller microservices. (We also stopped generating the ad units ahead of serving: instead we deploy them to S3 and only load them on the user side.)
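To illustrate that last point, here is a minimal sketch of pushing a built ad-unit bundle to S3 so the browser can load it directly. It assumes the aws-sdk Node module; the bucket name, key layout and file paths are made up for illustration and are not our actual setup.

```js
// Sketch only: upload a built ad-unit bundle to S3 so it can be loaded
// on the user side. Bucket, key layout and paths are hypothetical.
const AWS = require('aws-sdk');
const fs = require('fs');

const s3 = new AWS.S3({ region: 'eu-west-1' });

function deployAdunit(name, version) {
  const body = fs.createReadStream(`./build/${name}/index.html`);
  return s3.upload({
    Bucket: 'adludio-adunits',            // hypothetical bucket
    Key: `${name}/${version}/index.html`,
    Body: body,
    ContentType: 'text/html',
    CacheControl: 'max-age=300'
  }).promise();
}

deployAdunit('playcaptcha-demo', 'v1')
  .then(res => console.log('deployed to', res.Location))
  .catch(err => console.error('deploy failed', err));
```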
Microservices are great! :) Everything is decoupled and deployed in its own environment, and it is easy for anyone to understand a service and get their hands dirty with the code in a short time, rather than having to understand and run one big complicated service.
Even if things are easier on one side, it gets really complicated to maintain many different environments: each service runs under a different configuration, uses different software versions, and so on, and they all need to talk to each other, so even the networking gets more elaborate.
Our production environment is hosted on Amazon AWS.
We created our own VPC (Virtual Private Cloud) configured with three private subnets, one per availability zone, and three public subnets. Every microservice is deployed onto private instances only accessible through our NAT, and each service receives traffic from an ELB (Elastic Load Balancer); if the service is meant to be public the ELB is deployed in a public subnet, otherwise it goes in a private subnet to keep the ELB private. The routing table defined in the VPC allows any traffic inside the VPC, so each instance can talk to the others.
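To give an idea of the layout, here is a minimal sketch (CIDR blocks, region and names are placeholders, assuming the aws-sdk Node module) of creating the VPC with one private and one public subnet per availability zone, and of picking the ELB scheme depending on whether the service is public. It omits the NAT, internet gateway and routing-table configuration.

```js
const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'eu-west-1' });
const elb = new AWS.ELB({ region: 'eu-west-1' });

const zones = ['eu-west-1a', 'eu-west-1b', 'eu-west-1c'];

async function createNetwork() {
  // One VPC, then a private and a public subnet in each availability zone.
  const { Vpc } = await ec2.createVpc({ CidrBlock: '10.0.0.0/16' }).promise();

  const subnets = { private: [], public: [] };
  for (let i = 0; i < zones.length; i++) {
    const priv = await ec2.createSubnet({
      VpcId: Vpc.VpcId,
      AvailabilityZone: zones[i],
      CidrBlock: `10.0.${i}.0/24`          // private subnet for this AZ
    }).promise();
    const pub = await ec2.createSubnet({
      VpcId: Vpc.VpcId,
      AvailabilityZone: zones[i],
      CidrBlock: `10.0.${i + 100}.0/24`    // public subnet for this AZ
    }).promise();
    subnets.private.push(priv.Subnet.SubnetId);
    subnets.public.push(pub.Subnet.SubnetId);
  }
  return { vpcId: Vpc.VpcId, subnets };
}

// A public service gets an internet-facing ELB in the public subnets,
// anything else gets an internal ELB in the private subnets.
function createServiceElb(name, isPublic, subnets, securityGroupId) {
  return elb.createLoadBalancer({
    LoadBalancerName: name,
    Scheme: isPublic ? 'internet-facing' : 'internal',
    Subnets: isPublic ? subnets.public : subnets.private,
    SecurityGroups: [securityGroupId],
    Listeners: [{ Protocol: 'HTTP', LoadBalancerPort: 80,
                  InstanceProtocol: 'HTTP', InstancePort: 3000 }]
  }).promise();
}
```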
We created an AMI (Amazon EC2 image) for each one of our services. The base AMI has the minimum requirements already installed, is ready to start the service application and carries the SSH keys to clone the repo from GitHub.
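Baking such an AMI from an already-configured instance is a one-liner with the SDK; a rough sketch (instance ID and names are placeholders):

```js
const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'eu-west-1' });

// Bake an AMI from an instance that already has node, the GitHub deploy key
// and the service's base requirements installed.
ec2.createImage({
  InstanceId: 'i-0123456789abcdef0',   // hypothetical prepared instance
  Name: 'adserver-base-image',
  Description: 'Base image for the adserver microservice'
}).promise()
  .then(res => console.log('AMI id:', res.ImageId))
  .catch(err => console.error(err));
```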
The base AMI is used to create a launch configuration. A launch configuration is a template for the EC2 instances launched into an Auto Scaling group. The Auto Scaling group spins up new instances whenever an alarm fires (e.g. we have an alarm that fires when latency is above 200ms for a period of 5 minutes). A new instance starts from the launch configuration and, on start-up, executes a git pull to update the code from the GitHub repository, then runs npm install to pick up any new node modules added to the package.json file. The last command restarts the service to make the updates effective.
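A condensed sketch of that wiring (all names, the AMI id, subnets and sizes are placeholders, assuming the aws-sdk Node module; the user-data script is the git pull / npm install / restart sequence described above):

```js
const AWS = require('aws-sdk');
const autoscaling = new AWS.AutoScaling({ region: 'eu-west-1' });
const cloudwatch = new AWS.CloudWatch({ region: 'eu-west-1' });

// Runs on instance boot: refresh the code, install new modules, restart.
const userData = [
  '#!/bin/bash',
  'cd /home/ec2-user/adserver',
  'git pull origin master',
  'npm install',
  'npm restart'
].join('\n');

async function setupScaling() {
  await autoscaling.createLaunchConfiguration({
    LaunchConfigurationName: 'adserver-lc',
    ImageId: 'ami-xxxxxxxx',             // the service's base AMI
    InstanceType: 't2.medium',
    UserData: Buffer.from(userData).toString('base64')
  }).promise();

  await autoscaling.createAutoScalingGroup({
    AutoScalingGroupName: 'adserver-asg',
    LaunchConfigurationName: 'adserver-lc',
    MinSize: 2,
    MaxSize: 10,
    LoadBalancerNames: ['adserver-elb'],
    VPCZoneIdentifier: 'subnet-aaa,subnet-bbb,subnet-ccc'  // private subnets
  }).promise();

  // Scale out by one instance when the policy is triggered...
  const { PolicyARN } = await autoscaling.putScalingPolicy({
    AutoScalingGroupName: 'adserver-asg',
    PolicyName: 'adserver-scale-out',
    AdjustmentType: 'ChangeInCapacity',
    ScalingAdjustment: 1
  }).promise();

  // ...and trigger it when ELB latency stays above 200ms for 5 minutes.
  await cloudwatch.putMetricAlarm({
    AlarmName: 'adserver-high-latency',
    Namespace: 'AWS/ELB',
    MetricName: 'Latency',
    Dimensions: [{ Name: 'LoadBalancerName', Value: 'adserver-elb' }],
    Statistic: 'Average',
    Period: 300,
    EvaluationPeriods: 1,
    Threshold: 0.2,                      // seconds, i.e. 200ms
    ComparisonOperator: 'GreaterThanThreshold',
    AlarmActions: [PolicyARN]
  }).promise();
}

setupScaling().catch(err => console.error(err));
```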
This configuration is easy enough, but deployment is still a long process and we are still missing an easy way to start a development environment.