DistilledODN is a mission-critical piece of technology, and the infrastructure has been designed from the ground up with that in mind.
Sitting in AWS, the DistilledODN platform is currently located in 3 regions: Dublin (Ireland), Oregon (US), and Virginia (US).
The information below gives a high-level insight into our infrastructure, and you can also find more information on our Resilience and Security page.
The platform runs with a level of redundancy that is maintained during times of increased traffic by way of auto-scaling. When traffic crosses certain thresholds (defined by various infrastructure metrics), new servers are automatically provisioned and brought into service within a few minutes.
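The threshold-based scaling described above can be sketched as a simple policy check. The metric, thresholds, and step sizes below are invented for illustration; the platform's actual scaling rules are defined by its own infrastructure metrics.

```python
# Illustrative auto-scaling decision: grow the fleet when a load metric
# crosses a threshold. All names and numbers here are hypothetical.
from dataclasses import dataclass

@dataclass
class ScalingPolicy:
    cpu_threshold: float  # average CPU % across the fleet that triggers scaling
    step: int             # servers to add per scaling event
    max_servers: int      # hard ceiling on fleet size

def desired_capacity(current: int, avg_cpu: float, policy: ScalingPolicy) -> int:
    """Return the new fleet size after applying the policy."""
    if avg_cpu > policy.cpu_threshold:
        return min(current + policy.step, policy.max_servers)
    return current

policy = ScalingPolicy(cpu_threshold=70.0, step=2, max_servers=20)
print(desired_capacity(4, 85.0, policy))  # traffic spike -> scale out to 6
print(desired_capacity(4, 40.0, policy))  # normal load   -> stays at 4
```

In practice this kind of rule is evaluated continuously by the cloud provider's auto-scaling service rather than by application code, which is what allows new servers to come into service within minutes of a spike.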
We provide two methods of failover, ensuring that should the DistilledODN platform or AWS suffer any problems, your traffic will be immediately and automatically routed directly to your origin servers.
This means that if there is an unexpected service interruption to the platform, your website's visitors are not affected and can continue to browse your website as normal.
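The failover behaviour described above amounts to a health-checked routing decision: while the platform passes its checks, traffic flows through it; otherwise traffic goes straight to the origin. This is a minimal sketch of that idea only; the hostnames, IPs, and health-check mechanism are all hypothetical, not DistilledODN's actual implementation.

```python
# Sketch of health-check failover: resolve to the platform while it is
# healthy, otherwise fall back to the origin. All values are illustrative.

def resolve(platform_ip: str, origin_ip: str, platform_healthy) -> str:
    """Return the IP that traffic should be sent to right now."""
    return platform_ip if platform_healthy() else origin_ip

# Platform healthy: traffic goes via the platform.
print(resolve("203.0.113.10", "198.51.100.5", lambda: True))
# Platform down: traffic goes directly to the origin.
print(resolve("203.0.113.10", "198.51.100.5", lambda: False))
```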
We provide multiple points of presence (POPs), and most of our customers utilise all of them. To ensure that your website maintains peak performance, traffic is routed to the POP with the lowest latency (usually the nearest).
This DNS-based latency detection also provides an additional level of protection should a specific AWS region ever suffer connectivity issues - your traffic will automatically go via one of our other data centres.
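The routing logic above can be sketched as picking the lowest-latency healthy POP, with an unhealthy region simply dropping out of the candidate set. The region identifiers mirror the three regions listed earlier; the latency figures are made up for illustration.

```python
# Sketch of latency-based POP selection with automatic region failover.
# Latency numbers are invented; real systems measure these continuously.

def pick_pop(latencies_ms: dict, healthy: set) -> str:
    """Return the healthy POP with the lowest measured latency."""
    candidates = {pop: ms for pop, ms in latencies_ms.items() if pop in healthy}
    return min(candidates, key=candidates.get)

latencies = {"eu-west-1": 18.0, "us-west-2": 140.0, "us-east-1": 95.0}

# All regions up: a visitor near Dublin is routed to eu-west-1.
print(pick_pop(latencies, {"eu-west-1", "us-west-2", "us-east-1"}))
# eu-west-1 unreachable: the same visitor transparently falls back to us-east-1.
print(pick_pop(latencies, {"us-west-2", "us-east-1"}))
```

In production this decision is made at DNS resolution time by the DNS provider's latency-based routing, so visitors are steered before a connection is even opened.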
With tens of thousands of requests going via the DistilledODN platform every minute, it is important that deployments do not interrupt service.
We use zero-downtime software deployment strategies to ensure that when we roll out updates, your customers don't notice.
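One common zero-downtime strategy is a rolling deployment: servers are updated one at a time and health-checked before the next one is touched, so the fleet always has capacity in service. The sketch below illustrates that general pattern, not DistilledODN's specific tooling.

```python
# Minimal rolling-deployment sketch: update one server at a time, and halt
# if a health check fails so a bad release never takes out the whole fleet.
# Server names and callbacks are hypothetical.

def rolling_deploy(servers: list, deploy, health_ok) -> list:
    """Update servers one by one; return the list successfully updated."""
    updated = []
    for server in servers:
        deploy(server)             # push the new version to this server only
        if not health_ok(server):  # bad release: stop before touching more
            break
        updated.append(server)
    return updated

fleet = ["web-1", "web-2", "web-3"]
print(rolling_deploy(fleet, deploy=lambda s: None, health_ok=lambda s: True))
```

Because only one server is out of rotation at any moment, the remaining servers keep serving the tens of thousands of requests per minute uninterrupted.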
The platform runs with levels of redundancy capable of handling large and sudden spikes in traffic. The diagram below shows an example where a customer's traffic increased 80x within 2 hours due to a news article.
During this spike the site received over 200 million requests, peaking at over 4,000 requests per second.
The platform responded automatically, temporarily bypassing certain non-essential internal services whilst auto-scaling additional servers, meaning that the site continued to operate and suffered no downtime.
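The bypass behaviour described above is a form of load shedding: under heavy load, non-essential work is skipped so the core request path keeps serving. The service names and threshold below are invented for illustration; they are not the platform's real internal services.

```python
# Load-shedding sketch: above a request-rate threshold, only essential
# services run. Names and the 3,000 req/s threshold are hypothetical.

ESSENTIAL = {"routing", "content_delivery"}
NON_ESSENTIAL = {"analytics", "detailed_logging"}

def services_to_run(requests_per_second: float, shed_above: float = 3000.0) -> set:
    """Return the set of internal services to run at the current load."""
    if requests_per_second > shed_above:
        return set(ESSENTIAL)  # shed all non-essential work during the spike
    return ESSENTIAL | NON_ESSENTIAL

print(sorted(services_to_run(500)))   # normal load: everything runs
print(sorted(services_to_run(4000)))  # spike: only the core request path
```

Shedding non-essential work buys time for the auto-scaling described earlier to bring extra servers into service, after which normal operation resumes.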