When Enterprise IT systems migrate to AWS


So we've all seen the marketing slides...

  • Company X saved 40% in infrastructure costs
  • Company Y collapsed their monolith into 40 Lambdas

But what happens when an on-prem system has hummed its tune in the local data centre for years, and the vendor has never even thought about a cloud migration or contemplated what capabilities exist in the public cloud?

These stories don't make the headlines; this one, however, is slightly more interesting, and I'd like to share some details, predominantly around the networking solution built on AWS Transit Gateway.

Setting the scene

Our target application receives images from devices located across many geographical locations. What's unique about this system is that these client imaging devices are proper old school!

They are configured with:

  • A single endpoint to send images to
  • No DNS support, so they target a single IP address rather than a hostname
  • Manual, per-device configuration updates, and there are 800+ of them

All the above makes this a challenging application to migrate and host in AWS.

Redundancy within an AWS Region

When implementing redundant architectures in AWS, DNS often comes into play when spreading components over multiple Availability Zones (AZ) within a single region.

Elastic Load Balancing gives us multiple interfaces across two or more AZs behind a common entry point; however, to benefit from this redundancy we must reference the load balancer by its DNS name.

Given our imaging clients' limitations, we need to maintain a single IP address that remains reachable even if one of the AZs becomes unavailable.

AWS allows us to assign CIDR blocks to subnets; each subnet is bound to a single AZ and is then used to host our resources.

It is currently not possible to move an allocated CIDR block from one AZ to another without deleting and recreating the subnet!

How can we provide a single IP address to our imaging clients in a way that lets us move it across AZs? We basically need some sort of overlay IP address that our clients can reference.

The Call for Transit Gateway

Transit Gateway, simply put, is a massive router that enables you to control where your traffic goes within AWS.

It enables many different hybrid network architectures as detailed here in the Hybrid Connectivity AWS Whitepaper.

Using Transit Gateway we can satisfy our overlay requirement using the following configuration.

As shown in the diagram above:

  • Clients are configured with the overlay IP address 192.168.0.1
  • On-prem client traffic destined for 192.168.0.1 is passed to Transit Gateway via the Direct Connect Gateway
  • The Transit Gateway route table contains a route for the overlay IP address 192.168.0.1 pointing at the VPC attachment
  • Transit Gateway passes any traffic it receives for 192.168.0.1 to the VPC
  • The VPC route table sends traffic for 192.168.0.1 to the instance's network interface, eni-123456789
💡
The instances must have the source/destination check disabled to be able to receive traffic for the overlay IP.
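To make the wiring concrete, here is a minimal sketch of the two EC2 API calls involved, written boto3-style. The resource IDs are hypothetical placeholders (they are not values from this deployment), and `ec2` is assumed to be a boto3 EC2 client such as `boto3.client("ec2")`:

```python
# Sketch: pin the overlay IP (a /32) to one instance's ENI.
# All resource IDs below are hypothetical placeholders.

OVERLAY_CIDR = "192.168.0.1/32"

def overlay_route_params(route_table_id: str, eni_id: str) -> dict:
    """Parameters for ec2.create_route() pinning the overlay /32 to an ENI."""
    return {
        "RouteTableId": route_table_id,
        "DestinationCidrBlock": OVERLAY_CIDR,
        "NetworkInterfaceId": eni_id,
    }

def attach_overlay(ec2, instance_id: str, route_table_id: str, eni_id: str) -> None:
    """Wire the overlay IP to an instance; `ec2` is a boto3 EC2 client."""
    # Disable the source/destination check first, otherwise the instance
    # drops packets addressed to an IP it does not own.
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        SourceDestCheck={"Value": False},
    )
    ec2.create_route(**overlay_route_params(route_table_id, eni_id))
```

Keeping the parameters in a small helper makes the same dict reusable for `replace_route` later, when the overlay is moved between AZs.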

To fail traffic over from one Availability Zone to another, it's a simple routing change to the VPC route table, as follows.

Our application needs to listen and respond on this overlay IP address, which is fairly trivial: configure the overlay address on the instance (for example on the loopback interface) and have a server such as NGINX or Apache bind to it.

The route table change can be triggered by a health check on the EC2 instance, a CloudWatch Synthetics canary, or a manual failover.
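Whatever the trigger, the failover itself reduces to a single `replace_route` call that re-points the overlay /32 at the standby instance's ENI. A minimal handler sketch, assuming a boto3 EC2 client and hypothetical resource IDs (none of these names come from the original deployment):

```python
# Sketch of a failover handler (e.g. invoked by a Lambda subscribed to a
# health-check alarm). IDs are hypothetical; `ec2` is a boto3 EC2 client.

OVERLAY_CIDR = "192.168.0.1/32"

def handle_failover(ec2, route_table_id: str, standby_eni: str) -> str:
    """Re-point the overlay /32 at the standby AZ's ENI.

    Returns the new target ENI so the change is easy to log.
    """
    ec2.replace_route(
        RouteTableId=route_table_id,
        DestinationCidrBlock=OVERLAY_CIDR,
        NetworkInterfaceId=standby_eni,
    )
    return standby_eni
```

Because the overlay /32 only ever exists in the route tables, this one API call is the entire failover; the clients keep sending to 192.168.0.1 and never notice the move.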

I hope this helps someone else who is stuck with legacy clients in the enterprise world.

Cheers