When you migrate a legacy application to the cloud, a thorough audit of your app will reveal a number of elements or procedures that either do not scale well or are not compatible with your new cloud infrastructure. Unfortunately, quick “lift and shift” cloud migration tools do not always solve these issues.
Our ultimate goal is to transition legacy applications from instances that are manually configured — and therefore special, dependent, and finicky — to instances that are similar, easy to destroy, and easy to replace.
Here are a few issues that engineers at Logicworks see frequently, and how we confront them in a managed AWS deployment:
Hardcoded IPs
As a rule of thumb, a cloud engineer should touch IP addresses as little as possible. In a flexible, auto-scaling, self-healing environment, you should be able to spin up and destroy instances with little or no “loyalty” to a particular VM.
Hardcoding IPs into your source code may have been a good solution for DNS-related complications or other requirements in your current environment. But hardcoding IPs into source code running on AWS is an especially bad idea for a number of reasons:
- When you stop and start an instance via the API or console, its public IP changes, and the hardcoded value forces a code change, recompile, and redeploy
- Security group rules that reference a hardcoded IP do not follow the instance; when the IP changes, you have to tear the affected rules down and recreate them
- If you are using CloudFormation stacks, you cannot update an instance whose database is pinned to a static IP; the stack will try to spin up a new instance while that IP address is still taken
- It forces the same address to be used in every environment (dev, sys, qa, prod)
The Solution: Make API calls to discover where a resource currently lives. In AWS, that might look like the following (filtering on a hypothetical Name tag of my-db-server):
$ipaddress = `aws ec2 describe-instances --filters "Name=tag:Name,Values=my-db-server" --query "Reservations[].Instances[].PrivateIpAddress" --output text`;
Replacing the hardcoded IP with an Elastic IP (EIP) is a messy and unnecessary workaround; making an API call is simpler.
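If you would rather make the call in application code than shell out, a boto3 equivalent might look like this sketch (the tag value is hypothetical, and region and credentials are assumed to come from your AWS configuration):

import boto3

# Look up the private IP of the instance tagged with a hypothetical Name of "my-db-server"
ec2 = boto3.client("ec2")
resp = ec2.describe_instances(Filters=[{"Name": "tag:Name", "Values": ["my-db-server"]}])
ip = resp["Reservations"][0]["Instances"][0]["PrivateIpAddress"]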
Chatty Protocols
On your current network, a chatty protocol that requires a server to wait for acknowledgment before it can transmit again may not be an issue. When you move to the cloud, variable latency can turn a millisecond wait time into many minutes. Server-side session state often adds further stress.
Increasing bandwidth or upgrading your instance type is not a long-term solution; on cloud infrastructure, the ultimate goal is to create protocols that are latency tolerant.
The first step is to think of these protocols as services, not objects. You want each request to do as much work as possible, grouping related operations into one call rather than making a separate remote method invocation for each one; the sketch below illustrates the difference.
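As a rough illustration, here is a minimal Python sketch, assuming a hypothetical internal HTTP API at api.example.internal that exposes both per-user and bulk endpoints:

import requests

BASE = "https://api.example.internal"  # hypothetical service URL
user_ids = [1, 2, 3, 4]

# Chatty: one network round trip, and one latency penalty, per user
users = [requests.get("%s/users/%d" % (BASE, uid)).json() for uid in user_ids]

# Latency-tolerant: a single round trip fetches the whole group
users = requests.get("%s/users" % BASE, params={"ids": ",".join(map(str, user_ids))}).json()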
The Solution: ElastiCache is the AWS-managed implementation of Memcached or Redis, which allows us to cache data locally and avoid going over the network. Depending on the environment, we usually host our ElastiCache tier on a high-memory instance type with better I/O.
Memcached and Redis can be installed on any EC2 instance, but that becomes a problem when every application instance needs to reach a common cache. ElastiCache is maintained as a separate tier and is accessed over TCP using the same wire protocols.
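As a minimal sketch of the cache-aside pattern against such a tier, assuming a hypothetical ElastiCache Redis endpoint and a placeholder data-access function:

import json
import redis

# Hypothetical ElastiCache endpoint; substitute your cluster's address
cache = redis.StrictRedis(host="my-cache.abc123.0001.use1.cache.amazonaws.com", port=6379)

def fetch_user_from_db(user_id):
    return {"id": user_id}  # placeholder for your real data-access layer

def get_user(user_id):
    key = "user:%s" % user_id
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)            # cache hit: no trip over the network to the database
    user = fetch_user_from_db(user_id)
    cache.setex(key, 300, json.dumps(user))  # keep the entry warm for five minutes
    return user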
You should also look for opportunities to return partial responses. As in a SQL SELECT statement, you can specify only the fields you want back in a partial response. If you are using CloudFront, GET requests can be broken into smaller units with the Range header; Range GET requests improve the efficiency of partial downloads. CloudFront checks its edge cache first, and if the cache does not contain the requested range, the request is forwarded to the origin.
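From the client side, a Range GET is a one-liner; for illustration (the distribution URL is a placeholder):

import requests

url = "https://d111111abcdef8.cloudfront.net/assets/large-file.bin"  # placeholder distribution
resp = requests.get(url, headers={"Range": "bytes=0-1048575"})  # request only the first 1 MB
print(resp.status_code)  # 206 Partial Content when the range is honored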
At the same time, look out for low-hanging fruit: there may be unnecessary round trips that can simply be combined or eliminated.
Unused or Non-Standard Libraries
Legacy applications often carry dozens or even hundreds of non-essential libraries that no one has cleaned out in years. If you keep these libraries in your new environment, then every time you scale or deploy a new instance you must either install them manually, maintain them in a baked AMI, or let your Puppet run spend hours installing them one by one, adding unnecessary latency to every deployment.
The Solution: Start with a clean machine and uninstall the libraries one by one to see what breaks and when it breaks (a rough sketch of that loop follows). This will not only make your application more easily scalable; it will also help you better understand your application’s vulnerabilities.
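For a Python application, that trial-removal loop might look like the following sketch, assuming pip-managed packages, a pytest suite as the “what breaks” signal, and a hypothetical list of suspect libraries:

import subprocess

candidates = ["legacy-util", "old-xml-lib", "unused-orm"]  # hypothetical suspects

for pkg in candidates:
    subprocess.run(["pip", "uninstall", "-y", pkg], check=True)
    tests = subprocess.run(["pytest", "-q"])   # re-run the suite to see what breaks
    if tests.returncode != 0:
        print("%s is still needed; reinstalling" % pkg)
        subprocess.run(["pip", "install", pkg], check=True)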
Optimizing an application for migration to AWS is not the simplest process. In fact, you may have to decide which applications make the most sense in your public cloud and which would make an easier transition in a private cloud solution. But eliminating these frequent sticking points will help you take full advantage of the scalability and flexibility of whatever cloud infrastructure you choose.
Feel free to contact us if you want to learn more about how we optimize your applications for the cloud.