Cloud migration is full of potential pitfalls. However, not all mistakes are as immediate or glaring as downtime or data loss; some only surface months later, when their impact is deeper and harder to reverse.
Here are 3 common AWS migration mistakes that have the biggest impact on the long-term success of cloud projects:
Mistake #1: Assuming that the cloud “just works” or is cheap to maintain
Many business leaders overestimate the role and responsibilities of Amazon in maintaining the company’s AWS environment. The perception is that once a set of applications is migrated to AWS, in-house engineers no longer need to worry about maintaining the infrastructure layer; after all, Amazon does all that “maintaining” work for you.
This is a potentially dangerous misconception – and one that we run into often.
While AWS will maintain the physical infrastructure that supports your environment, it will not configure your virtual instances or get them ready to run your code. Moving to the cloud means you have outsourced racking and stacking servers, but your IT team still needs to configure networks, maintain permissions, lock down critical data, set up backups, create and maintain machine images, and handle dozens of other tasks AWS does not perform. AWS is a world-class engine equipped with a robust set of tools, but it is not a car you can just drive off the lot.
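To make that concrete, here is a minimal sketch of one such task, creating and tagging a machine image, using Python and boto3. The instance ID and names are hypothetical placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical instance ID; substitute one of your own.
INSTANCE_ID = "i-0123456789abcdef0"

# Create an AMI from a running instance without rebooting it.
image = ec2.create_image(
    InstanceId=INSTANCE_ID,
    Name="app-server-nightly",
    Description="Nightly image of the application server",
    NoReboot=True,
)

# Tag the image so a cleanup job can find and expire old copies later.
ec2.create_tags(
    Resources=[image["ImageId"]],
    Tags=[{"Key": "backup", "Value": "nightly"}],
)
print("Created image:", image["ImageId"])
```

Scheduling this script and pruning old images is still your team's job; AWS only provides the API.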
Experienced engineers are rarely under the illusion that the cloud is easy. Business leaders often are, which can make it difficult for engineers to advocate for critical projects like infrastructure automation or outsourced 24x7x365 AWS support.
Key takeaway: Your cloud migration plan is important, but your cloud maintenance plan is even more so. Make sure business leaders understand the time and cost of maintaining AWS resources and prioritize infrastructure automation plans so that engineers are not constantly firefighting problems.
Mistake #2: Building custom, “snowflake” AWS environments
Most cloud projects are initiated by a single business unit or team. Over time, each business unit creates its own cloud environment, with its own security rules and management processes, perfectly customized to its application needs.
This strategy has its benefits: business units can move quickly, without central oversight, and usually get products to market faster. But it also has significant downsides. Like any IT project, AWS infrastructure tends to grow more complex and "custom" over time, and can eventually become an unknown, ungovernable environment that costs too much and has no central control. AWS does offer strong services for governing complex environments, but enterprises need processes in place to implement and manage them.
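As one illustration of those governance services, AWS Config ships managed rules that continuously audit accounts against a baseline. This sketch, in Python with boto3, enables the managed rule that flags security groups allowing unrestricted SSH; it assumes an AWS Config recorder is already running in the account:

```python
import boto3

config = boto3.client("config", region_name="us-east-1")

# Enable an AWS-managed rule that flags security groups allowing
# SSH from 0.0.0.0/0. Assumes an AWS Config recorder is already
# set up and recording in this account and region.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "restricted-ssh",
        "Description": "Security groups must not allow unrestricted SSH",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "INCOMING_SSH_DISABLED",
        },
    }
)
```

The service evaluates the rule automatically, but deciding which rules apply across business units, and who remediates violations, is still a process your enterprise has to own.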
You get the best of both worlds by templatizing some common features in AWS CloudFormation or another templating tool like Terraform, which product teams can then customize. There are two main benefits to this model:
- Every AWS environment has common properties, such as naming conventions, hub/spoke VPC architecture, access to Active Directory, etc. That means every environment by default meets your basic security parameters.
- Future AWS migration projects can happen more quickly, as product teams already have a basic template to work from.
Over time, your team will build bootstrapping scripts, likely with configuration management tools, that customize these templates per environment. The key insight is that embracing rules-based, automated, repeatable systems is just as valuable as the upfront work of getting to the cloud.
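A minimal sketch of that workflow, using Python and boto3: a product team deploys the shared baseline template, passing only its own parameters. The template file and parameter names here are hypothetical:

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Hypothetical shared baseline template maintained by a central team.
with open("baseline-vpc.yaml") as f:
    template_body = f.read()

# A product team customizes the common template via parameters only,
# so naming conventions and network layout stay consistent everywhere.
cfn.create_stack(
    StackName="payments-team-vpc",
    TemplateBody=template_body,
    Parameters=[
        {"ParameterKey": "TeamName", "ParameterValue": "payments"},
        {"ParameterKey": "CidrBlock", "ParameterValue": "10.42.0.0/16"},
    ],
    Tags=[{"Key": "owner", "Value": "payments"}],
)
```

Because every environment comes from the same template, central teams can audit and update the baseline in one place while product teams keep their autonomy.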
Key takeaway: After you get your first product or workload to AWS, take a step back and decide which aspects can be templatized and centrally controlled. Either focus engineering time on creating these templates and scripts or outsource that work to a third party (like Logicworks).
Mistake #3: Not using AWS native tools
Vendor lock-in is one of the biggest concerns for enterprises migrating to AWS. After spending years under the yoke of database giants and hardware vendors, IT leaders are skittish about getting stuck in a single public cloud, and therefore choose to use only the most basic AWS services (EC2, VPC) while bringing or building their own queueing system, code deployment tool, and so on.
The idea is that if we ever need to leave AWS, we will not have invested much in it.
In reality, this technique usually does more harm than good. Enterprises end up spending as much time upfront, if not more, building or re-architecting their own tools for AWS as it would have taken them to re-architect an AWS tool for another cloud platform in the future. Then they have to manage those custom tools, update them, and improve them over time. Enterprises that avoid AWS RDS, Amazon's managed database service, just end up paying higher licensing costs to Oracle and Microsoft to host their databases on EC2.
The benefit of AWS is that its engineers have spent the last 10 years building advanced, scalable infrastructure services, and they release hundreds of updates to these services every year. You do not have to update them. You do not have to patch them or version them. If they break, AWS fixes them.
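For comparison, here is roughly what provisioning a managed, Multi-AZ PostgreSQL database on RDS looks like with Python and boto3; every identifier and size below is a hypothetical placeholder. Backups, failover, and minor-version patching are handled by the service rather than by your team:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Hypothetical identifiers and sizing, for illustration only.
rds.create_db_instance(
    DBInstanceIdentifier="app-db",
    Engine="postgres",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,
    MasterUsername="appadmin",
    MasterUserPassword="change-me",  # keep real credentials in Secrets Manager
    MultiAZ=True,                    # standby replica with automatic failover
    BackupRetentionPeriod=7,         # automated daily backups, kept 7 days
    AutoMinorVersionUpgrade=True,    # AWS applies minor engine patches
)
```

Running the same database yourself on EC2 means owning the replication, backup, and patching logic that these few parameters replace.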
Key takeaway: If you already have sophisticated in-house, application-specific tooling, you can bring that with you to AWS. But wherever possible, evaluate AWS tools and consider the opportunity cost of maintaining your own tools. What else could your most valuable engineers be doing with their time?
Migrating to AWS is an exciting but precarious time. In the rush to move workloads, enterprises tend to focus on Day 1 cost savings and agility while deprioritizing Day 1000 maintenance and upgrades. Enterprises that plan ahead and aim to maximize engineering effort will get the most out of AWS.
Logicworks is an enterprise cloud automation and managed services provider with 25 years of experience transforming enterprise IT. We are an AWS Premier Partner with the AWS Migration Competency. Contact us to learn more about our managed cloud solutions.