Migrating & modernising SQL workloads - a CTO's perspective
This conversation was conducted as part of our recent Cloud Pathway event, 'Moving SQL Server to AWS Aurora'.
Dan Pacitti is CTO of ASP, an events and exhibitions website company providing their CMS 'Showoff' to events companies around the world. Cloudsoft worked with ASP to migrate and modernise their SQL Server workload onto AWS, saving them 65% on their database costs.
These migration projects often involve a compelling event, and there's no event more compelling than being faced with the bill for a hardware refresh of the data center!
You can watch a recording below, or read the transcript. Please note that the transcript has been edited for clarity.
Alasdair: So you’re CTO at ASP Events, what does that involve? What does a typical day look like for you guys down at ASP?
Dan: So my CTO role is probably different from others you may have heard speak. I'm very hands-on, in the sense that I get my hands dirty in absolutely everything my team does! We're a very small team, so I like keeping in touch. I've driven a lot of change in terms of cloud adoption and how we can move our SaaS products into the cloud effectively and make the most of the services that AWS can give us.
Alasdair: That’s interesting, because as I recall when we first met several years ago you were originally hosting your service across a couple of data centers. How did AWS appear on your radar and what attracted you to AWS specifically?
Dan: We looked at AWS four or five years ago when we made a bit of investment into our data centers, but at the time AWS wasn't necessarily the right fit for us. We didn't feel that it was the right time to adopt it for our workloads, but we always knew that it was a step we wanted to take and would take. The time came when we needed to invest again in architecture and hardware, and it made sense at that point to move to the cloud. We had a few issues with running SQL across data centers and trying to run ‘always-on’ availability groups. These migration projects often involve a compelling event, and there's no event more compelling than being faced with the bill for a hardware refresh of the data center!
Alasdair: I hear you! So how and why did you ultimately make the decision that the SQL workload would be migrated to AWS, and what kind of risks and benefits factored into your decision?
Dan: We went for the Rehost first, so we effectively tried to ‘lift and shift’ as much as we could. In terms of moving the SQL, we're not database admins at all! The main risk we considered was that it was a big change for us and there were a lot of code changes in moving to AWS Aurora, but the reason we went for it was the licensing costs. It's quite difficult when building a business case to actually compare apples with apples, because of all the management overhead that is simply removed when you use a managed service from AWS!
Alasdair: Casting my mind back all those years ago, your stack used to involve a lot of Microsoft technologies. You would run everything primarily on Windows hosts accessed via remote desktop; you had the usual Microsoft web stack of IIS and SQL Server with Active Directory, and even, I think, Microsoft DFS as a file share. So in terms of porting that, even with a lift-and-shift mindset, how did all that technology carry over into the brave new world of AWS?
Dan: It didn't massively, but it was one of those things! We used Microsoft because that's what we'd always done. So moving to the cloud, we tested a couple of options. We used DFS for our web servers, so we looked at using a Windows server again with FSx, but that seemed overly complex. We chose to move to a much simpler architecture of Linux and AWS Elastic File System (EFS). In terms of SQL Server, we knew we had to make lots of changes. We looked into RDS, but with its limits we would have ended up having to run four RDS instances as well, so that was a bit of a gotcha! This was a spearhead for us to move to Aurora. We did look at running SQL Server in ‘always-on’ availability groups as well, to try and carry Windows across, and we could run that on EC2.
Again we weren't database admins and to have the overhead of trying to manage that as well as trying to ‘lift and shift’ meant that Aurora was the best option for us.
Alasdair: Once you've experienced consuming a managed service, where all that overhead and admin is taken care of, the thought of going back to hosting your own, particularly with Windows, would keep you awake at night! Obviously you then took the decision that you were going to stop using Windows servers, stop remote desktopping into machines to do your admin chores, and start using Linux with a bit more automation. This is of course a modernisation journey in itself! Can you tell us a bit more about that?
Dan: Yeah, it was a bit of a learning curve! We didn't really have much Linux experience, but I did feel it was the right fit for us. There was a bit of a learning journey with Bash and looking at the user-data scripts, but I felt we had the support we needed to do it. It was scary, don't get me wrong! But it was the right decision to make. Moving to things like EFS was much simpler than DFS, which was a bit sluggish in terms of replication. The move to using as much cloud-native technology as possible really reduced the amount of maintenance and improved manageability.
Alasdair: I think that's testament to you being a hands on CTO!
Dan: Thanks, if you don't challenge the norm then you're never going to get any better really!
Alasdair: So in terms of your development, management & release processes what changes had to be made to those as a result of migrating to the cloud and adopting more automation around CI/CD, or backup and restore?
Dan: Being across a couple of data centers, we didn't really get a chance to do much automation, and a lot of our deployments would be done out of hours. It would require two or three people to be there in case we had to fail back, and each instance would have to be manually updated within the data center. Moving to the cloud and using infrastructure as code, we could just deploy resources from CloudFormation templates on every build of a server. We're actually looking to take that a step further now, looking at CodeDeploy and CodePipeline to take more advantage of that automation capability.
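To make the infrastructure-as-code idea concrete: a CloudFormation template is just a declarative description of resources that can be version-controlled and deployed repeatably. The sketch below builds a minimal, hypothetical template for an Aurora cluster as a Python dictionary; the resource names and instance class are illustrative assumptions, not ASP's actual stack.

```python
import json

# A minimal, hypothetical CloudFormation template for an Aurora MySQL
# cluster. Resource names and property values are illustrative only.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Illustrative Aurora cluster template",
    "Resources": {
        "AppDatabaseCluster": {
            "Type": "AWS::RDS::DBCluster",
            "Properties": {
                "Engine": "aurora-mysql",
                "MasterUsername": "admin",
                # In a real template the password would be managed via
                # Secrets Manager, never hard-coded.
                "ManageMasterUserPassword": True,
            },
        },
        "AppDatabaseInstance": {
            "Type": "AWS::RDS::DBInstance",
            "Properties": {
                "Engine": "aurora-mysql",
                "DBClusterIdentifier": {"Ref": "AppDatabaseCluster"},
                "DBInstanceClass": "db.r6g.large",
            },
        },
    },
}

# Serialise to JSON, ready to hand to `aws cloudformation deploy`.
template_body = json.dumps(template, indent=2)
print(template_body)
```

Because the template is plain data, every server build can deploy an identical stack from the same source-controlled file, which is exactly the repeatability Dan describes.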
Alasdair: How has this impacted formal KPIs like release time or downtime?
Dan: It probably saves us three or four hours per release, it’s a much more streamlined process. We’ve saved a lot of man hours too!
Alasdair: That's an experience that certainly resonates with us! At Cloudsoft we've been building our own automation products for over a decade, automation’s part of our DNA - when you get a taste for it you can’t imagine going back to the manual ways of working!
Just one final question from me: are you sleeping better at night, or are there still things that give you nightmares and keep you awake?
Dan: I am sleeping a lot better, thank you! Things are a lot more self-healing now, so with the auto-scaling and the databases, if anything untoward did happen the system largely recovers itself. I touched earlier on the issues we had with SQL Server when we ran ‘always-on’ availability groups and they failed to fail over gracefully. With Aurora and moving to cloud architecture the failover is almost seamless, which is fantastic for us!
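An "almost seamless" failover still means a brief window where connections drop, so applications typically wrap database calls in a short retry loop. The sketch below is a generic retry helper with exponential back-off, not ASP's production code; `ConnectionError` stands in for whatever error the real database driver raises on a dropped connection.

```python
import time

def with_failover_retry(operation, retries=3, delay=0.1):
    """Retry a database operation across a brief failover window.

    `operation` is any callable that raises on a dropped connection.
    A few retries with a short exponential back-off usually covers the
    gap while the cluster promotes a new writer. Illustrative sketch.
    """
    last_error = None
    for attempt in range(retries):
        try:
            return operation()
        except ConnectionError as err:  # stand-in for a driver-specific error
            last_error = err
            time.sleep(delay * (2 ** attempt))  # back off before retrying
    raise last_error

# Simulate a connection that fails once (mid-failover) then succeeds.
calls = {"n": 0}
def flaky_query():
    calls["n"] += 1
    if calls["n"] < 2:
        raise ConnectionError("server closed the connection")
    return "row"

print(with_failover_retry(flaky_query))  # recovers after one retry
```

Combined with a cluster endpoint that repoints to the new writer automatically, this is why a failover that once required manual intervention now passes almost unnoticed.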
Alasdair: Thanks for sharing Dan and being prepared to talk about your experiences migrating and modernising on AWS!
Considering modernising your workloads?
Cloudsoft are AWS Advanced Consulting Partners, and have migrated and modernised the workloads of numerous clients from a variety of industries. You can read more about modernisation here, or get in touch with one of our Solution Architects for a free 30-minute conversation about how we could help you adopt and make the most of the cloud: