OK, that title came from an article I was drafting around 2015 to complain about a second-tier outsourced pseudo-AWS in China, and about some of the rather dubious samples out there which would be unusable in practice. But rather than complain, here's a practical example of:
Migrating cPanel to AWS without breaking anything or spending a fortune
In contrast to most AWS articles, this series focusses on migrating an optimised WHM/cPanel installation running live WordPress sites to AWS, using a staged approach to introduce AWS-native services.
Most AWS examples create new services, adding only the features needed for the example, with a scope too limited for real-world use. At other times AWS seems expensive: while the "typically $450/month" minimum to run WordPress is cheap in enterprise terms, the scale-out cost can be much higher, and it already sounds high if you are currently serving up to 2,000 simultaneous users for $50/month.
WHM and associated plugins such as ConfigServer Security & Firewall (csf) substantially automate most systems management tasks such as security, backups and updates, as well as managing all the services needed by a typical website: databases, SSL certificates, email and so on.
WHM manages the server and partitions it into multiple hosting accounts, each of which has a control panel ("cPanel") for easy management of that account's domains, databases, files etc., and optionally sandboxed SSH and FTP access.
Overall, WHM/cPanel provides low-cost, easy maintenance, with the ability to delegate a simplified web-based management console to hosting clients. However, recent changes to the licensing model have significantly increased prices for hosting providers, further incentivising the move to cloud-native solutions.
Migration Strategy Overview
The starting position is a "simple WHM" setup consisting of live and DR/standby WHM/cPanel virtual servers directly exposed to the internet, without any of the benefits of the AWS infrastructure – see example setup.
- migrate the live server to AWS, configure the VPC firewall, ensure all services are operational, optimise security and backup (this article)
- scale out to a separate database and multiple front-end web servers
- additional AWS optimisations
Stage 1 – migrating WHM to AWS
This stage sets up the initial WHM server and migrates the existing sites.
tl;dr: set up your networking first to avoid problems later – WHM licensing requires a fixed IP address and server name.
Tip: prepare your DNS first. It's frustrating to get to the end of a migration and discover that you forgot the DNS has a 24-hour TTL, so check your DNS at least a day before the move, e.g.:
- get an Amazon Elastic IP address
- reduce the TTL on the DNS records for the domains you are going to use
- create an A record pointing to the Elastic IP address
- if appropriate, add the Elastic IP address as a permitted email sender in the domain's SPF TXT record
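The DNS preparation above can be sketched with the AWS CLI and a Route 53 change batch. The domain, hosted zone ID and Elastic IP below are placeholders – substitute your own values:

```shell
# check the current TTL a day ahead (the TTL is the second column of dig's answer):
# dig +noall +answer example.com A

# Route 53 change batch: UPSERT the A record with a short TTL pointing at the Elastic IP.
# example.com and 203.0.113.10 are placeholders.
cat > change-batch.json <<'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "example.com.",
      "Type": "A",
      "TTL": 300,
      "ResourceRecords": [{"Value": "203.0.113.10"}]
    }
  }]
}
EOF

# apply it (requires credentials and your real hosted zone ID):
# aws route53 change-resource-record-sets --hosted-zone-id Z0000000000000 --change-batch file://change-batch.json
```

After the migration has settled, a second UPSERT with the normal TTL restores the original caching behaviour.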
Before getting started on the server build:
There's a fairly good guide to the initial setup which I won't repeat here – https://docs.cPanel.net/knowledge-base/web-services/launch-an-aws-ami-instance/ – but I will highlight some salient points.
- when using the WHM AMI, the SSH user is centos (though later, when using Session Manager, the user is ssm-user)
- set up your VPC networking – here are some notes on ports, e.g. port 2089 outbound is required to reach cPanel licensing:
- while setting up the VPC, add a gateway endpoint for S3 so S3 traffic stays within the AWS network
- get an Elastic IP address and set up a record in DNS / Route 53 (WHM licensing requires it)
- choose the instance size, and buy a reserved instance to get the ~75% 'discount' (on-demand instances are around 4x the price of AWS reserved instances and non-AWS hosting services). Note that less spare capacity may be needed on AWS:
- network security on AWS reduces load on the server by blocking ports and certain DDoS traffic before it hits the server's own defences: your mileage may vary, but I'm seeing an order of magnitude (>10x) lower CPU utilisation than expected.
- on AWS, CPU may be burstable but memory is not – that said, the next article will move cache and database services off the server, which will reduce memory load.
- the AWS solution will be to scale out rather than scale up: rather than using an instance size large enough for peak capacity, additional front-end servers will be added when usage thresholds are exceeded.
- subscribe to the cPanel & WHM for Linux AMI in the AWS region (the AMI is free but requires a bring-your-own licence after the trial period; nevertheless it takes a little while to authorise the subscription)
- decide the initial disk space you need
- the root disk will normally be EBS gp2 at <£1/month per 10GB
- EFS is approximately 3x the price of EBS but offers unlimited space. EFS performance and its relation to size is often commented on, but performance was significantly improved in April 2020.
- WHM's Basic WebHost Manager® Setup allows specification of a directory for new home locations, so this can be set to an EFS-mounted path – see also https://s3.amazonaws.com/bizzabo.file.upload/pebknqsASDCVjFdFUK28_Vetter%20-%20Unlimited%20home%20with%20Amazon%20EFS%20and%20cPanel.pdf
- cPanel full backups and account transfer functions require temporary space and may also trigger capacity limits on small disks
- S3 offers cheap backup and file transfer via aws s3 sync, and an S3 bucket can also be mounted directly using s3fs. While s3fs is not an entirely recommended solution, it can work well for some use cases, and S3 writes within the same AWS region were a lot faster than I expected.
- small instances come with no instance store and no swap space – rather than creating a swapfile, Amazon recommends using a larger instance with more memory – another option is to create a separate EBS volume for swap: https://dev.to/hardiksondagar/how-to-use-aws-ebs-volume-as-a-swap-memory-5d15
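The EBS-volume-as-swap option from the last point can be sketched as below. The device name /dev/xvdf is an assumption – check lsblk for the name your attached volume actually gets:

```shell
# Sketch: turn a small dedicated EBS volume into swap space.
# /dev/xvdf is an assumed device name -- verify with `lsblk` first.
SWAP_DEV=/dev/xvdf
FSTAB_LINE="$SWAP_DEV none swap sw 0 0"

# only act if the block device is actually present
if [ -b "$SWAP_DEV" ]; then
  sudo mkswap "$SWAP_DEV"                        # format the volume as swap
  sudo swapon "$SWAP_DEV"                        # enable it immediately
  echo "$FSTAB_LINE" | sudo tee -a /etc/fstab    # persist across reboots
fi

echo "$FSTAB_LINE"
```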
OK – so now you have decided what capacity you need and created your VPC, security group, Elastic IP, and an IAM role for EC2 with access to S3, launch the EC2 instance with these settings.
For Initial whm updates and config see: WHM/cPanel VPS Server setup example.
Pay particular attention to the email setup, and note that you do have to submit a request to AWS to set up a reverse DNS pointer and lift email restrictions – or, in the case of SES, to lift the restriction of the SES sandbox.
#aws command line interface install
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
#remember to configure AWS for your region
aws configure
#amazon systems manager agent install
sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
#amazon efs tools
sudo yum install -y amazon-efs-utils
#s3cmd
sudo yum install s3cmd
AWS Session Manager
Session Manager allows a direct SSH login from within the AWS network. As long as the SSM agent is installed and given sufficient permissions, the SSH ports can be firewalled off while still allowing an SSH session from the browser, either through the EC2 console (Session Manager, not EC2 Instance Connect, which will be blocked by the firewall) or the Systems Manager console (as well as the default local console availability via WHM).
When not running Amazon Linux, the permissions setup is more complex than perhaps it should be; check the instructions carefully and use systemctl status amazon-ssm-agent -l to detect problems with missing permissions in the IAM role:
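The permissions step boils down to attaching the AWS-managed AmazonSSMManagedInstanceCore policy to the instance role. The role name below is an assumption – substitute whatever role you attached to the EC2 instance:

```shell
# Sketch: Session Manager needs the instance role to carry the AWS-managed
# AmazonSSMManagedInstanceCore policy. The role name is an assumption.
ROLE_NAME=whm-ec2-role
POLICY_ARN=arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore

# build the attach command; run it with admin credentials
CMD="aws iam attach-role-policy --role-name $ROLE_NAME --policy-arn $POLICY_ARN"
echo "$CMD"

# then on the server, restart and check the agent:
# sudo systemctl restart amazon-ssm-agent
# systemctl status amazon-ssm-agent -l   # look for "running" and no AccessDenied lines
```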
S3 Backup area
Here we create and mount an S3 bucket to a local /s3 folder, see: https://github.com/s3fs-fuse/s3fs-fuse/wiki/Fuse-Over-Amazon
aws s3 mb s3://bucket-name
#install dependencies for fuse
sudo yum install automake fuse fuse-devel gcc-c++ git libcurl-devel libxml2-devel make openssl-devel
sudo yum install epel-release
sudo yum install s3fs-fuse
/bin/s3fs bucket-name -o use_cache=/cache -o allow_other -o complement_stat -o ahbe_conf=/<optional config location>/ahbe.conf -o multireq_max=5 /s3
If using s3fs, ensure the mount is excluded from processes which may scan the disk as this results in much higher S3 usage charges, see for example s3fs FAQ:
Why does my AWS S3 bill cost a lot more than the storage fee?
A: ListBucket and HeadObject API calls were being made (e.g. by updatedb scanning the directory where the mount point is located). Solution: add your mount point to PRUNEPATHS in /etc/updatedb.conf so updatedb does not include it when it scans.
In practice I do not recommend s3fs for production use: it is not an Amazon-supported use case, and it consumes server memory and especially /tmp space. If the s3fs directory is used as a backup target, e.g. /backup (in the hope that the cPanel server backs up directly to S3), then the /tmp area will run out of space until the backup completes, halting e.g. mysql operations (see also the next article).
Better: use aws s3 sync commands as needed, without mounting the bucket.
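A minimal sketch of the sync approach – the account and bucket names here are assumptions:

```shell
# Sketch: back up an account's files straight to S3 without any s3fs mount.
# Source path and bucket name are assumptions.
SRC=/home2/exampleacct
DEST=s3://example-backup-bucket/exampleacct

# build the sync command; --delete mirrors deletions, --no-follow-symlinks
# avoids duplicating content reachable via the /home/account symlink
CMD="aws s3 sync $SRC $DEST --delete --no-follow-symlinks"
echo "$CMD"
# run it for real once the IAM role or access keys are in place:
# $CMD
```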
Note: WHM's S3 backup destination does not support using the server's IAM role; it requires explicit IAM access keys.
EFS home area
Here we mount a new home directory called /home2 backed by EFS.
This gives us:
- [practically] unlimited disk space
- the ability to add additional servers later
Note: do not replace the WHM /home directory, since this also includes installation and system areas such as /home/virtfs, which contains tmpDSK files as well as symbolic links which will run more efficiently on local disk. Also, regardless of where the home location is set, each account will have a symbolic link at /home/accountname, and the virtfs remains at /home/virtfs.
- in WHM ConfigServer Security & Firewall add NFS port 2049 to CSF TCP_OUT and restart CSF
- similarly check 2049 is enabled in the EFS security group and also that ElasticFileSystem access is included in the web server’s IAM role
- create an efs disk with mount targets in the correct VPC and availability zones
- on the server, create /home2 and mount according to the aws instructions eg:
sudo mount -t efs <mount target>:/ /home2
mounts the root of the EFS disk onto /home2, using the EFS utilities installed in AWS Installs above.
- tip: if the mount hangs until timing out with mount.nfs4: Connection refused, then port 2049 is probably blocked.
- in WHM, Basic WebHost Manager Setup, change the new-user default home directory location to /home2 (in fact WHM will normally choose the area matching "home" with the largest available free space, which will normally be the EFS mount, as EFS always reports the maximum available free space).
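The port-2049 tip above can be checked before attempting the mount. The mount target DNS name below is a placeholder – use the one AWS shows for your file system:

```shell
# Quick reachability check for the EFS mount target before mounting.
# If this reports blocked, check CSF TCP_OUT and the EFS security group.
# The mount target hostname is a placeholder.
EFS_TARGET="fs-0123456789abcdef0.efs.eu-west-2.amazonaws.com"

RESULT="2049 blocked or unreachable"
# /dev/tcp is a bash feature: attempt a raw TCP connection to port 2049
if timeout 3 bash -c "cat < /dev/null > /dev/tcp/$EFS_TARGET/2049" 2>/dev/null; then
  RESULT="2049 reachable"
fi
echo "$RESULT"
```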
So now we have an S3 mount and an EFS mount, but at the moment they will disappear on reboot. To reconnect them at boot, add to /etc/fstab:
<efsid>:/ /home2 efs _netdev 0 0
s3fs#<s3bucket> /s3 fuse _netdev,allow_other,complement_stat,ahbe_conf=ahbe.conf,use_cache=/cache 0 0
and of course, reboot and check ..
some helpful commands:
#test fstab
sudo mount -fav
#list all mounts in human readable form
df -aTh
#check which s3 buckets mounted
ps -ef | grep s3fs
#folder listing and space used, human readable
du -sh -- *
Note that despite adding S3 and EFS it is still possible to run out of disk space – for example, there may be some limited-space /tmp mounts. To change the cPanel default tmp size, edit the /scripts/securetmp line
my $tmpdsksize = 512000; # Must be larger than 250000
and then remove and recreate the tmp disks:
umount -l /tmp
umount -l /var/tmp
rm -fv /usr/tmpDSK
/scripts/securetmp
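The size change can be scripted rather than hand-edited. This sketch works on a local copy so it is safe to dry-run; on the server you would point sed at /scripts/securetmp itself (and 1024000 here is just an example value):

```shell
# Sketch: bump the cPanel tmp disk size before recreating the tmp mounts.
# Works on a copy; the fallback line below mirrors the real file's format.
cp /scripts/securetmp ./securetmp 2>/dev/null || \
  printf 'my $tmpdsksize = 512000; # Must be larger than 250000\n' > ./securetmp

# rewrite the size assignment (example value: 1024000)
sed -i 's/^my \$tmpdsksize = [0-9]*;/my $tmpdsksize = 1024000;/' ./securetmp
grep tmpdsksize ./securetmp
```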
OK – now the server is all set up, it's time to move on to site migration.
There are lots of ways to do this, including transferring the files via S3 with aws s3 sync etc.
It is recommended to get everything set up in test on a separate web address, then, when everything is working, refresh with production data and flip the domain name.
But here we are looking at WHM/cPanel migration, so let's do that.
- did you reduce the DNS TTL above? Check and reduce it again now we are on the real migration.
- if you installed Nginx as a reverse caching proxy during server setup, e.g. via Engintron, disable it now – we will re-enable it after the sites are set up and working.
- on the target server create an SSH key in WHM, Manage root’s SSH Keys
- on the source server, upload the public key and authorise it
- on the target server, WHM, Transfers, Transfer Tool allows you to copy entire accounts to the new server
Note: the source server needs to be open to SSH from the target server.
- Enter the server name, select the SSH public key and connect
- If connected successfully you will see the list of cPanel accounts on the source server, select the accounts to transfer and kick off the transfer: pick one of the smaller / less critical accounts first, get everything up and running well and optimise it for AWS before transferring the next accounts.
- the web page may time out without appearing to finish, but the job will continue executing and complete – check the files and database; full account details will then be available to check through WHM or a terminal session
- check files and data are present – if following the setup above, you should find that account files are on the EFS disk at /home2/account with a symbolic link at /home/account. This works fine except for cron jobs, which don't follow symbolic links – cron will literally hit "No such file or directory" type errors, so check and update the paths in crontab and in individual sites' php.ini and config.php (this is really only a problem if you are using custom cron jobs instead of the HTTP-based WP-Cron).
- flip DNS settings for domains
- check and enable php-fpm
- wait the TTL period
- check the sites are up and running – assuming you have a phpinfo or opcache page on the site, you should be able to visit these and confirm you are reaching the correct site
- check email capabilities
- check the Email Deliverability report in WHM: if you didn't fix up your SPF records in advance of the migration, do that now
- check SMTP settings on individual sites – e.g. Easy WP SMTP allows both configuration of SMTP settings and a test email with an SMTP session debug log from within the site admin.
- check cron jobs, noting the symlink point above
- re-enable Engintron/Nginx – here you will get a 502 Bad Gateway error if following the security setup: by default Nginx takes over ports 80 and 443 and moves Apache to ports 8080 and 8443, then Nginx forwards traffic to the public IP addresses for the relevant domains. So:
- in csf allow inbound traffic on ports 8080 and 8443
- in the AWS security group, allow inbound traffic from the server's public IP addresses (allowing inbound traffic from the same security group is not sufficient, because that only covers the internal address)
- or, if running via a single IP address, edit Engintron's custom rules for Nginx to set $PROXY_DOMAIN_OR_IP "10.10.0.10"; # put the local IP address here
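The crontab path fix-up mentioned earlier can be sketched with sed. The account name and crontab line here are hypothetical:

```shell
# Sketch: rewrite old /home/... paths to the migrated /home2/... location
# in a crontab entry. Account and script paths are hypothetical.
LINE='*/5 * * * * php /home/exampleacct/public_html/cron.php'
FIXED=$(echo "$LINE" | sed 's#/home/#/home2/#')
echo "$FIXED"

# for the real crontab (as root):
# crontab -l -u exampleacct | sed 's#/home/#/home2/#g' | crontab -u exampleacct -
```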
After verifying the sites are running well:
- on the source server, disable the accounts which have been migrated
- in DNS restore normal TTL
- revise backup procedures
Backup procedures will need reviewing every time we optimise the services. Above, an S3 bucket was mounted (e.g. as a /backup folder), and WHM by default will write backups there on quite a flexible schedule. But multi-GB backup files are not the quickest way of restoring or spinning up a new environment; consider:
- create an AMI (EC2, Instance, Create Image) for faster restore of the current server configuration
- back up website files to S3: with an S3 bucket mounted at /backup, one could just add a root cron job to rsync from /home2/account to /bucket/account, or use the AWS CLI directly, e.g.:
aws s3 sync . s3://my-bucket/path --delete
or create a separate bucket for each account or site and add it to the account-specific cron schedule.
Some additional parameters to try: add a cache-control header if you may be serving static files from the bucket, and exclude symlinks and common backup, cache and tmp directories:
--cache-control max-age=31536000 --no-follow-symlinks --exclude '*/.cpanel/*' --exclude '*/tmp/*' --exclude '*/backup/*' --exclude '*/cache/*'
Tip: turn on versioning and it is always safe to use --delete, as the file still exists but merely has a delete marker as its latest version.
However you do it, transfer to an S3 bucket within the same region seems really fast compared to backup from outside AWS.
Tip 2: s3 sync works best for small directory trees; when running over a large area, aws s3 sync may appear to hang with "~0 file(s) remaining (calculating…)" and high CPU usage.
- add a daily SQL backup script: depending on needs, this again could be root or account-specific.
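A daily SQL backup could look like the sketch below; the database and bucket names are assumptions, and the dump/upload commands are left commented so you can wire in credentials first:

```shell
# Sketch of a daily per-database dump shipped to S3.
# Database and bucket names are assumptions.
BUCKET=my-backup-bucket
DB=exampledb
STAMP=$(date +%F)                         # e.g. 2021-03-01
DUMPFILE="/tmp/${DB}-${STAMP}.sql.gz"
echo "$DUMPFILE"

# dump, compress, ship, clean up (enable once credentials are configured):
# mysqldump --single-transaction "$DB" | gzip > "$DUMPFILE"
# aws s3 cp "$DUMPFILE" "s3://$BUCKET/sql/$DB/" && rm -f "$DUMPFILE"
```

Drop it into /etc/cron.daily (root) or an account's cron schedule as appropriate; with bucket versioning on, repeated uploads of the same name are also recoverable.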
Practice a restore – important!
1. Launch a new instance
2. Fix issues
3. Terminate the instance
4. Add fixes to scripts in cloud-init userdata
5. Repeat from step 1
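The userdata that accumulates the fixes might start as the sketch below – every ID and name here is a placeholder, and the commands are guarded so they no-op where the tools or volumes are absent:

```shell
#!/bin/bash
# Hypothetical cloud-init user-data sketch: each fix that survives a restore
# test gets folded in here. All IDs and names are placeholders.
EFS_ID=fs-0123456789abcdef0
BUCKET=my-backup-bucket

mkdir -p /home2 2>/dev/null || true

# remount the EFS home area (needs amazon-efs-utils from the install step)
if command -v mount.efs >/dev/null; then
  mount -t efs "$EFS_ID":/ /home2 || true
fi

# pull the latest SQL dumps back down from S3 for restore
if command -v aws >/dev/null; then
  aws s3 sync "s3://$BUCKET/sql/" /root/restore/ || true
fi
```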
Finishing up – tighten security
Remember the ports opened up at the beginning for WHM/cPanel?
Each time we complete a round of configuration, check whether those ports are still needed and close them off if not.
Bonus points: to reduce the attack surface further, lock down the admin ports for WHM/cPanel, either to specific source IPs (a bastion host/jump box, or a fixed IP if you have one), or split them into a separate "whmadmin" security group and disassociate this security group from the server except when you actually need the admin interfaces – when you need web admin access, go to EC2, Networking, Change Security Groups and add the security group back to the server.
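The attach/detach dance can be scripted so it stays quick enough to actually do. The instance and security group IDs below are placeholders:

```shell
# Sketch: attach the "whmadmin" security group only while you need the admin
# consoles, then detach it again. All IDs are placeholders.
INSTANCE=i-0123456789abcdef0
BASE_SG=sg-0aaaaaaaaaaaaaaaa     # normal web traffic rules
ADMIN_SG=sg-0bbbbbbbbbbbbbbbb    # WHM/cPanel admin ports

# modify-instance-attribute replaces the full group list, so include both
OPEN="aws ec2 modify-instance-attribute --instance-id $INSTANCE --groups $BASE_SG $ADMIN_SG"
CLOSE="aws ec2 modify-instance-attribute --instance-id $INSTANCE --groups $BASE_SG"
echo "$OPEN"    # run before an admin session
echo "$CLOSE"   # run after the admin session
```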