Rear End AWS expansion

Previous articles described a cPanel installation optimised for WordPress web hosting moving to AWS and offloading traffic from the Front End.

Here the back end is offloaded by migrating the database and redis cache to AWS services.

Aurora Database

Create new database instance

  1. Go to AWS Console, RDS, Create database.
  2. Choose Standard Create, Aurora, Serverless, and create credentials.
  3. Set a minimum of 1 capacity unit: this gives 2GB RAM and a minimum charge of $43.80 monthly (or $51.10 in the London region).  Additionally there will be a minimum storage charge of $1/month for the first 10GB.
  4. Choose the VPC (Virtual Private Cloud) configured for the web servers.
  5. Expand Additional connectivity options, Create new VPC security group, and enable the Data API.
  6. Expand Additional configuration, optionally create an initial database, and review the backup and deletion protection options.
  7. Wait!
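The monthly minimum in step 3 can be sanity-checked from the implied per-ACU hourly rates (back-calculated from the monthly figures above: $0.06/ACU-hour, $0.07 in London):

```shell
# Aurora Serverless minimum monthly cost at 1 ACU (hourly rates implied by the figures above)
awk 'BEGIN {
  hours = 730                                  # average hours per month
  printf "Default regions: $%.2f/month\n", 0.06 * hours
  printf "London:          $%.2f/month\n", 0.07 * hours
}'
# Default regions: $43.80/month
# London:          $51.10/month
```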

After a few minutes the database instance will move to status “Available” to receive connections.

RDS Console

The Amazon RDS Query Editor offers a database GUI which is secure in that credentials are stored in the AWS Secrets Manager service, though it offers a fairly low level of functionality.

Command Line

This might be a session directly on the EC2 instance, or on another server which has been allowed to connect in via port 3306; from here standard mysql commands work, eg (substituting the Aurora cluster endpoint for the placeholder):

mysql -h <aurora-endpoint> -u user -p


Is it possible to connect to an Aurora database from phpMyAdmin or another GUI?
Sure: allow port 3306 access through the firewall and security group for the required IP, and for phpMyAdmin add the server details to the configuration:

$cfg['Servers'][$i]['host'] = 'HostName';
$cfg['Servers'][$i]['user'] = 'UserName';
$cfg['Servers'][$i]['password'] = 'Password';
$cfg['Servers'][$i]['port'] = '3306';
$cfg['Servers'][$i]['auth_type'] = 'config';


On WHM/cPanel the phpMyAdmin config can be found at /usr/local/cpanel/base/3rdparty/phpMyAdmin/. As this is a cPanel-managed area the config might be overwritten during updates, so it is better to use the provided features.

WHM has a concept of MySQL Profiles, which govern which SQL server is being managed by the user interface.  These allow the Aurora cluster to be substituted for the built-in SQL server as follows:

  1. Review and rename database users longer than 16 characters:
    1. Unlike MariaDB, Aurora has a username limit of 16 characters: since cPanel database users are normally in the format cpaneluser_dbuser they can easily exceed this, leading to errors of the type “String ‘accountname_username’ is too long for user name (should be no longer than 16)”.
  2. WHM, SQL Services, SQL Profiles
  3. Add a new profile for the Aurora server
  4. Validate and Activate
  5. cPanel will then report that it has remapped the user privileges to the new database server.  It hasn’t, but at this point the cPanel MySQL user interfaces will be connected to Aurora rather than the local MariaDB.  The sites themselves have not been changed and will still be connected to the local db.
  6. For each user and database:
    1. in cPanel, MySQL Databases, re-add the user to the database with the correct privileges
    2. test connection from the command line
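The 16-character limit in step 1 can be checked with a quick one-liner; here some sample cpaneluser_dbuser names are piped in for illustration (in practice the list would come from the MySQL user table):

```shell
# Flag database usernames longer than Aurora's 16-character limit
# (sample names stand in for the real user list)
printf '%s\n' shortuser_db accountname_username site_wp | \
  awk 'length($0) > 16 { print $0 " (" length($0) " chars)" }'
# accountname_username (20 chars)
```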

Note:  the cPanel profile affects the default connection from the command line too; mysql and mysqldump will by default connect to Aurora and no longer operate on the local db.

Data Transfer

Once the database is created and connectivity established, and if some downtime is acceptable, data can be exported and imported with standard command line tools:

mysqldump mydatabase > mydatabasedump.sql
mysql -h <aurora-endpoint> -u user -p mydatabase < mydatabasedump.sql

(Or transfer the dump via S3 if bringing it into AWS from an external host.)

Then for WordPress, update wp-config.php to point to the Aurora database host, ie (substituting the cluster endpoint for the placeholder):

define('DB_HOST', '<aurora-endpoint>');

then clear the PHP opcache if appropriate, log into the site, and check (eg via Site Health, Info) that the database is pointing to the Aurora host as expected.

This will incur a few minutes of downtime per GB of data.  For reduced downtime Amazon has a variety of recommendations, from using an Aurora read replica to migrate from an RDS instance, to Percona XtraBackup and the AWS Database Migration Service.


As always when moving service, update backup processes.

Aurora backups are continuous and incremental so you can quickly restore to any point within the backup retention period – select database cluster, choose Actions, “Restore to point in time”.

Note that the Amazon restore process actually creates a new cluster, and as such the capacity settings etc can be changed for the new instance.

  • Tip: for investigating data issues etc, during the restore select “Additional scaling configuration”, then “Pause compute capacity after consecutive minutes of inactivity”.  This allows non-production databases to run at minimal cost by effectively shutting down the database when not in use.

To use the Aurora backups outside of RDS there are two methods.  Firstly, the RDS snapshot can be exported to S3 (currently only supported in certain regions, but if necessary the snapshot can be copied to one of those regions first).  Secondly, Aurora remains MySQL-compatible, so traditional techniques such as mysqldump may also be used.


Object Caching

WordPress code works on objects which are stored in a relational database.  These objects are expensive to create, as each object instantiation involves queries to:

  • main object table eg wp_posts, wp_users, wp_comments
  • metadata table with rows for each data item not included in the main object table’s columns eg:  wp_postmeta, wp_usermeta, wp_commentmeta
  • separate queries to the taxonomy tables (eg for categories, tags etc)

To reduce the load, WordPress caches serialized copies of the instantiated objects, but by default these are cached as ‘transients’ in the options table (option_name is ‘_transient_objectkey‘, option_value is the serialised object).  Each web request then loads the cached data from the options table, which is still a high database load.

Performance and database load can be improved with any of a number of plugins which replace the WordPress caching mechanism with Memcached or Redis. (Redis can also be used as a memory-only cache by disabling persistence to disk, as done in WHM/cPanel optimised WordPress hosting.)  A good option is the Redis Object Cache plugin: all options are set in the config file, including a per-site cache key prefix, selective flush, and a list of objects not to cache.  To scale out to multiple servers, cache location and consistency need to be dealt with.

The de-facto recommended scale-out option is to use a dedicated caching server as described below, but it isn’t the only option as for example redis clustering can be configured without using Amazon’s services.


ElastiCache

ElastiCache, Amazon’s service for Redis or Memcached, is not a serverless solution: EC2 infrastructure needs to be provisioned, similar to non-serverless RDS.  Since the cache server should be always up, calculate and book reserved capacity rather than paying on-demand prices.

If you are already running redis, current usage can be checked from the OS point of view:

ps -eo pmem,pcpu,rss,size,vsize,args | grep redis

but it is better to ask redis itself:

redis-cli info memory

this gives used_memory_peak statistics, which may be a better guide than the current process size.
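To pull just the peak figures out of that output (field names as emitted by stock Redis; a captured sample stands in for a live redis-cli call here):

```shell
# Extract the peak memory fields from `redis-cli info memory` output.
# The sample below substitutes for: redis-cli info memory
sample='used_memory:1048576
used_memory_human:1.00M
used_memory_peak:2097152
used_memory_peak_human:2.00M'
printf '%s\n' "$sample" | awk -F: '/^used_memory_peak/ { print $1 " = " $2 }'
# used_memory_peak = 2097152
# used_memory_peak_human = 2.00M
```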

If redis isn’t already enabled, estimate from the database size: check eg CloudWatch, Metrics, Aurora VolumeBytesUsed.

The redis database doesn’t need to be as big as the entire database, especially if the application cache is currently using the database itself; about 15% may be sufficient.
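That 15% rule of thumb is easy to apply to the VolumeBytesUsed figure; the 4GB database size here is an illustrative value:

```shell
# Estimate redis cache size as ~15% of the database size (rule of thumb above)
db_gb=4   # illustrative database size in GB
awk -v db="$db_gb" 'BEGIN { printf "Suggested cache size: %.2f GB\n", db * 0.15 }'
# Suggested cache size: 0.60 GB
```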

Next, choose an instance type which will cover the memory capacity.  Cache instances are more expensive than vanilla EC2.  The ElastiCache pricing page gives the chart by cache memory, so one can extract implied database sizes:

Instance type    Cache memory   3yr reserved (monthly)   Implied WordPress/WooCommerce database capacity
cache.t3.micro   0.5GB          $7.30                    4GB (approx 50,000 pages/products/orders)
cache.t3.small   1.37GB         $14.60                   9GB
cache.r5.large   13.07GB        $95.63                   87GB

Note that the ElastiCache wizard defaults to cache.r4.large (12.3GB) – at on-demand London pricing $0.267/h × 2 (cluster) × 730 hours, almost $390/month + VAT.
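That default works out as follows, using the rates from the note above:

```shell
# On-demand cost of the wizard's default two-node cache.r4.large cluster (London rate)
awk 'BEGIN { printf "$%.2f/month\n", 0.267 * 2 * 730 }'
# $389.82/month
```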

So to the creation:

  • Create the ElastiCache redis in the same VPC, subnet and security group as the webserver.
  • In the case of redis-cache for WordPress, configure the website redis connection parameters in wp-config.php.
  • Check the admin interface redis diagnostics to confirm the connection is operating successfully.

That’s it!

Redis is one of those delightful infrastructure components which just works straightaway.

But to check in more detail: browse a few pages on the site, then go back to the AWS Console, ElastiCache, Redis, click on the cluster, select a member node, and there are detailed CloudWatch metrics including eg Current Items (Count).


Redis Enterprise Cloud

In the last few years, AWS services have proved a significant challenge for enterprises built up around open source software. In 2019 Redis Labs changed their licensing terms to avoid having their market stripped by AWS commercialisation of the open source software.  In May 2020 Redis Labs announced a partnership with Microsoft to deliver Azure Cache for Redis.

Redis Enterprise Cloud (with the new Redis 6) is now available on AWS and Azure:

  • Essentials: 30MB free tier; paid plans start at $5/month (without replication)
  • Pro: starts at $1.11/h = $810/month according to the overview page, but the detailed pricing page suggests a minimum of $1.98/h = $1,445/month.
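The two hourly rates convert to the quoted monthly figures at 730 hours/month:

```shell
# Convert Redis Enterprise Cloud Pro hourly rates to monthly figures (730h/month)
awk 'BEGIN {
  printf "Overview rate:  $%.0f/month\n", 1.11 * 730
  printf "Detail minimum: $%.0f/month\n", 1.98 * 730
}'
# Overview rate:  $810/month
# Detail minimum: $1445/month
```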

30MB is only enough for light testing; a more realistic entry level might be 500MB with replication.  Cost comparison:

  • AWS $19.20/month : 2x cache.t3.micro, 2x reserved pricing $14.60/month
  • Redis Enterprise Cloud: $36/month with replication or $47/month multi-AZ
    Pro level needed for ability to add to VPC.

AWS Wrap-up

So after moving the website files to EFS (backed to S3), the database to Aurora, object caching to ElastiCache, and proxy and web application firewall to CloudFront and WAF, the webserver is now a disposable commodity processing unit which we can replace or scale out behind an Elastic Load Balancer as needed.  Cattle, as opposed to a custom-configured WHM/cPanel pet.

part of a series on cPanel migration to AWS

