Website Hosting Reliability and Security

We believe reliability and security are vital for our clients' websites, and place great importance on regular, redundant backups, disaster recovery procedures, and the security best practices described below.

Hosting Reliability and Disaster Recovery

Our hosting utilises Amazon Web Services' (AWS) reliable and secure data centres in Sydney.

We take daily backups of the websites and databases that we host, and backups are kept for 30 days. The daily backups are stored redundantly onsite and offsite, on multiple devices across 3 data centres in Sydney that are isolated from each other. These data centres are built with disaster recovery in mind: each is in a different flood zone and geographical area, and is connected to a different power grid, to ensure they are truly isolated from one another.

Our hosting is monitored 24/7, and in the event of a server failure or a catastrophic event affecting the whole data centre, we are notified immediately and can quickly recover your sites from the redundantly stored backups. Depending on the type of disaster, we can also restore the sites and databases to one of the other data centres unaffected by the event.

In addition, our web servers run on mirrored drives, so there are two real-time copies of your site and of any files that you or your users upload, protecting against the more common single-drive failures.

Hosting Security

We encrypt all data at rest on the disk drives of the web servers and database servers, as well as the backups, to secure all data from unauthorised access to the underlying storage.

We have also set up a number of additional layers of security to protect your data, including:

  • A firewall to ensure only traffic of the expected protocol and on the expected port is allowed to the web servers and database servers.
  • IP whitelists to limit and control which IP addresses can connect to the web server and database for administration purposes.
  • Access to the firewall and IP whitelists is restricted to authorised personnel only.
  • The database is configured to accept encrypted (SSL/TLS) connections only, so that all communications with the database are encrypted.
  • Separate credentials for each website to access its corresponding database. This ensures your database remains secure even if another website's database credentials are compromised.
  • Each credential is created following the security principle of least privilege, so that, for example, the credentials a website uses to access its database have no permission to delete tables.
  • Separate file access permissions for each website, to ensure each website process only has access to its own files and folders.
  • Our servers are kept up to date with security updates.
  • Our servers support the latest Transport Layer Security (TLS) encryption protocols, and we have removed support for the older Secure Sockets Layer (SSL) encryption protocols that have known vulnerabilities. This keeps communications with users' browsers and third-party APIs/service providers secure via TLS encryption.
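As an illustration of the last point, restricting a server to modern TLS protocols can be expressed with Python's standard `ssl` module. This is a minimal sketch only; our servers enforce this in the web-server configuration rather than in application code:

```python
import ssl

# Minimal sketch: a server-side TLS context that refuses the legacy
# SSL/early-TLS protocols with known vulnerabilities.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_2  # allow TLS 1.2 and newer only
```

Any client attempting to connect with SSLv3, TLS 1.0, or TLS 1.1 is rejected during the handshake.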


Website Application Security

For websites that we create, we uphold security best practices:

  • Passwords are one-way hashed before being stored, to ensure we never store passwords in plain text.
  • We ensure processes are in place to prevent brute-force password guessing attacks and session hijacking attacks.
  • Database calls are done via stored procedures and parameterised queries to prevent SQL Injection attacks.
  • We guard against malicious user inputs:
    • All user inputs are validated to ensure they are clean and well-formed, for example that an email address is in the correct format and that a purchase price is a valid dollars-and-cents amount. Friendly error messages are displayed to the user if any inputs are invalid.
    • All inputs must pass server-side validation; we never rely solely on client-side validation, as an attacker can bypass client-side JavaScript checks by posting to the web server directly. This helps prevent input injection attacks.
  • We use custom error pages to ensure no useful information about the website or application infrastructure is leaked to a malicious user.
  • We take steps to prevent cross site scripting attacks, such as escaping user inputs that are displayed back on screen.
  • We highly recommend installing an SSL certificate to encrypt communications between the user's desktop or mobile browser and our web server via HTTPS, so their details are not sent over the internet in plain text, preventing eavesdropping attacks. This also avoids the "Not Secure" warning that the Google Chrome browser has displayed since July 2018 on websites not served over HTTPS.
  • As part of setting up HTTPS for websites, we create redirects from HTTP to HTTPS, and add HTTP Strict Transport Security (HSTS) headers to declare that web browsers should only interact with the site over secure HTTPS connections, never via the insecure HTTP protocol, to prevent "Man In The Middle" (MITM) attacks.
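Two of the practices above, salted one-way password hashing and parameterised database queries, can be sketched in a few lines of Python. This is an illustration only, using Python's standard library; the function and table names here are hypothetical, not taken from any actual website's codebase:

```python
import hashlib
import os
import sqlite3

# One-way, salted password hashing (sketch). A random salt means identical
# passwords do not produce identical stored hashes, and the high PBKDF2
# iteration count slows down brute-force guessing. Only the salt and the
# hash are stored; the plain-text password never is.
def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

# Parameterised query (sketch, using an in-memory SQLite table called
# "users" for illustration). The "?" placeholders keep user-supplied values
# out of the SQL text itself, which prevents SQL injection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, salt BLOB, hash BLOB)")
salt, digest = hash_password("correct horse battery staple")
conn.execute(
    "INSERT INTO users VALUES (?, ?, ?)", ("user@example.com", salt, digest)
)
row = conn.execute(
    "SELECT email FROM users WHERE email = ?", ("user@example.com",)
).fetchone()
```

At login time, the stored salt is combined with the submitted password and re-hashed; the result is compared to the stored hash, so the original password is never needed in plain text.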

For more information on how Amazon secures their data centres, please see:
https://aws.amazon.com/compliance/data-center/