Introduction

Ghost is an open source content management system and publishing platform that is taking on the likes of WordPress and quickly making a name for itself. In this tutorial, we will look at how to get the most out of this CMS and supercharge its performance by pairing it with StackPath SecureCDN.

It has been a busy couple of months at StackPath. MaxCDN joined StackPath, and since then we have deployed over two dozen new points of presence (PoPs) and are aggressively expanding our footprint. We recently surpassed two terabits per second of capacity and push hundreds of gigabits per second at the speed of light.

 

I will be demonstrating full-site acceleration with StackPath using a 2 GB DigitalOcean droplet. This tutorial has five sections:

  1. StackPath SecureCDN Setup
  2. Kernel sysctl Tuning
  3. Configuration of ulimits
  4. NGINX Configuration
  5. Performance Results

This article shows you how to set up your Ghost blog with StackPath SecureCDN. After completing the steps in this article, you will have a fast, secure platform for your Ghost installation. StackPath offers a 15-day free trial.

Prerequisites


Before getting set up on StackPath, you should have a basic understanding of the following:

  1. Common file editors such as vi, vim, or nano
  2. Connecting to a server over SSH
  3. NGINX (installed)
  4. Ghost (Node.js and npm, installed)

StackPath SecureCDN Setup

  1. Initial Creation of Site
    Step_1_Initial_Create_Site.png
  2. With StackPath SecureCDN you can use either:

    Full-Site Acceleration - This option is required if you want to use our Web Application Firewall (WAF).

    Assets Only - This is the most common way of implementing a CDN, but it only serves your static content with high availability and high performance - it does not include the protection of StackPath's machine learning platform.
    Step_2_Create_Site.png

  3. Domain Configuration
    Step_3_Enter_Domain.png

  4. Final Details
    Step_4_Final_Details.png

    On this step, you can define whether the origin pull uses HTTP or HTTPS. Your requests are routed to the WAF of your choosing; you will want to choose the WAF closest to your origin.
  5. Configure DNS
    Step_5_Configure_DNS.png

    You will be provided a unique DNS entry for your SecureCDN site. Navigate to your DNS provider and create a CNAME or ANAME record for, in this case, example-domain.com. You can read more about DNS here, and you can find detailed guides here.

  6. Review CDN Settings - To get Ghost set up on the CDN, we need to change a few settings.

Cache Control Header

We will need to change the Cache Control Header setting from No Override to a minimum of one day. Out of the box, Ghost sends Cache-Control: public, max-age=0. Later in this tutorial we will hide this header, so the origin returns no Cache-Control header and the CDN tells the browser how long to cache each file.

Origin Server Header Response:
HTTP/1.1 200 OK
Server: nginx/1.11.5
Date: Wed, 14 Dec 2016 23:25:40 GMT
Content-Type: image/jpeg
Content-Length: 285348
Connection: keep-alive
X-Powered-By: Express
Last-Modified: Mon, 07 Nov 2016 19:19:26 GMT
ETag: W/"45aa4-158403b201e"
X-Cache: HIT
Accept-Ranges: bytes
CDN Header Response:
HTTP/1.1 200 OK
Date: Wed, 14 Dec 2016 23:25:56 GMT
Content-Type: image/jpeg
Content-Length: 285348
X-Powered-By: Express
Last-Modified: Mon, 07 Nov 2016 19:19:26 GMT
ETag: W/"45aa4-158403b201e"
Server: NetDNA-cache/2.2
Expires: Thu, 15 Dec 2016 23:25:56 GMT
Cache-Control: max-age=86400
X-Cache: HIT
Connection: keep-alive

Gzip Compression

Leverage StackPath to do on-the-fly gzip on the edge as it delivers to your end-users. 

Add XFF Header

By enabling this option, you can identify the originating IP address of the client connecting to your web server through our service, via the X-Forwarded-For header.

Steps for Kernel Sysctl Tuning 


Before going into the NGINX configuration file, we need to make some modifications to the kernel sysctl on the machine. I am using the following OS, kernel, and NGINX version. Note, I am using NGINX Plus, but this guide will apply if you are running the open source version of NGINX as well.

 

Operating System: Ubuntu 16.04.1 LTS
Kernel: 4.4.0-53-generic
NGINX: nginx/1.11.5 (nginx-plus-r11)

Window Buffer - Global

net.core.wmem_max=12582912
net.core.rmem_max=12582912

By changing the above values, you will be changing the maximum window buffer for all sockets. net.core means that this buffer is applied to all protocols (TCP, UDP, etc.). We will explicitly state the values for TCP below. The default value of wmem_max and rmem_max is ~128 KB in most Linux distributions. You can verify what yours is by running: cat /proc/sys/net/core/{rmem_max,wmem_max}

Note: By adjusting these numbers, there will be an increase of memory usage on the machine.
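Before editing anything, it can be useful to see how far your current defaults are from these targets. A read-only sketch (the 12582912-byte target is simply the value used above):

```shell
#!/bin/sh
# Read-only check: compare the current global socket buffer maxima
# against the 12582912-byte values used in this tutorial.
target=12582912
for f in rmem_max wmem_max; do
  cur=$(cat /proc/sys/net/core/$f)
  echo "net.core.$f = $cur (tutorial target: $target)"
done
```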

Receive Buffer - TCP

net.ipv4.tcp_rmem= 10240 87380 12582912
net.ipv4.tcp_wmem= 10240 87380 12582912

By modifying net.ipv4.tcp_rmem and net.ipv4.tcp_wmem, you are telling the kernel the minimum/default/maximum receive and send buffer sizes for each TCP connection.

net.ipv4.tcp_rmem = 10240  87380  12582912
                      |      |       |
                      |      |       +-- Maximum Receive Buffer
                      |      +---------- Default Receive Buffer
                      +----------------- Minimum Receive Buffer
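You can read the live values straight out of /proc. A read-only sketch that labels the three fields:

```shell
#!/bin/sh
# Print the per-connection TCP receive buffer limits with labels.
# /proc/sys/net/ipv4/tcp_rmem holds: min, default, max (bytes).
read -r min def max < /proc/sys/net/ipv4/tcp_rmem
echo "min=$min default=$def max=$max"
```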

Packet Queueing

net.core.netdev_max_backlog = 5000

This setting changes the maximum number of incoming packets that can be held in the input queue/backlog when an interface receives packets faster than the kernel can process them.
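To see whether this queue is actually overflowing on your machine, check the second column of /proc/net/softnet_stat, which counts packets dropped because the backlog was full (one hex-encoded row per CPU). A read-only sketch:

```shell
#!/bin/sh
# Column 2 of /proc/net/softnet_stat is the per-CPU count of packets
# dropped because the input backlog queue was full (hex-encoded).
cpu=0
while read -r line; do
  set -- $line
  printf 'cpu%d dropped=%d\n' "$cpu" "0x$2"
  cpu=$((cpu + 1))
done < /proc/net/softnet_stat
```

Non-zero drop counts are a signal that raising net.core.netdev_max_backlog may help.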

ICMP

net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1

Modifying the ICMP settings is not performance related, but it protects you from ICMP floods and from bogus error responses sent in reply to broadcast frames.

Syn

net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 3

SYN floods are fairly common these days. While you are protected at the edge with StackPath, if a perpetrator discovers your origin directly, a flood can still harm your system. You can learn more about SYN floods, as well as volumetric and UDP flood attacks, here.
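If you suspect your origin is being hit directly, you can count half-open connections without any extra tools; state 03 in /proc/net/tcp is SYN_RECV. A read-only sketch:

```shell
#!/bin/sh
# Count TCP sockets in SYN_RECV (state 03 in /proc/net/tcp).
# A sudden, sustained spike here is a classic sign of a SYN flood.
count=$(awk 'NR > 1 && $4 == "03"' /proc/net/tcp | wc -l)
echo "half-open connections: $count"
```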

Window Scaling

net.ipv4.tcp_window_scaling = 1

By enabling TCP Window Scaling, which is explained in RFC1323, you will have an opportunity to push more throughput. While probably not necessary, if you have assets which can't be cached by the CDN or are very dynamic and are expecting a high throughput, this will be helpful.

Maximum Open Files

fs.file-max = 65536

To determine the maximum number of file handles for the entire system, run cat /proc/sys/fs/file-max.

Your server's kernel dynamically allocates file handles each time an application requests one. When an application releases a handle, the kernel does not free it; instead, it keeps it around for reuse. Thus, over time, the total number of allocated file handles keeps growing even though the number of handles actually in use may be low.
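You can watch this in practice via /proc/sys/fs/file-nr, which reports three numbers: allocated handles, allocated-but-unused handles, and the system maximum. A read-only sketch:

```shell
#!/bin/sh
# /proc/sys/fs/file-nr: allocated, allocated-but-unused, system max.
read -r allocated unused max < /proc/sys/fs/file-nr
echo "allocated=$allocated unused=$unused max=$max"
```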

Max Connections

net.core.somaxconn = 65536

This value, which defaults to 128 on most systems, caps the listen backlog. With a larger backlog, a connecting client may see a slower connection under heavy load, but that is better than a Connection Refused error once pending connections exceed the default.

Time Wait

net.ipv4.tcp_max_tw_buckets = 1440000

This sets the maximum number of TIME_WAIT sockets held by the system. If the number of TIME_WAIT sockets exceeds net.ipv4.tcp_max_tw_buckets, new ones are destroyed immediately and a warning is printed to the log.
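To see how close you are to the ceiling, you can count TIME_WAIT sockets directly (state 06 in /proc/net/tcp). A read-only sketch:

```shell
#!/bin/sh
# Count TCP sockets in TIME_WAIT (state 06 in /proc/net/tcp) and
# show the configured ceiling for comparison.
tw=$(awk 'NR > 1 && $4 == "06"' /proc/net/tcp | wc -l)
limit=$(cat /proc/sys/net/ipv4/tcp_max_tw_buckets)
echo "TIME_WAIT sockets: $tw (limit: $limit)"
```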

IP Port Numbers

net.ipv4.ip_local_port_range = 1024 65000

This allows ports 1024 through 65000 to be used for outgoing connections. By default on RHEL systems, it is net.ipv4.ip_local_port_range = 32768 61000.
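The practical effect is on how many ephemeral ports are available for outgoing connections; this read-only sketch computes it from the live setting:

```shell
#!/bin/sh
# Compute how many ephemeral ports the current range provides.
read -r low high < /proc/sys/net/ipv4/ip_local_port_range
echo "ephemeral ports available: $((high - low + 1))"
```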

Here is the completed /etc/sysctl.conf:
####################################
###### Window Buffer - Global ######
####################################

net.core.wmem_max=12582912
net.core.rmem_max=12582912

####################################
##### Receive Buffer - TCP #########
####################################

net.ipv4.tcp_rmem= 10240 87380 12582912
net.ipv4.tcp_wmem= 10240 87380 12582912

####################################
########### Packet Queuing #########
####################################

net.core.netdev_max_backlog = 5000

####################################
############# ICMP #################
####################################

net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1

####################################
############# SYN #################
####################################

net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 3

####################################
######### Window Scaling ###########
####################################

net.ipv4.tcp_window_scaling = 1

####################################
########### Open Files #############
####################################

fs.file-max = 65536

####################################
######## Max Connections ###########
####################################

net.core.somaxconn = 65536

####################################
############ Time Wait #############
####################################

net.ipv4.tcp_max_tw_buckets = 1440000

####################################
######## IP Port Numbers ###########
####################################

net.ipv4.ip_local_port_range = 1024 65000

To make the changes take effect you will need to run:

sysctl -p
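After running sysctl -p, it is worth spot-checking a few of the new values directly from /proc (a read-only sketch; add any keys you care about):

```shell
#!/bin/sh
# Spot-check live kernel values after `sysctl -p`.
for key in net/core/somaxconn net/ipv4/tcp_window_scaling fs/file-max; do
  echo "$key = $(cat /proc/sys/$key)"
done
```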

ulimits


This section is simple. As we changed fs.file-max in sysctl, we need to modify the limits for the NGINX user. In my case, NGINX runs as www-data. We will raise the limits from the default of 2048 to a much higher number. You can verify which user yours is by running grep -m1 "user" /etc/nginx/nginx.conf | cut -d: -f2 | awk '{ print $2}' | egrep -o '^[^;]+'

/etc/security/limits.conf

www-data soft nofile 32768 
www-data hard nofile 65536
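After editing limits.conf, log out and back in (PAM applies the limits at session start), then verify the soft and hard open-file limits. A sketch:

```shell
#!/bin/sh
# Show the soft and hard open-file limits for the current shell.
# To check the NGINX user specifically, run this via:
#   sudo -u www-data sh -c 'ulimit -Sn; ulimit -Hn'
echo "soft nofile: $(ulimit -Sn)"
echo "hard nofile: $(ulimit -Hn)"
```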

NGINX Configuration


I have provided my /etc/nginx/nginx.conf configuration below, with comments on the usage of most of the parameters.

 

user  www-data;  
worker_processes  auto;

error_log  /var/log/nginx/error.log notice;  
pid        /var/run/nginx.pid;

######################################################
################ Events Section ######################
######################################################

events {
  multi_accept on;
  # Make sure the worker process accepts all new connections at a time.
  worker_connections 32768;
  use epoll;
  # Since we're using a Linux 2.6+ kernel, this is most efficient.
  accept_mutex on;
  # If accept_mutex is enabled, worker processes will accept new connections by turn.
  # Otherwise, all worker processes will be notified about new connections, and if the volume of
  # new connections is low, some of the worker processes may just waste system resources.
}

worker_rlimit_nofile 65536;
# This is the limit on the number of open files, as referenced in sysctl and limits earlier.
# Now, we are explicitly defining those limits.

http {
  upstream backend {
    zone backend 64k;
    server YOURSERVER; # Enter your server here.
  }

  server {
    location / {
      proxy_pass http://backend;
      health_check;
    }
  }

  ######################################################
  ################ Basic Settings ######################
  ######################################################

  sendfile on;
  # sendfile() is called with the SF_NODISKIO flag, which causes it not to block on disk I/O but,
  # instead, report back that the data are not in memory.
  # nginx then initiates an asynchronous data load by reading one byte.
  tcp_nopush on;
  # Use the TCP_NOPUSH socket option on FreeBSD or the TCP_CORK socket option on Linux.
  # The option is enabled only when sendfile is used. Enabling the option allows
  # sending the response header and the beginning of a file in one packet and,
  # on Linux and FreeBSD 4.*, sending a file in full packets.
  tcp_nodelay on;
  # Use the TCP_NODELAY option. The option is enabled only when a connection is
  # transitioned into the keep-alive state.
  types_hash_max_size 2048;
  # Sets the maximum size of the types hash tables. By default this is 1024.
  server_names_hash_bucket_size 128;
  # Because hostnames for us can be long, we need to allocate a greater amount
  # of memory for the server names.
  include /etc/nginx/mime.types;
  default_type application/octet-stream;

  ######################################################
  ##################### Caching ########################
  ######################################################

  proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=STATIC:100m inactive=24h max_size=2g;
  proxy_cache_key "$scheme$host$request_uri";

  ######################################################
  ################ Logging Format ######################
  ######################################################

  log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';
  access_log /var/log/nginx/access.log main;

  ######################################################
  ################ Keepalive Settings ##################
  ######################################################

  keepalive_timeout 65;
  # The first parameter sets a timeout during which a keep-alive client connection will stay open on the
  # server side. A zero value disables keep-alive client connections. The optional second parameter
  # sets a value in the “Keep-Alive: timeout=time” response header field. The two parameters may differ.
  keepalive_requests 100000;

  ######################################################
  ################ Open File Cache #####################
  ######################################################

  open_file_cache max=1000 inactive=20s;
  open_file_cache_valid 30s;
  open_file_cache_min_uses 5;
  open_file_cache_errors off;

  ######################################################
  ############# Compression Settings ###################
  ######################################################

  gzip on;
  gzip_disable "msie6";

  include /etc/nginx/sites-enabled/*;
  include /etc/nginx/conf.d/*.conf;
}
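Before reloading, always validate the configuration; nginx -t parses the full file and reports the first error it finds. A small sketch that degrades gracefully if NGINX is not on the machine you run it from:

```shell
#!/bin/sh
# Validate the NGINX configuration, then reload only if it parses cleanly.
if command -v nginx >/dev/null 2>&1; then
  nginx -t && nginx -s reload
else
  echo "nginx not found on this machine"
fi
```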

NGINX Reverse Proxy Configuration


I am using the following NGINX server configuration, with comments provided as necessary:

server {
  listen 80 default_server;

  server_name yourdomain.com; # Replace with your domain

  root /var/www/ghost;
  index index.html index.htm;

  client_max_body_size 128m;

  location / {
    proxy_pass http://local_nodejs;
    # By referencing an upstream, this makes it easier to scale if you add more web servers.

    #################################
    ######## Hide Headers ###########
    #################################

    proxy_hide_header Cache-Control;
    # We don't want to see the Cache-Control header

    #################################
    ######## Ignore Headers #########
    #################################

    proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
    proxy_ignore_headers Set-Cookie;

    #################################
    ######### Set Headers ###########
    #################################

    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Connection "";

    #################################
    ######### Add Headers ###########
    #################################

    add_header X-Cache $upstream_cache_status;
    # Add a HIT/MISS header

    #################################
    ########### Caching #############
    #################################

    proxy_cache STATIC;
    # I am using caching, where STATIC is the zone defined in nginx.conf.
    # Comment out if you don't use caching (you really should, though!)
    proxy_cache_valid any 1m;
    proxy_cache_valid 200 302 2h;
    proxy_cache_valid 404 1m;
    proxy_cache_revalidate on;
    proxy_cache_lock on;
    proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;

    #################################
    ### Load Balancing (Optional) ###
    #################################

    #proxy_http_version 1.1;
    # This is necessary if you will be using NGINX as the load balancer.

    #proxy_buffering off;
    # Enable if you would like the response to be passed to the client synchronously, as soon as it
    # is received; nginx will not try to read the whole response from the proxied server first.
  }

  #################################
  ###### Admin - Don't Cache ######
  #################################

  location ~ ^/(?:ghost|signout) {
    add_header Cache-Control "no-cache, private, no-store, must-revalidate, max-stale=0, post-check=0, pre-check=0";
    expires -1;
  }
}

upstream local_nodejs {
  zone local_nodejs 64k;
  server 127.0.0.1:2368;
}

Now that we have the NGINX configuration set up, let me take a step back and look at how our requests are handled with full-site acceleration:

 

supercharging-ghost-on-stackpath-securecdn-images-cold-cache.png

 

supercharging-ghost-on-stackpath-securecdn-images-warm-cache.png

 

With StackPath’s continuing investment in both network and hardware, we are continually raising the bar. As of now, each of our PoPs is deployed and set up to scale to 96 Tbps. Below are the results of our comparison between using Ghost without a CDN and with StackPath SecureCDN:

 

No CDN

stackpath-logo.png SecureCDN

Performance Breakdown

Measure                  | No CDN    | SecureCDN
-------------------------|-----------|---------------------------------
Avg. Response Time       | 256 ms    | 124 ms (51.5% reduction)
Min. Response Time       | 70 ms     | 2 ms
Max. Response Time       | 22,290 ms | 1,409 ms
Success (200) Responses  | 564,148   | 2,166,828 (284% more responses)
400/500 Responses        | 0 / 0     | 0 / 0
Timeouts                 | 11,401    | 0
Network Errors           | 1,658     | 0
Bandwidth Sent           | 72.23 MB  | 263.58 MB
Bandwidth Received       | 6.97 GB   | 27.03 GB

Conclusion

In a nutshell, Ghost on StackPath SecureCDN cut page load times by over 50%. Looking at it another way: it's twice as fast.

No matter the size of your website, leveraging a CDN is a good idea. At StackPath, we have packages as low as $0.02 per gigabyte of bandwidth, and you can start with a 15-day free trial that includes WAF and DDoS mitigation.
