Deploy Nginx Vhosts With Salt

one vhost is never enough

Prerequisites:

  • An nginx reverse proxy (managed by Salt)
  • At least two websites / apps you want to serve through the reverse proxy

Introduction

Okay, this blogging thing is kinda cool. So here's one more just before the weekend. In my personal environment I have some VMs, a hypervisor, a NAS and k8s deployments which I want to be reachable. I used to do this manually in Apache with 301 redirects. But that was in an age when Ben Hur was still alive and kicking.

I built a similar configuration at a customer with Ansible, and figured I wanted to have that at home too, because it can save you a lot of time.

Basically what it does is build a config based on several Jinja2 statements and pillar data (variables). In addition, the VM also handles the renewal of my Let's Encrypt certificate (this will be covered in a different blog post).

Pillar data:

In my pillar data I have stored variables and booleans, like in the example below for nginx itself:

def_nginx_siteconf_dir_avail: '/etc/nginx/sites-available'
def_nginx_siteconf_dir_enabl: '/etc/nginx/sites-enabled'

web_ports:
  - '80'
  - '443'

web_services:
  - 'http'
  - 'https'

nginx_packages:
  - "nginx"
  - "nginx-full"
  - "nginx-common"

fileloc_pub: /etc/letsencrypt/live/domain.net
chainfile: fullchain.pem
certfile: cert.pem
keyfile: privkey.pem

In addition, this is the pillar data I used to build the various vhosts:

nginx_vhosts:
  portainer:
    name: portainer
    varprot: 'https'
    varport: '443'
    local: true
    upstream: true
    ipaddress: 'portainer.domain.local'
  proxmox:
    name: proxmox
    varprot: 'https'
    varport: '8006'
    local: true
    ipaddress: '8.8.8.8'
    serveralias: 
      - hypervisor
  heimdall:
    name: heimdall
    varprot: 'https'
    varport: '443'
    local: false
    upstream: true
    ipaddress: 'heimdall.domain.local'
    serveralias:
      - start
      - startpagina

Please note that proxmox, for example, has no upstream while portainer and heimdall do. This is because proxmox is a physical machine and has no means to fail over / load balance.
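As a side note: pillar data only reaches the minion when it is mapped in the pillar top file. A minimal sketch, assuming the pillar sls is called nginx and the proxy's minion ID matches nginxproxy* (both names are made up for this example):

```yaml
# /srv/pillar/top.sls (paths and minion ID are assumptions)
base:
  'nginxproxy*':
    - nginx
```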

Salt state for nginx

I have one state file to deploy everything. I like to use if statements and for loops. Sometimes it becomes too complex, but this is a hobby after all ;). Mostly I do tend to stick to the KISS principle.

My file / folder structure is as follows:

/srv/salt/
└── nginx/
    ├── templates/
    │   ├── default.j2
    │   ├── nginx.j2
    │   └── vhost.j2
    └── init.sls

I will mainly focus on deploying the vhosts. And instead of just posting the entire state here at once, I will explain it section by section.

Below you will see that I'm setting multiple variables. As you will see, this can also be done in other ways, like with pillar.get or just stating pillar['var'] everywhere when applicable. However, for overview purposes I like this method more.

{%- set domain = pillar['domain'] %}
{%- set localdomain = pillar['localdomain'] %}
{%- set ssl_pub_loc = pillar['fileloc_pub'] %}
{%- set ssl_chain_file = pillar['chainfile'] %}
{%- set ssl_priv_key = pillar['keyfile'] -%}

These are basically the global vars which can be applied to both the .local and .net domains. I think the variables speak for themselves.
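As mentioned, pillar.get is the alternative; it also lets you provide a fallback when a key is missing. A quick sketch of the same assignments in that style (the default values here are made-up examples):

```jinja
{%- set domain = salt['pillar.get']('domain', 'example.net') %}
{%- set ssl_pub_loc = salt['pillar.get']('fileloc_pub', '/etc/letsencrypt/live/example.net') %}
```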

Next up: installing nginx and certbot. In the first state, all packages from the nginx_packages pillar data will be installed. After that we create a custom log dir with proper permissions, and then certbot will be installed. Certbot is used for Let's Encrypt certificates.

install_nginx:
  pkg.installed:
    - pkgs: {{ pillar['nginx_packages'] }}

create_nginx_log_dir:
  file.directory:
    - name: /etc/nginx/logs
    - user: root
    - group: root
    - dir_mode: '0755'
    - file_mode: '0644'

{%- if grains['os_family'] == 'Debian' %}
install_certbot:
  pkg.installed:
    - pkgs:
      - certbot
      - python3-certbot-dns-cloudflare
{%- endif %}

Placing the default and nginx config: First the default config will be placed. This makes sure that if an endpoint fails, requests fall back to that default vhost. After that, an nginx.conf will be placed. Nothing special here, just making sure the file exists.
The third state is nice. This could also be done per vhost, but I liked to clean out the entire directory each time (for now), to make sure no custom or old configs stay behind.

# place default config
default:
  file.managed:
    - name: {{ pillar['def_nginx_siteconf_dir_enabl'] }}/default
    - user: www-data
    - group: www-data
    - mode: '0664'
    - source: salt://nginx/templates/default.j2
    - template: jinja
    - context:
      ssl_pub_loc: {{ ssl_pub_loc }}
      ssl_chain_file: {{ ssl_chain_file }}
      ssl_priv_key: {{ ssl_priv_key }}
    - require:
      - /etc/nginx/sites-enabled/

# place nginx config
nginx.conf:
  file.managed:
    - name: /etc/nginx/nginx.conf
    - user: www-data
    - group: www-data
    - mode: '0664'
    - source: salt://nginx/templates/nginx.j2
    - template: jinja
      
/etc/nginx/sites-enabled/:
  file.directory:
    - clean: True

Note: the default template is placed at the bottom of this blog.

Deploying .net domains:

This section starts with a for loop that iterates over our nginx_vhosts pillar data; each entry's key becomes name and its value becomes vhost, so we can use them as variables. Then look at vhost.get('serveralias'): per vhost we check whether it has a serveralias entry, and if so we store it in a custom var. This (as we will see in the vhost.j2 template) adds the server aliases to the vhost config.
For each entry found in the for loop an {{ name }}.conf will be created. Via file.managed all variables are passed as context for use in the template.

{# .net domains #}
{% for name, vhost in salt['pillar.get']('nginx_vhosts', {}).items() %}
  {% set serveraliasvar = vhost.get('serveralias') or [] %}
{{ name }}.conf:
  file.managed:
    - name: {{ pillar['def_nginx_siteconf_dir_enabl'] }}/{{ name }}.conf
    - user: www-data
    - group: www-data
    - mode: '0664'
    - source: salt://nginx/templates/vhost.j2
    - template: jinja
    - context:
      vhostname: {{ vhost.get('name', {}) }}
      protocol: {{ vhost.get('varprot', {}) }}
      port: {{ vhost.get('varport', {}) }}
      enabled: {{ vhost.get('varenabled', {}) }}
      ipaddress: {{ vhost.get('ipaddress', {}) }}
      upstream: {{ vhost.get('upstream', {}) }}
      domain: {{ domain }}
      serveraliasj2: {{ serveraliasvar }}
      ssl_pub_loc: {{ ssl_pub_loc }}
      ssl_chain_file: {{ ssl_chain_file }}
      ssl_priv_key: {{ ssl_priv_key }}
    - listen_in:
      - service: restart_nginx
    - require:
      - /etc/nginx/sites-enabled/
{% endfor %}

Deploying .local domains

Basically the same as .net, but without SSL and/or SSL certificates.

{# .local domains #}
{% for name, vhost in salt['pillar.get']('nginx_vhosts', {}).items() %}
{% if vhost.get('local') %}
  {% set serveraliasvar = vhost.get('serveralias') or [] %}
{{ name }}.local.conf:
  file.managed:
    - name: {{ pillar['def_nginx_siteconf_dir_enabl'] }}/{{ name }}.local.conf
    - user: www-data
    - group: www-data
    - mode: '0664'
    - source: salt://nginx/templates/vhost.j2
    - template: jinja
    - context:
      vhostname: {{ vhost.get('name', {}) }}
      protocol: {{ vhost.get('varprot', {}) }}
      port: {{ vhost.get('varport', {}) }}
      local: {{ vhost.get('local', {}) }}
      ipaddress: {{ vhost.get('ipaddress', {}) }}
      domain: {{ localdomain }}
      serveraliasj2: {{ serveraliasvar }}
    - listen_in:
      - service: restart_nginx
    - require:
      - /etc/nginx/sites-enabled/
{% endif %}
{% endfor %}

At the end of this file a state is defined to restart the nginx service when necessary:

restart_nginx:
  service.running:
    - name: nginx
    - enable: true
    - listen:
      - file: /etc/nginx/nginx.conf

Please note: the entire nginx state is posted below.

Breakdown of the vhost.j2 template:

Placing the serveralias with a macro
A proud piece of work, I must say. This took me a while to figure out. It places the server aliases, with the correct hostname, after server_name in the vhost config.

# Managed by salt #

{% macro servername_VAR() -%}
{% if serveraliasj2|length %}
{{"       "}} server_name {{ vhostname }}.{{ domain }} 
{%- for alias_entry in serveraliasj2 -%}
{{" "+alias_entry }}.{{ domain }}
{%- endfor -%}
{{";"}}{# "place essential ;" #}
{% else %}
{{"       "}} server_name {{ vhostname }}.{{ domain }};
{% endif %}
{% endmacro %}
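To make the whitespace juggling concrete: with the heimdall entry from the pillar data above (aliases start and startpagina) and domain set to domain.net, the macro should render roughly as:

```nginx
        server_name heimdall.domain.net start.domain.net startpagina.domain.net;
```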

Custom access rules for inter-VLAN routing

I have several VLANs in my network, and not every device needs access to the infrastructure VLAN, for example. However, some do. Via these access rules I can allow certain devices, based on their IP, to pass through nginx, while others are denied.

{% macro accessrules() -%}
{% if '192.168.2' in ipaddress %}
        allow 192.168.1.2; 
        allow 192.168.1.3;  
        allow 192.168.1.4;
        allow 192.168.1.5; 
        deny all;
{% endif %}
{% endmacro %}

The default upgrade map for nginx: this is necessary to properly upgrade a connection between a client and server from HTTP/1.1 to a websocket.

map $http_upgrade $connection_upgrade {
        default Upgrade;
        ''      close;
}

Defining a loadbalancer: when the upstream var is defined and equal to true, this is placed in the vhost's config file.

{% if upstream is defined and upstream is sameas true %}
{# use the vhostname var here to prevent duplicate errors #}
upstream backend-{{ vhostname }} {
        server perseus.domain.local:{{ port }} weight=5;
        server theseus.domain.local:{{ port }};
}
{% endif %}
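For example, for the portainer vhost (upstream: true, varport: '443') this block renders roughly as:

```nginx
upstream backend-portainer {
        server perseus.domain.local:443 weight=5;
        server theseus.domain.local:443;
}
```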

The https/443 section of the config. This is the main body of the template. A lot is happening here:

  • It checks on several occasions whether local is defined and what boolean value it returns
  • A lot of the variables passed in by the Deploying <x> domains states are placed here.
    Note that most variables without {{ }} are nginx's own variables, e.g. $host
  • Of course there is an upstream true/false check
  • A lot of checks also depend on vhostname, since not every endpoint requires the same functionality
{% if local is not defined or local is sameas false %}
server {
        # SSL configuration
        listen 443 ssl http2;
        {{ accessrules() }}
        {{ servername_VAR() }}
        
        client_max_body_size 520M;

        # 443 logging
        error_log /etc/nginx/logs/{{ vhostname }}_error_443.log warn;
        access_log /etc/nginx/logs/{{ vhostname }}_access_443.log;
        {% if vhostname == '<somevhost>' %}
        location /cams/ {
                proxy_read_timeout 120s;
                access_log off;
        {% else %}
        location / {
                proxy_read_timeout 900s;
        {% endif %}
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header X-Forwarded-Port $server_port;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection $connection_upgrade;
                proxy_set_header X-Real-IP $remote_addr;
                
                proxy_set_header X-Forwarded-Host $host;
                proxy_set_header X-Forwarded-Server $proxy_add_x_forwarded_for;
                
                proxy_pass_request_headers on;
                add_header X-location websocket always;
                
                {% if upstream is defined and upstream is sameas true %}
                proxy_pass {{ protocol }}://backend-{{ vhostname }};
                {% else %}
                proxy_pass {{ protocol }}://{{ ipaddress }}:{{ port }};
                {% endif %}

                proxy_ssl_verify off;
                #proxy_buffering off;
                {% if vhostname != '<somevhost>' %}
                fastcgi_buffers 16 16k; 
                fastcgi_buffer_size 32k;
                {% endif %}
                proxy_buffering         on;
                proxy_buffer_size       128k;
                proxy_buffers           4 256k;
                proxy_busy_buffers_size 256k;
        }

        ssl_certificate      {{ ssl_pub_loc }}/{{ ssl_chain_file }};
        ssl_certificate_key  {{ ssl_pub_loc }}/{{ ssl_priv_key }};
        
        #ssl_protocols TLSv1.2 TLSv1.3;
        ssl_prefer_server_ciphers on;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
        ssl_ecdh_curve secp384r1; # Requires nginx >= 1.1.0
        ssl_session_cache shared:SSL:10m;
        ssl_session_tickets off; # Requires nginx >= 1.5.9

        add_header X-Frame-Options SAMEORIGIN;
        add_header X-Content-Type-Options nosniff;
}
{% endif %}

The http/80 section of the template. In truth this is mostly here for the (301) redirect to https. However, there are specific use cases where SSL is not used or not available.

  • Several if statements check whether local is defined, and whether the protocol is https or not.
server {
        listen 80;
        {{ accessrules() }}
        {{ servername_VAR() }}

        # this is not needed in http
        # fastcgi_buffers 16 16k; 
        # fastcgi_buffer_size 32k;

{% if local is not defined or local is sameas false %}
        # redirect to https
        location / {
                return 301 https://$host$request_uri;
        }
{% elif local is defined and local is sameas true and protocol != 'https' %}
        location / {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_pass {{ protocol }}://{{ ipaddress }}:{{ port }};
        }
{% elif local is defined and local is sameas true and protocol == 'https' %}
        # redirect to https
        location / {
                return 301 {{ protocol }}://{{ ipaddress }}:{{ port }};
        }
{% endif %}
        {% if vhostname != '<somevhost>' %}
        #logging:
        error_log /etc/nginx/logs/{{ vhostname }}_error.log warn;
        access_log /etc/nginx/logs/{{ vhostname }}_access.log;
        {% endif %}
}

In the 443 and 80 sections log rules are placed. This simplifies debugging for me.
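As a worked example of the whole template: the proxmox entry (local: true, varprot: 'https', varport: '8006', alias hypervisor) skips the 443 server block entirely and, rendered against the .local domain, should produce roughly:

```nginx
server {
        listen 80;
        server_name proxmox.domain.local hypervisor.domain.local;

        # local vhost with an https backend: redirect straight to it
        location / {
                return 301 https://8.8.8.8:8006;
        }

        #logging:
        error_log /etc/nginx/logs/proxmox_error.log warn;
        access_log /etc/nginx/logs/proxmox_access.log;
}
```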

DNS configuration

Without DNS the websites won't be reachable, so let's configure that too.

Let’s say we want to reach portainer:

$ORIGIN .
$TTL 86400      ; 1 day
domain.net              IN SOA  ns1.domain.net. hostmaster.domain.net. (
                                2021102901 ; serial
                                300	   ; refresh (5 minutes)
                                600        ; retry (10 minutes)
                                604800     ; expire (1 week)
                                86400      ; minimum (1 day)
                                )
                        NS      ns1.domain.net.
                        NS      ns2.domain.net.
$ORIGIN domain.net.
ns1                     A       192.168.2.4
ns2                     A       192.168.2.5

; nginx proxy:
nginx                   A       192.168.1.104

; apps / websites
portainer  	            CNAME   nginx

; nginx CNAMES
proxy                   CNAME   nginx

First, DNS resolves portainer.domain.net, which points to our nginx server. The nginx server then knows, via the proxy_pass line, where the request should be forwarded to.

Configuration files:

Nginx state init.sls:

The complete Salt state for nginx vhost deployments, nginx/init.sls:

{%- set domain = pillar['domain'] %}
{%- set localdomain = pillar['localdomain'] %}
{%- set ssl_pub_loc = pillar['fileloc_pub'] %}
{%- set ssl_chain_file = pillar['chainfile'] %}
{%- set ssl_priv_key = pillar['keyfile'] -%}

install_nginx:
  pkg.installed:
    - pkgs: {{ pillar['nginx_packages'] }}

create_nginx_log_dir:
  file.directory:
    - name: /etc/nginx/logs
    - user: root
    - group: root
    - dir_mode: '0755'
    - file_mode: '0644'

{%- if grains['os_family'] == 'Debian' %}
install_certbot:
  pkg.installed:
    - pkgs:
      - certbot
      - python3-certbot-dns-cloudflare
{%- endif %}

# place default config
default:
  file.managed:
    - name: {{ pillar['def_nginx_siteconf_dir_enabl'] }}/default
    - user: www-data
    - group: www-data
    - mode: '0664'
    - source: salt://nginx/templates/default.j2
    - template: jinja
    - context:
      ssl_pub_loc: {{ ssl_pub_loc }}
      ssl_chain_file: {{ ssl_chain_file }}
      ssl_priv_key: {{ ssl_priv_key }}
    - require:
      - /etc/nginx/sites-enabled/

# place nginx config
nginx.conf:
  file.managed:
    - name: /etc/nginx/nginx.conf
    - user: www-data
    - group: www-data
    - mode: '0664'
    - source: salt://nginx/templates/nginx.j2
    - template: jinja
      
/etc/nginx/sites-enabled/:
  file.directory:
    - clean: True

{# .net domains #}
{% for name, vhost in salt['pillar.get']('nginx_vhosts', {}).items() %}
  {% set serveraliasvar = vhost.get('serveralias') or [] %}
{{ name }}.conf:
  file.managed:
    - name: {{ pillar['def_nginx_siteconf_dir_enabl'] }}/{{ name }}.conf
    - user: www-data
    - group: www-data
    - mode: '0664'
    - source: salt://nginx/templates/vhost.j2
    - template: jinja
    - context:
      vhostname: {{ vhost.get('name', {}) }}
      protocol: {{ vhost.get('varprot', {}) }}
      port: {{ vhost.get('varport', {}) }}
      enabled: {{ vhost.get('varenabled', {}) }}
      ipaddress: {{ vhost.get('ipaddress', {}) }}
      upstream: {{ vhost.get('upstream', {}) }}
      domain: {{ domain }}
      serveraliasj2: {{ serveraliasvar }}
      ssl_pub_loc: {{ ssl_pub_loc }}
      ssl_chain_file: {{ ssl_chain_file }}
      ssl_priv_key: {{ ssl_priv_key }}
    - listen_in:
      - service: restart_nginx
    - require:
      - /etc/nginx/sites-enabled/
{% endfor %}

{# .local domains #}
{% for name, vhost in salt['pillar.get']('nginx_vhosts', {}).items() %}
{% if vhost.get('local') %}
  {% set serveraliasvar = vhost.get('serveralias') or [] %}
{{ name }}.local.conf:
  file.managed:
    - name: {{ pillar['def_nginx_siteconf_dir_enabl'] }}/{{ name }}.local.conf
    - user: www-data
    - group: www-data
    - mode: '0664'
    - source: salt://nginx/templates/vhost.j2
    - template: jinja
    - context:
      vhostname: {{ vhost.get('name', {}) }}
      protocol: {{ vhost.get('varprot', {}) }}
      port: {{ vhost.get('varport', {}) }}
      local: {{ vhost.get('local', {}) }}
      ipaddress: {{ vhost.get('ipaddress', {}) }}
      domain: {{ localdomain }}
      serveraliasj2: {{ serveraliasvar }}
    - listen_in:
      - service: restart_nginx
    - require:
      - /etc/nginx/sites-enabled/
{% endif %}
{% endfor %}

restart_nginx:
  service.running:
    - name: nginx
    - enable: true
    - listen:
      - file: /etc/nginx/nginx.conf

Nginx vhost template

The complete template used to generate the vhosts, nginx/templates/vhost.j2:

# Managed by salt #

{% macro servername_VAR() -%}
{% if serveraliasj2|length %}
{{"       "}} server_name {{ vhostname }}.{{ domain }} 
{%- for alias_entry in serveraliasj2 -%}
{{" "+alias_entry }}.{{ domain }}
{%- endfor -%}
{{";"}}{# "place essential ;" #}
{% else %}
{{"       "}} server_name {{ vhostname }}.{{ domain }};
{% endif %}
{% endmacro %}

{% macro accessrules() -%}
{% if '192.168.2' in ipaddress %}
        allow 192.168.1.2; 
        allow 192.168.1.3; 
        allow 192.168.1.4; 
        allow 192.168.1.5; 
        deny all;
{% endif %}
{% endmacro %}

map $http_upgrade $connection_upgrade {
        default Upgrade;
        ''      close;
}

{% if upstream is defined and upstream is sameas true %}
{# use the vhostname var here to prevent duplicate errors #}
upstream backend-{{ vhostname }} {
        server perseus.domain.local:{{ port }} weight=5;
        server theseus.domain.local:{{ port }};
}
{% endif %}

{% if local is not defined or local is sameas false %}
server {
        # SSL configuration
        listen 443 ssl http2;
        {{ accessrules() }}
        {{ servername_VAR() }}
        
        client_max_body_size 520M;

        # 443 logging
        error_log /etc/nginx/logs/{{ vhostname }}_error_443.log warn;
        access_log /etc/nginx/logs/{{ vhostname }}_access_443.log;
        {% if vhostname == '<somevhost>' %}
        location /cams/ {
                proxy_read_timeout 120s;
                access_log off;
        {% else %}
        location / {
                proxy_read_timeout 900s;
        {% endif %}
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header X-Forwarded-Port $server_port;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection $connection_upgrade;
                proxy_set_header X-Real-IP $remote_addr;
                
                proxy_set_header X-Forwarded-Host $host;
                proxy_set_header X-Forwarded-Server $proxy_add_x_forwarded_for;
                
                proxy_pass_request_headers on;
                add_header X-location websocket always;
                
                {% if upstream is defined and upstream is sameas true %}
                proxy_pass {{ protocol }}://backend-{{ vhostname }};
                {% else %}
                proxy_pass {{ protocol }}://{{ ipaddress }}:{{ port }};
                {% endif %}

                proxy_ssl_verify off;
                #proxy_buffering off;
                {% if vhostname != '<somevhost>' %}
                fastcgi_buffers 16 16k; 
                fastcgi_buffer_size 32k;
                {% endif %}
                proxy_buffering         on;
                proxy_buffer_size       128k;
                proxy_buffers           4 256k;
                proxy_busy_buffers_size 256k;
        }

        ssl_certificate      {{ ssl_pub_loc }}/{{ ssl_chain_file }};
        ssl_certificate_key  {{ ssl_pub_loc }}/{{ ssl_priv_key }};
        
        #ssl_protocols TLSv1.2 TLSv1.3;
        ssl_prefer_server_ciphers on;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
        ssl_ecdh_curve secp384r1; # Requires nginx >= 1.1.0
        ssl_session_cache shared:SSL:10m;
        ssl_session_tickets off; # Requires nginx >= 1.5.9

        add_header X-Frame-Options SAMEORIGIN;
        add_header X-Content-Type-Options nosniff;
}
{% endif %}

server {
        listen 80;
        {{ accessrules() }}
        {{ servername_VAR() }}

        # this is not needed in http
        # fastcgi_buffers 16 16k; 
        # fastcgi_buffer_size 32k;

{% if local is not defined or local is sameas false %}
        # redirect to https
        location / {
                return 301 https://$host$request_uri;
        }
{% elif local is defined and local is sameas true and protocol != 'https' %}
        location / {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_pass {{ protocol }}://{{ ipaddress }}:{{ port }};
        }
{% elif local is defined and local is sameas true and protocol == 'https' %}
        # redirect to https
        location / {
                return 301 {{ protocol }}://{{ ipaddress }}:{{ port }};
        }
{% endif %}
        {% if vhostname != '<somevhost>' %}
        #logging:
        error_log /etc/nginx/logs/{{ vhostname }}_error.log warn;
        access_log /etc/nginx/logs/{{ vhostname }}_access.log;
        {% endif %}
}

Default vhost template (nginx/templates/default.j2):

# Managed by salt #

server {
        listen 80 default_server;
        listen 443 ssl default_server; 

        server_name _;

        #logging:
        error_log /etc/nginx/logs/default_error.log warn;
        access_log /etc/nginx/logs/default_access.log;

        # redirect to https
        return 301 https://www.ecosia.org;
        
        ssl_certificate      {{ ssl_pub_loc }}/{{ ssl_chain_file }};
        ssl_certificate_key  {{ ssl_pub_loc }}/{{ ssl_priv_key }};
        
        #ssl_protocols TLSv1.2 TLSv1.3;
        ssl_prefer_server_ciphers on;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
        ssl_ecdh_curve secp384r1; # Requires nginx >= 1.1.0
        ssl_session_cache shared:SSL:10m;
        ssl_session_tickets off; # Requires nginx >= 1.5.9

        add_header X-Frame-Options SAMEORIGIN;
        add_header X-Content-Type-Options nosniff;
}