
Issuing RDP Certificates using Vault

Vault, Terraform, Architecture · 5 min read

This started out simply. A client I was working with asked one day: "we're already using Vault for all of our certificate needs; can we get CA-signed certs instead of self-signed certs for RDP connections to your Windows machines?" My first thought was: why Vault?

Windows AD Certificate Services is simple to set up, and RDP templates are there out of the box. Auto-enrolment makes this a set-and-forget kind of operation.

But, understandably, some environments will want to drive down the number of Windows systems to administer, not increase it! So, sure, you can do it with Vault; the certificate requirements aren't too exotic, and it isn't as difficult as you may think.

Why is this no longer simple? Well, I decided to use this as an opportunity to deploy Vault from Terraform, in a somewhat more operations-ready manner than just running vault server -dev.

So I set out to build a lab environment that met a few criteria:

- Fully automated install / bootstrap of the Vault server and client,
- Certificates applied everywhere (no skip-TLS options),
- Aligned to good practice for running Vault on AWS.

This wasn't too bad in the end; the talented team from gruntwork.io have put a lot of effort into https://github.com/hashicorp/terraform-aws-vault, which contains Terraform and shell scripts to get you up and running with Vault on AWS.

This lab deviates a little from their original pattern (dropping Vault to a single node, removing Consul as the cluster backend), but it reuses much of their shell scripts, so it could be scaled back up if you're interested.

Before I dive into the code repo and the demo, a few concepts.

Why Vault?

Vault is designed to be a highly configurable, secure repository and factory of 'secure material' in your environment. Secure material could be static passwords or secrets, certificate generation, automatically rotated service account credentials or even providing an endpoint for 'Encryption as a Service'.

Servers and Clients

Vault has two components: server and client. The server role can be a single node or clustered for high availability. The client logs into the Vault server to perform tasks such as administering Vault itself, requesting secrets, encrypting data or requesting certificates.

The vault binary is all you need to get going; depending on the options provided, it either starts as a server or acts as a client. No external dependencies, yay!

Vault Server exposes all of its functionality using a website-like hierarchy to represent the functionality and data it is managing. For example, all requests in this lab will be targeted at the pki/ backend. The client makes requests to the URLs you specify, along with any required input.

Vault protects what are potentially very sensitive functions using a comprehensive authentication and authorisation model. Before a Vault client can interact with anything, it needs to log in. There are many different authentication methods that can be configured, but in this lab we are using AWS IAM login. This allows the credentials associated with the underlying host where the Vault client is running to be used to log in to the Vault server.

Once logged in, Vault then verifies what actions a user can perform through the roles assigned to them. Roles are attached to policies, which define what a client can do, and to what. The 'do' will generally be a mix of create, update and delete. The 'what' is defined as a path selecting which features of the backend can be manipulated, for example:

path "pki/sign/rdp-cert" {
  capabilities = ["read", "update"]
}

Certificate Authorities

The next piece of the puzzle I want to talk about is certificate authorities. Certificate Authorities are the backbone of internet security. At their simplest, they are the trusted parties who say who is and who isn't reputable in the digital world. This is underpinned by technology (public / private keypairs, X.509) and by process (auditing, industry best practice).

In this lab we will be creating a Certificate Authority, but because I am signing it myself, the 'process' part is going pretty much out of the window 😇. This lab will predominantly cover the technology side.

Two certificate authorities will be created here: one that Vault is a consumer of (the Root CA), and one that Vault controls (the Intermediate CA).

The one that Vault is a consumer of is created using the tls provider built into Terraform (https://github.com/hashicorp/terraform-aws-vault/tree/master/modules/private-tls-cert), and the second is created by activating the pki backend in Vault.

In a 'real' environment it is considered best practice that your Root Certificate Authority is outside of Vault with appropriate controls in place to protect it (physically offline and secure, MFA), then Vault operates an Intermediate Certificate Authority for day-to-day operation.
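As a rough outline of that best-practice split, using Vault's pki backend endpoints (the common name is a placeholder, and the signing step happens out-of-band on your offline Root CA, so this is a sketch rather than a runnable script):

```shell
# Enable a pki backend that will act as the Intermediate CA.
vault secrets enable pki

# Generate a keypair inside Vault and emit a CSR; the intermediate's
# private key never leaves Vault.
vault write -field=csr pki/intermediate/generate/internal \
  common_name="Example Intermediate CA" > intermediate.csr

# ...have the offline Root CA sign intermediate.csr out-of-band...

# Install the signed certificate back into Vault.
vault write pki/intermediate/set-signed certificate=@intermediate_signed.pem
```

In this lab we skip this split and generate the issuing CA directly inside Vault, which keeps the bootstrap fully automated.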

That's a wrap on the primer, onto how to tie it all together.

The Code Repository

Projects: Fluffy Clouds and Lines / Issuing Windows Certificates using Vault (GitLab.com)

WARNING - the AWS lab in this repository contains resources that are not in the AWS Free Tier; in particular, the t3a instance used for the Windows client (t2.micro for Windows is painfully slow!). You have been warned.

The repo has two main strands. client-scripts contains the client-side parts that do the RDP certificate request and loading; if you have an existing Vault deployment, these could easily be adapted to your needs.

demo_environment is a fully self-contained Terraform 0.12 project that:

- Stands up two EC2 instances: one Amazon Linux 2 (for the Vault server) and one Windows Server (for testing),
- Creates security group rules to allow communication between the instances and remote management (the remote_source_ip variable has to be defined when running terraform plan or apply),
- Executes user-data on both instances to bootstrap the lab.

├── client-scripts
│   ├── rdp-certificate.ini.tmpl
│   └── request_rdp_cert.ps1
├── demo_environment
│   ├── main.tf
│   ├── provider.tf
│   ├── terraform.tfstate
│   ├── terraform.tfstate.backup
│   ├── userdata.sh
│   ├── variables.tf
│   └── windows_userdata.ps1
└── README.md

Vault Server

On Vault server startup, the user-data will:

- Download Vault from HashiCorp, install it as a systemd service and configure it to use non-clustered, local storage,
- Create a self-signed CA and publish it via HTTP,
- Initialise and unseal the Vault,
- Set up the pki backend, policies and roles,
- Activate AWS IAM authentication.

Once user-data completes, the server is ready for use. It is not 100% suitable for a production environment, as the bootstrap scripts capture the unseal keys to disk to allow initial configuration. This is bad practice; do as I say, not as I do!

The self-signed Root CA is the first part of our configuration process. We are using a module from the terraform-aws-vault repo to create our key material. This runs on our Vault server as part of initial setup. The CA certificate and the server certificate (and private key) are placed into the appropriate locations for Vault to pick up.

terraform init
terraform apply -var ca_public_key_file_path=/opt/vault/tls/ca.crt.pem \
  -var public_key_file_path=/opt/vault/tls/vault.crt.pem \
  -var private_key_file_path=/opt/vault/tls/vault.key.pem \
  -var owner=vault \
  -var organization_name='FCAL' \
  -var ca_common_name=infra.fluffycloudsandlines.blog \
  -var common_name=vault-demo.infra.fluffycloudsandlines.blog \
  -var dns_names='["vault-demo.infra.fluffycloudsandlines.blog"]' \
  -var ip_addresses='["127.0.0.1"]' \
  -var validity_period_hours=24 \
  -auto-approve

Next we need to configure Vault and initialise it. Initialisation creates a new database to store our data and configuration. Once it is created, it needs unsealing. Unsealing is also required after a server restart, so the keys that are created need to be kept.

They need to be kept securely, as they are what protects your Vault from being decrypted by a malicious party. This is where we cut a corner to fully automate our lab: the unseal key is persisted to disk for easy reference later on.

In a production scenario unseal keys should be securely distributed to and retained by key operators, or persisted into a cloud key service, for example AWS KMS.
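For reference, auto-unseal with AWS KMS is configured with a seal stanza in the Vault server configuration; the region and key alias below are placeholders, not values from this lab:

```hcl
seal "awskms" {
  region     = "eu-west-1"
  kms_key_id = "alias/vault-unseal-key"
}
```

With this in place, Vault fetches the unseal material from KMS at startup, so no operator-held key shares are needed for routine restarts.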

/opt/vault/bin/run-vault --tls-cert-file /opt/vault/tls/vault.crt.pem --tls-key-file /opt/vault/tls/vault.key.pem

sed -i -e '/storage "consul"/,/}/d' /opt/vault/config/default.hcl
systemctl restart vault.service

sleep 3
/opt/vault/bin/vault operator init > /opt/vault/config/init_output

export PATH=$PATH:/opt/vault/bin
export VAULT_ROOT_KEY=$(cat /opt/vault/config/init_output | grep "Root Token" | cut -d ":" -f 2 | xargs)
export VAULT_UK1_KEY=$(cat /opt/vault/config/init_output | grep "Unseal Key 1" | cut -d ":" -f 2 | xargs)
export VAULT_UK2_KEY=$(cat /opt/vault/config/init_output | grep "Unseal Key 2" | cut -d ":" -f 2 | xargs)
export VAULT_UK3_KEY=$(cat /opt/vault/config/init_output | grep "Unseal Key 3" | cut -d ":" -f 2 | xargs)
vault operator unseal $VAULT_UK1_KEY
vault operator unseal $VAULT_UK2_KEY
vault operator unseal $VAULT_UK3_KEY

We're on the home straight now. Vault is ready to go; now we need to log in and configure our PKI.

vault login $VAULT_ROOT_KEY
vault secrets enable pki
vault secrets tune -max-lease-ttl=8760h pki

vault write pki/root/generate/internal \
  common_name=$WINDOWS_DOMAIN \
  ttl=8760h

vault write pki/config/urls \
  issuing_certificates="http://127.0.0.1:8200/v1/pki/ca" \
  crl_distribution_points="http://127.0.0.1:8200/v1/pki/crl"

vault write pki/roles/rdp-cert \
  allowed_domains=$WINDOWS_DOMAIN \
  allow_subdomains=true \
  max_ttl=72h \
  ext_key_usage_oids=1.3.6.1.4.1.311.54.1.2 \
  key_usage="" \
  ext_key_usage=""

vault policy write "rdp-policy" -<<EOF
path "pki/sign/rdp-cert" {
  capabilities = ["read", "update"]
}
EOF

Finally, we need clients to be able to log in, so let's configure our AWS IAM authentication:

vault auth enable aws

vault write auth/aws/role/rdp-issue-iam \
  auth_type=iam \
  bound_iam_principal_arn=arn:aws:iam::040224243460:role/${role_name} \
  policies=rdp-policy max_ttl=500h

vault write auth/aws/config/client iam_server_id_header_value=vault-demo.infra.fluffycloudsandlines.blog

And that's pretty much it. The full script can be found under demo_environment/userdata.sh.

Windows Client

On Windows client startup, the user-data will:

- Download Vault from HashiCorp and add it to the PATH,
- Download the certreq templates and the scheduled task script from the GitLab repo above,
- Install the scheduled task to run on boot.

A working directory for ongoing operation is created under c:\windows\vault_rdp_sign. A certreq-compliant INI file, and the scheduled task that creates the certificate request, asks Vault to sign it and imports the resulting certificate, are kept here.

The client RDP request script is broken down into a few steps.

First we need to establish the FQDN of the host. If this were an AD-joined machine, this would be easy to establish; however, as this is a workgroup machine, it is effectively the host name and the primary DNS suffix of the first network adaptor joined together.

We request a new certificate using the template rdp-certificate.ini. This specifies the custom OID that Microsoft recommends as good practice for RDP certificates. Creating the CSR on the host and then asking Vault to sign it means the private key never leaves the host being secured.

$fqdn = ([System.Net.Dns]::GetHostByName(($env:computerName))).Hostname
$csr_filename = -join ((65..90) + (97..122) | Get-Random -Count 15 | % {[char]$_})

$ini_file = Get-Content -Path "${env:SYSTEMROOT}\vault_rdp_sign\rdp-certificate.ini.tmpl"
$ini_file -Replace 'TMPL_HOSTNAME',$fqdn | Set-Content -Path "${env:SYSTEMROOT}\vault_rdp_sign\rdp-certificate.ini"

certreq -new "${env:SYSTEMROOT}\vault_rdp_sign\rdp-certificate.ini" "${env:TEMP}\${csr_filename}.csr"

Then we log in to Vault using the EC2 instance's IAM role, sign our CSR and grab the resulting signed certificate from the response JSON.

vault login -method=aws header_value=vault-demo.infra.fluffycloudsandlines.blog role=rdp-issue-iam

$sign_response = $(vault write pki/sign/rdp-cert csr=@"${env:TEMP}\${csr_filename}.csr")

$cert_json = ($sign_response | ConvertFrom-Json)

Set-Content -Path .\signed_cert.crt -Value $cert_json.data.certificate

Finally, we need to import our certificate into the machine's certificate store and tell Windows to use it for RDP connections. The change is instant!

$cert_import_response = Import-Certificate -FilePath .\signed_cert.crt -CertStoreLocation Cert:\LocalMachine\My

$cert_thumbprint = $cert_import_response.Thumbprint

wmic /namespace:\\root\CIMV2\TerminalServices PATH Win32_TSGeneralSetting Set SSLCertificateSHA1Hash="${cert_thumbprint}"

The alternative to the CSR / sign cycle used here would be a certificate-plus-key request-and-import cycle (using pki/issue), but this would mean careful handling of the private key generated by Vault and returned to the client. Windows can be very picky about private key handling (a good thing), so in this lab I have gone for simplicity. I have seen consul-template used for this job quite effectively, as the post-fetch hook can then be used to import the private key and invoke wmic to import the certificate.
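On a non-Windows host, the same keep-the-key-local CSR flow can be sketched with openssl; the common name below is a hypothetical host name, and certreq plays this role on the Windows client:

```shell
# Generate a private key locally; it never has to leave this machine.
openssl genrsa -out host.key 2048

# Build a CSR for the host's FQDN; only this CSR is sent to the CA
# (Vault's pki/sign/rdp-cert endpoint in this lab) for signing.
openssl req -new -key host.key \
  -subj "/CN=win-client.infra.fluffycloudsandlines.blog" \
  -out host.csr

# Sanity-check the request before submitting it.
openssl req -in host.csr -noout -subject
```

The signed certificate that comes back can then be installed alongside the key that was generated in place.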

So, wrapping up, simple question? Yes. Simple answer? Ish. Massive scope creep? Most definitely!

© 2024 by Fluffy Clouds and Lines. All rights reserved.