
System misconfiguration is the number one vulnerability, at least for Mastodon

I was one click away from starting a revolution on infosec.exchange, Twitter's infosec replacement

One time during a security engineering interview someone asked me

What is the number one vulnerability?

That question caught me by surprise. I immediately started thinking about the OWASP Top 10, RCE, 0-days and things like that, then I remembered the security incidents I've dealt with in the past: most of them were related to employees accidentally exposing credentials or private keys, so I responded with "developers pushing credentials into public repositories".

The interviewer smiled at me; she liked my answer, but clearly I was wrong. She said:

The number one vulnerability is system misconfiguration

Today I'm going to explain why this is true and how I could have replaced everyone's profile picture (or any other user-uploaded content) with a meme at the infosec.exchange Mastodon instance.


With all the drama happening lately on Twitter I decided to look at Mastodon, a free and open-source distributed social network, which is super cool by itself. Most of the cybersecurity people from Twitter are gathering at infosec.exchange now, so I went there and created an account.

Within minutes I started receiving greetings :). Then I started wondering (my hacker mind kicking in): where is all this user-uploaded content stored?

A quick look at the page source revealed the content was coming from https://media.infosec.exchange/infosecmedia, and when I visited the URL in my browser I found this:

This looked very similar to an AWS S3 XML response, and "minio" was displayed in the response as well, so I immediately knew infosec.exchange was using MinIO.

Full disclosure, at the time of this publication on 11/18/2022, I’m working as a Security Software Engineer at MinIO. I’m very proud our product is being used by millions around the globe and with Mastodon getting a lot of adoption MinIO will only increase in popularity.

Anyways, my hacker mind (again) started wondering. 

  1. User content is uploaded to MinIO buckets
  2. Buckets must have anonymous read access in order for the browser, or really any other client, to access those resources
  3. Can I query those resources using a tool like the MinIO Client?

I opened my terminal and typed

mc alias set infosec https://media.infosec.exchange/ "" "" --debug

It worked, which was expected since I knew S3 clients don't need any authentication to download content from buckets with anonymous access. With this I was able to list all the content at https://media.infosec.exchange/infosecmedia.

mc ls infosec/infosecmedia

I noticed there were additional folders in there and I could access them with anonymous credentials as well; this is roughly the equivalent of a Directory Traversal vulnerability.

So far I had tried read and list operations only (in S3 IAM policy language, the Get and List object operations), but what about Create/Upload and Delete?
I identified where the logo of the server was stored inside the https://media.infosec.exchange/infosecmedia bucket, downloaded it with mc, modified it a little bit, uploaded the modified version, and it worked!
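For reference, the round trip with the MinIO Client looked roughly like the sketch below (the object path is made up for illustration; the real one was different):

mc cp infosec/infosecmedia/site_uploads/logo.png ./logo.png   # download the original logo
# ... edit logo.png locally ...
mc cp ./logo.png infosec/infosecmedia/site_uploads/logo.png   # upload the modified version over the original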

Original:

Modified:

I added a tiny label at the bottom of the logo that says infosec.exchange so the change would go unnoticed; my intention was never to disrupt the server.

At this point I realized the anonymous credentials had s3:* privileges, and I could do anything I wanted content-wise (see the sketch after this list for how this could be locked down), including:

  • Download all the files in the server, including files shared via Direct Messages.
  • Delete all the files in the server
  • Replace everyone’s profile picture with an Elon Musk meme, jk but it was possible to replace all the existing files
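For anyone running their own instance, here is a sketch of how the bucket policy could be reviewed and locked down with the MinIO Client (the alias and bucket names are assumptions, and older mc releases expose this as mc policy instead of mc anonymous):

mc anonymous get myminio/infosecmedia          # show the current anonymous access policy
mc anonymous set download myminio/infosecmedia # allow anonymous downloads only, no uploads or deletes
mc anonymous set none myminio/infosecmedia     # or remove anonymous access entirely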

This system misconfiguration at the object storage level defeats whatever security mechanism Mastodon has on top.

Timeline of events and disclosure

  • 11/17/2022 – I created my infosec.exchange account and started playing around
  • 11/17/2022 – Found anonymous access was enabled and all the files were exposed
  • 11/17/2022 – Reached out to [email protected] and reported the issue
  • 11/18/2022 – Jerry confirmed he was aware of the issue and working on a fix
  • 11/18/2022 – Issue got fixed, thank you so much Jerry.

After reporting this issue I wondered if there might be other Mastodon instances with the same issue (system misconfiguration), so I went to fediverse.space and started checking some of the most popular ones. I found similar problems with a couple of them and have already reported those vulnerabilities.

Happy hacking

TLS Certificates 101

Over the last couple of years I've been involved in many projects that require TLS certificates and other technologies to provide security and establish encryption in transit for network communications. These technologies involve different concepts, protocols and standards such as mTLS, X.509, PKI, digital signatures, HMAC, symmetric and asymmetric encryption, different cryptographic algorithms, etc., and they can feel very overwhelming, especially for people who are new to the topic.

I decided to write a quick blog post and share some of the lessons I’ve learned over the years. It is my intention to provide some guidance to newcomers when it comes to debugging these kinds of issues.

I don't consider myself an expert in the whole TLS topic (I'm not a cryptographer), however I've spent considerable time on this, to the point that I feel very confident when debugging most issues related to TLS.

I firmly believe an expert is a person who has made so many errors and mistakes in the past that they have accumulated enough knowledge to know almost all the answers. I still need to make more 😉

Having said that, let’s start our journey into the TLS world.

What is TLS?

TLS, short for Transport Layer Security, is a cryptographic protocol designed to provide communications security over a computer network.

The TLS protocol is a complex piece of engineering with many security mechanisms and moving parts, used to achieve security of data in transit by combining symmetric and asymmetric encryption techniques. If you want to learn more details about this technology I highly recommend looking at the TLS Wikipedia page; there is also an awesome diagram by Wazen Shbair that shows the different steps that happen during the TLS handshake process, such as choosing a cipher suite or exchanging a shared key.

There are many different versions for this protocol but for practical purposes the only thing you need to know is:

  • TLS 1.0 and TLS 1.1 are known to be vulnerable, so you should never use them
  • TLS 1.2 is considered much more secure and is the recommended minimum
  • TLS 1.3 is an improved version of TLS 1.2 in terms of performance

What are TLS Certificates?

Excellent, now that you know TLS is the technology behind TLS certificates you may be wondering: what are TLS certificates?

A TLS certificate is an implementation of an X.509 certificate. X.509 is a standard that defines the file format used to store information related to an entity (among other things). This is very important because the main purpose of a certificate is to provide an identity to an entity, and an entity can be anything: a website domain, a server, a piece of software, a workstation, a laptop or even a person. Similar to real life, people (entities) have birth certificates and driver's licenses that prove who they are; these documents are backed by government institutions that we all trust (well … kinda). If you ask someone to prove their identity that person will probably show you their ID, and if the ID looks damaged or you think there's something phishy you can ask for additional forms of identification until you are convinced that you can trust them.

A TLS certificate looks like this:

The TLS certificate contains many different fields like:

  • Subject name: the entity name, person name, website domain, etc
  • Issuer name: the authority that issued the certificate
  • Period of validity: the certificate is not valid before or after these dates
  • Additional cryptographic information and digital signatures

When an application (like your web browser) connects to a website by typing its IP address or domain, e.g. www.alevsk.com, the server behind it will return a TLS certificate. The browser will then look at it and decide what to do next (exchange keys and establish a secure connection) based on the fields above.

As you can see, it is the client's responsibility to verify these TLS certificates; the server may offer a perfectly fine certificate and the client could still choose to throw it away. With that said, I decided to write this blog post to share my experience debugging server applications for countless hours just to find that the issue was on the client side or in the certificate itself. Customers will swear the server is broken and not working when in reality their client doesn't trust the certificate authority or their clock is misconfigured.

But before doing that I want to show you a couple of examples of TLS certificates being used on some applications, for that I have prepared a couple of code snippets.

Suppose you are running a simple web server written in Go like this:

package main

import (
  "fmt"
  "net/http"
)

func hello(w http.ResponseWriter, req *http.Request) {
  // read the "secret" query parameter and echo it back to the client
  secret := req.URL.Query().Get("secret")
  fmt.Fprintf(w, "pong %s\n", secret)
}

func main() {
  address := "0.0.0.0:8080"
  http.HandleFunc("/ping", hello)
  fmt.Println("Starting server on", address)
  if err := http.ListenAndServe(address, nil); err != nil {
     fmt.Println(err) // report the error instead of failing silently
  }
}

Let's query the server running on port 8080 with curl:

curl http://localhost:8080/ping\?secret\=eaeae
pong eaeae

With curl I'm performing a GET request and the server is replying fine. Notice how I'm passing a secret via the secret parameter in the URL. Everything looks good so far, but there's a problem.

This is an insecure web server, hence all the traffic going from the client (curl command) to the server (Go application) can be captured and the secret can be retrieved in plain text.
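You can verify this yourself with a quick packet capture; a sketch with tcpdump (assuming the traffic goes over the loopback interface, Wireshark works just as well):

sudo tcpdump -A -i lo 'tcp port 8080' | grep secret
# the request line, including secret=eaeae, shows up in clear text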

Let’s fix this by adding a TLS certificate to this web server, but first, how do we get one? Well, there are two types of certificates: self-signed certificates and certificates issued/signed by a certificate authority.

A certificate authority is an entity that stores, signs, and issues digital certificates. Going back to the society metaphor, it would be the equivalent of a government institution that many people trust.

  • Self-signed certificate: an ID document that you crafted yourself and that nobody else considers valid
  • Certificate signed by a certificate authority: a birth certificate, driver's license or ID document that is backed by a government institution
  • Certificate authority: the government institution that most people trust by default

Using a TLS certificate signed by a certificate authority has many advantages that I'm going to discuss in some other blog post; for now it's enough to say that in most cases you have to pay to get one, but it's worth it. Now let's generate some certificates.

Tools for playing with TLS certificates

I’ll focus on self-signed certificates for this example and I’m going to show you how to generate them using three different tools.

Mkcert

Mkcert is a tool created by Filippo Valsorda. Mkcert is a simple zero-config tool to make locally trusted development certificates with any names you'd like; you can download and install it following the instructions from its GitHub repository https://github.com/FiloSottile/mkcert

mkcert "localhost"

#Created a new certificate valid for the following names 📜
# - "localhost"

#The certificate is at "./localhost.pem" and the key at #"./localhost-key.pem" ✅

#It will expire on 14 December 2024 🗓

Certgen

Certgen is a tool created by MinIO, the startup I currently work for. Certgen is a dead simple tool to generate self-signed certificates; you can download and install it following the instructions from its GitHub repository https://github.com/minio/certgen

certgen --host localhost
#2022/09/14 21:58:54 wrote public.crt
#2022/09/14 21:58:54 wrote private.key

OpenSSL 

OpenSSL is a tool that doesn't require any introduction; it has been part of the TLS arsenal of system administrators and network engineers for decades. If you wish to use openssl to generate certificates the process is a little more manual than with the tools above, but it is still fairly simple.

openssl genrsa -out private.key 2048

cat >> certificate.cnf << 'END'
[req]
distinguished_name = req_distinguished_name
req_extensions = req_ext
prompt = no

[req_distinguished_name]
O = "http-server"
C = US
CN  = "localhost"

[req_ext]
subjectAltName = @alt_names

[alt_names]
DNS.1 = localhost
END

openssl req -new -config certificate.cnf -key private.key -out certificate.csr

openssl x509 -signkey private.key -in certificate.csr -req -days 365 -out public.crt

#Certificate request self-signature ok
#subject=O = http-server, C = US, CN = localhost

It doesn't really matter which tool you use to get your certificates as long as they are not malformed. Going back to our Go code, I made a couple of changes and it now looks like this:

package main

import (
  "fmt"
  "net/http"
)

func helloTLS(w http.ResponseWriter, req *http.Request) {
  // read the "secret" query parameter and echo it back to the client
  secret := req.URL.Query().Get("secret")
  fmt.Fprintf(w, "pong %s\n", secret)
}

func main() {
  address := "0.0.0.0:8080"
  http.HandleFunc("/ping", helloTLS)
  fmt.Println("Starting server on", address)

  if err := http.ListenAndServeTLS(address, "public.crt", "private.key", nil); err != nil {
     fmt.Println(err) // report the error instead of failing silently
  }
}

The most relevant change is that I'm now using the function http.ListenAndServeTLS, which is pretty self-explanatory. This function allows me to pass two files: the TLS certificate (which contains the public key) and the corresponding private key. As mentioned before, the TLS certificate contains information about the expiration date, valid domains, digital signature, certificate authority, among others, that will be relevant to the client, while the private key remains secret on the server and is used by the Go application to decrypt encrypted messages sent by the client during the TLS handshake. If you want to learn more about public key cryptography I highly recommend looking at the PKI Wikipedia page.

I’ll run the Go program again and this time I’ll use the https protocol in the curl command.

curl https://localhost:8080/ping\?secret\=eaeae -k
pong eaeae

Before continuing I have to mention that, just for the sake of the example and to demonstrate how TLS is securing the communication, I've included the -k flag in the curl command; passing this flag makes curl skip certificate verification. If you inspect the traffic in Wireshark this time you'll only see the TLS handshake and encrypted data after that, no more parameters in plain text.

TLS certificates

If you are not able to inspect SSL/TLS traffic in Wireshark, try adding the custom server port under Edit -> Preferences -> Protocols -> HTTP -> SSL/TLS Ports, e.g. for port 8080 change the value to 443,8080.

Debugging TLS certificate issues

Now comes the fun part, and where I've "invested" countless hours trying to fix other people's problems: debugging TLS certificate issues.

If I go back and remove the -k flag from my curl command I get the following output.

curl https://localhost:8080/ping\?secret\=eaeae                                     
#curl: (60) SSL certificate problem: self-signed certificate
#More details here: https://curl.se/docs/sslcerts.html

#curl failed to verify the legitimacy of the server and therefore could not
#establish a secure connection to it. To learn more about this situation and
#how to fix it, please visit the web page mentioned above.

The error message is pretty clear: curl is failing to verify the certificate because this is a self-signed certificate, meaning it cannot be trusted. How does curl know this is a self-signed certificate?

Well, it is very simple. Remember when I said there were only two types of certificates?

There are two fields in the certificate: Subject Name and Issuer Name. If both fields match, this is a self-signed certificate. If they don't, this is a certificate issued/signed by a certificate authority, which may or may not be trusted by the client.
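You can check this yourself with openssl; for a self-signed certificate the subject and issuer lines will be identical (public.crt here is the certificate generated earlier, the values shown are illustrative):

openssl x509 -in public.crt -noout -subject -issuer
# subject=O = http-server, C = US, CN = localhost
# issuer=O = http-server, C = US, CN = localhost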

The solution for this is quite simple: in your client application use the same certificate to verify its authenticity, since the certificate is technically its own authority. With curl you need to include the --cacert flag and specify the path to the public.crt used by the server.

curl https://localhost:8080/ping\?secret\=eaeae --cacert public.crt
pong eaeae

The above is one of the most common TLS issues I’ve encountered in the past few years and I hope I did a good job explaining the root cause, how to approach the problem and finally how to find a solution.

Next is a list of the most common TLS issues I've seen and some advice on how to debug them:

TLS Error: SSL certificate problem: self-signed certificate
Debugging advice: Ask people to provide you with the public key of the server (or download it yourself with openssl, e.g.: echo | openssl s_client -servername www.alevsk.com -connect www.alevsk.com:443 | sed -n '/^-----BEGIN CERT/,/^-----END CERT/p' > public.crt) and then pass it with the --cacert flag in the curl command.

TLS Error: SSL certificate problem: unable to get local issuer certificate
Debugging advice: Update the CA certificates on the client machine (sudo update-ca-certificates) or ask for the ca.crt (the certificate authority certificate) and pass it with the --cacert flag in the curl command.

TLS Error: X.509 Certificate Signed by Unknown Authority
Debugging advice: The client doesn't trust the certificate authority that issued the certificate. If you have the ca.crt files you can use openssl to verify the chain of trust: openssl verify -verbose -CAfile root.pem -untrusted intermediate.pem server.pem

TLS Error: SSL: no alternative certificate subject name matches target host name 'XXXX'
Debugging advice: The certificate used by the server is not valid for any of the domains the client is using to connect. Are you using an IP address instead of a domain name? Check the whole list of domains inside the certificate and make sure they match what you are using in your curl command.

TLS Error: SSL certificate problem: certificate has expired
Debugging advice: Make sure the client and server clocks are correctly configured and in sync.

TLS Error: SSL certificate problem: certificate is not yet valid
Debugging advice: Make sure the client and server clocks are correctly configured and in sync.

TLS Error: SSL: certificate subject name 'XXXX' does not match target host name 'YYYY'
Debugging advice: Same as above: the certificate used by the server is not valid for any of the domains the client is using to connect. Check the list of domains inside the certificate and make sure they match what you are using in your curl command.

TLS Error: Public key certificate and private key don't match
Debugging advice: Pretty self-explanatory; I've seen this happen mostly when people copy a bunch of keypairs around and get confused. You can use openssl to verify this.

TLS Error: Connection error: The SSL connection could not be established, see inner exception.
Debugging advice: Check which version of TLS the client is using; it is most probably an old vulnerable version like 1.0 or 1.1, so you need to upgrade your client to TLS 1.2 or above.

Test this using:

curl --tls-max 1.0 https://<endpoint>

As I remember more errors and the ways I approached them to find a solution I may add them to the list. Most of the errors are self-explanatory, but for some reason users struggle with them; when it comes to TLS errors people may think it's some kind of obscure or arcane technology, but it's not.

Takeaway

As I said before, I've spent countless hours debugging these kinds of issues, and my main advice is: instead of jumping directly into pulling server/application logs, first look at the client side, and always use the curl command first for testing. If the customer provides you with client certificates for mTLS authentication or a ca.crt file and you cannot even make them work with curl, then the issue is definitely on the client side and not in your application.

  • Pay attention to how the client is referencing the server application: which domain is the client using in the curl command?
  • Make sure the client and server clocks are correctly configured and in sync
  • If the certificate contains wildcards make sure those are valid for the domain the server is using and the client is referencing
  • Make sure the public key and the private key match; you can use OpenSSL to verify this (see the sketch right after this list)
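For that last check, a quick sketch with openssl for RSA keypairs (using the file names generated earlier): compare the modulus of both files, the two hashes must be identical.

openssl x509 -noout -modulus -in public.crt | openssl md5
openssl rsa -noout -modulus -in private.key | openssl md5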

Here are some useful TLS certificate debugging tools I use:

Happy hacking

Pfsense + UDM + VLANs: The perfect home network

A couple of weeks ago I did a major reconfiguration of my home network: I migrated from a single flat insecure network where any device was able to talk to any other, to a more secure design where the network is segmented (IoT devices, guests, home lab, etc.) and where I control who has access to what resources via firewall rules and other tools.

My original home network consisted of a single Google Wifi router; the device is limited but gets the job done. However, I wanted to learn more about networking, and in particular how to configure monitoring tools, network packet inspection, security, firewall rules, etc. So I started looking at networking appliances that would let me do more advanced configurations, quickly found out about Pfsense (running on a Protectli Vault), and got one.

Additionally, as a birthday gift from @perrohunter, I got The Dream Machine from Ubiquiti (usually you would use one or the other), so now I had two routers.

I had to integrate them together, but I faced a couple of issues during the process, to the point where I got locked out of the network and had to reset the devices multiple times; either the Pfsense or the UDM would work, but not both at the same time. After some time it's finally working, so I decided to document the process in case it helps someone in the future.

Designing the network

The main goal was to have a clear separation between IoT devices, guest devices and my home devices, so I came up with this design.

Disclaimer: I'm a security software engineer and I only know a thing or two about networking, so if you see something wrong or think this design can be improved in any way please let me know.

As you can see, I'm putting the Pfsense at the edge of the network so I have full control over the traffic. I'm using the UDM as an access point only, because most of the routing and DNS resolution will be done by Pfsense. The home network consists of three networks.

IoT network VLAN 30

All my smart lights, Roomba, smart locks and cameras will be here; these devices cannot communicate with the other networks or connect to the Internet. Only wireless devices will connect to this network.

Guests network VLAN 50

Occasionally I get visitors at my place; guests can connect to this network and enjoy access to the Internet, however devices here will not be able to talk to devices on the IoT or the LAN network. TODO: I want to put some rules in place so guests' devices are fully isolated from each other. Only wireless devices will connect to this network.

LAN 

This is the main network and it's a combination of wired and wireless devices: my workstations, laptops, mobile devices, home servers, smart TV, gaming consoles, etc. These are devices that I trust and most of them have static IPs and DNS names.

Setup

I'm not going to explain in detail how to do the initial configuration of the Pfsense or the UDM; there are thousands of videos and tutorials that can guide you through that. Instead I'll focus on the parts I struggled with the most and the "hacks" I applied to make this work.

Pfsense setup

These devices usually come with two ports, WAN and LAN. I had to connect the Ethernet cable from the modem to the WAN port (also called an interface), and in most cases that is enough for the device to talk to the Internet. After that, during the initial configuration, Pfsense asked me to configure the LAN interface; there I chose the network IP, IP range, etc. In my case I selected 10.13.37.1/24 as my network IP range.

You can tweak and do some more advanced configurations under Services > DHCP Server > LAN

DHCP got configured automatically for this interface so I didn’t worry about it.

After that I grabbed another Ethernet cable and connected it into the Pfsense LAN port and the UDM WAN port.

The Dream Machine (UDM) setup

Here is where the issues began. I connected the Ethernet cable to the UDM, the app guided me through the initial configuration, then I created the initial wireless network and everything seemed to work fine. However, after looking at Status > DHCP Leases on my Pfsense I could not see any of my wireless devices, which was weird.

I logged into the UDM management console and I could see my wireless network, the default network and the WAN interface. I could also see all my connected devices, however the assigned IP addresses were in the 192.168.1.1/24 range, not 10.13.37.1/24. So I had some idea of what was happening: the UDM had its own DHCP server and was assigning the IP addresses itself.

I started trying many different things; some of them were:

  • Disabling DHCP in the default network of the UDM didn't work.
  • Changing the network range of the default network of the UDM to 10.13.37.1/24 didn't work; the UDM complained that the range conflicted with the IP assigned to it (10.13.37.2).
  • Creating an additional network on the range I wanted (10.13.37.1/24) didn't work; devices there were not able to see the Pfsense.

I tried many more things, and after a couple of weekends of trial and error I found the winning combination of steps. This is probably the most important part of this article.

  • Disconnect the Ethernet cable from the UDM WAN port; this causes the UDM to lose the IP assigned by the Pfsense.
  • Change the default network configuration in the UDM to use the 10.13.37.3/24 network. This network will overlap with the 10.13.37.1/24 network in Pfsense, but that's ok; also set DHCP Mode to none.
  • In the UDM go to Internet > default WAN and select manual configuration. Here I'm setting the primary DNS server to 10.13.37.1 (Pfsense), and the IPv4 configuration has to be as follows:

Here I'm telling the UDM the next hop will be at 10.13.37.1 (Pfsense), that I want the UDM to use the static IP 10.13.37.2, and that the subnet mask will be 255.255.255.248 (a /29), which ended up being the "hack" that allowed me to use the 10.13.37.x range on the default network.

Finally, plug the Ethernet cable into the UDM again, but this time into any of the LAN ports, not the WAN port (the little world icon). Seriously, avoid the WAN port!

The reason I want the default network in the UDM to overlap with the 10.13.37.1/24 network in the Pfsense is that otherwise I would lose access to the UDM management console. I'm still trying to figure out why that is, but my guess is that even though the UDM is reachable from the Pfsense network on the 10.13.37.2 IP address, when I try to go there (with the default network range configured as 192.168.1.1 on the UDM) it won't let me in because of some validation on the UDM. To avoid this I ended up creating a dedicated wireless network just to recover access (after getting locked out multiple times).

Using the above configuration my devices in the 10.13.37.1/24 range are able to talk to Pfsense (10.13.37.1) and also the UDM (10.13.37.3) and finally I’m able to see and control my devices from the Pfsense as well.

VLANs

Network interfaces

The main network is working fine; now what? I started creating additional VLANs and firewall rules for the guest and IoT networks. On the Pfsense I went to Interfaces > Assignments > VLANs and added the two VLANs. It's very important to select LAN as the parent interface because all the traffic is going to come through that port.

For no particular reason I chose tag 30 for the IoT VLAN and tag 50 for the guest VLAN. Don't forget to assign the new VLANs to the LAN interface and create the new networks.

To be consistent I decided the guest network range would be 10.13.50.1/24 and the IoT network would follow with 10.13.30.1/24.

DHCP Server

Now it was time to configure the DHCP server for the new networks. I went to Services > DHCP Server, made sure the enable DHCP box was checked, and configured the assignable IP range. I did this for both networks.

Firewall rules

According to my original design the guest and IoT networks have to be isolated from everything else, and in particular the IoT devices should not have any access to the Internet. Let's do that very quickly by configuring firewall rules on Pfsense (Firewall > Rules).

These are the rules applied to the IoT_VLAN. Here I'm telling Pfsense to block any incoming connection from the IoT network to the home or guest networks, and I'm also blocking access to the Pfsense management console itself on ports 8443 and 3000. By default this firewall will block any egress traffic on the network, and because I'm not saying otherwise this network will not have access to the Internet.

The guest firewall rules are pretty much the same with the exception that I will allow users to access the Internet (see the last rule).

The Dream Machine

At this point I was done with the Pfsense part, but I was missing one last important piece: configuring the access method for the IoT and guest devices. For that I had to return to the UDM management console and create a couple of wireless and network configurations.

Guest Network

I created the new guest network configuration. Most of the default values were ok, but I had to pay special attention to the VLAN ID section; this one has to match the one I configured on Pfsense (tag 50). Also, it is very important to set DHCP Mode to None.

I created the WiFi network and told the UDM to use the guest network; all packets will be tagged (tag 50) and managed by the guest VLAN.

IoT Network

I repeated the previous steps, this time for the IoT network: I created the network, added the right VLAN (tag 30), disabled DHCP, and then configured the WiFi network as well.

Testing

Once everything was configured the way I wanted, I tested it by connecting a couple of devices to the IoT network and monitoring the traffic with the help of ntopng (maybe I will write a blog post about it in the future); there I confirmed there was not a single request to a remote address.

Conclusion

Designing a network is one of the most fun things you can do in IT. The main reason for me to get the Pfsense was because I wanted to learn more about networking and have hands-on experience with several networking and security tools. VLANs, Firewall rules, DHCP, DNS, packet inspection etc are good skills for a security engineer but these are only the tip of the iceberg for a network engineer.

Weak TLS cipher suites

HTTP and HTTPS are well known Internet protocols that don’t require any introduction. The other day at work as part of a daily security scan one of our servers got tagged as using weak cipher suites during TLS negotiation. In this quick post I’ll explain what a weak cipher suite means and how to fix it.

There are many tools out there to check if you are following security best practices when it comes to SSL/TLS server configuration (supported versions, accepted cipher suites, certificate transparency, expiration, etc.); two of my favorites are https://www.ssllabs.com/ssltest/analyze.html and drwetter/testssl.sh.

SSLlabs.com is easy to use: you just have to enter a hostname and the website will analyze all possible TLS configurations and calculate a score for you; it will also tell you what you can do to improve that score.

There are many TLS protocol versions: 1.0, 1.1, 1.2, 1.3. The first two are considered insecure and should not be used so I will focus on 1.2 and 1.3 only.
In my case SSLlabs.com was complaining that weak cipher suites were supported for TLS 1.2:

The above report shows ECDHE-RSA-AES256-SHA384 and ECDHE-RSA-AES256-SHA as weak cipher suites.

First let’s clarify a couple of things, according to Wikipedia:

Cipher suite: A set of algorithms that help secure a network connection.

So a weak cipher suite is one that uses algorithms with known vulnerabilities, which attackers can exploit to downgrade connections or do other nefarious things.

Fixing this is very easy and will require changing a line or two on the server configuration, deploying the changes and then testing again using SSLlabs.com.

Of course, the above only applies if you know exactly what you are doing, otherwise it will take you many attempts and you are going to waste precious time.

Reproduce and fix the issue locally

Suppose you have the same issue I had. Because this issue was reported on an Nginx server, the task now is to reproduce the issue locally and come up with a fix. With modern technologies like docker containers it's very easy to run a local Nginx server; the only thing you need is a copy of the nginx.conf. The command to run the docker container is:

docker run --name mynginx -v ./public.pem:/etc/nginx/public.pem -v ./private.pem:/etc/nginx/private.pem -v ./nginx.conf:/etc/nginx/nginx.conf:ro -p 1337:443 --rm nginx

The above command tells docker to run an Nginx container, binding the local port 1337 to the container port 443 and also mounting the public.pem (public key), private.pem (private key) and nginx.conf (the server configuration) files inside the container.

The nginx.conf looks like this:

    server {
        listen 443 ssl;
        server_name www.alevsk.com;
        ssl_certificate /etc/nginx/public.pem;
        ssl_certificate_key /etc/nginx/private.pem;
        ssl_protocols TLSv1.3 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ecdh_curve secp521r1:secp384r1:prime256v1;
        ssl_ciphers EECDH+AESGCM:EECDH+AES256:EECDH+CHACHA20;
        ssl_session_cache shared:TLS:2m;
        ssl_buffer_size 4k;
        add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains; preload' always;
    }

You can test the server is working fine with a simple curl command:

curl https://localhost:1337/ -v -k
*   Trying 127.0.0.1:1337...
* Connected to localhost (127.0.0.1) port 1337 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
*  CAfile: /etc/ssl/certs/ca-certificates.crt
*  CApath: /etc/ssl/certs
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (OUT), TLS header, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS header, Finished (20):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
....
....
....
> 
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* Mark bundle as not supporting multiuse
< HTTP/1.1 404 Not Found
< Server: nginx
< Date: Sun, 15 May 2022 23:14:23 GMT
< Content-Type: text/html
< Content-Length: 146
< Connection: keep-alive
< Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
< 
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Connection #0 to host localhost left intact

The next step is to reproduce the report from https://www.ssllabs.com/ but that will involve somehow exposing our local Nginx server to the internet and that’s time consuming. Fortunately there’s an amazing open source tool that will help you to run all these TLS tests locally.

drwetter/testssl.sh is a tool for testing TLS/SSL encryption anywhere on any port, and the best part is that it runs in a container too. Go to your terminal again and run the following command:

docker run --rm -ti --net=host drwetter/testssl.sh localhost:1337

The above command runs the drwetter/testssl.sh container: the --rm flag automatically deletes the container once it's done running (keeping your system nice and clean), -ti means interactive mode, and --net=host allows the container to use the parent host's network namespace.

After a couple of seconds you will see a result similar to the one on the website, something like this:

Testing server’s cipher preferences.

Hexcode  Cipher Suite Name (OpenSSL)       KeyExch.   Encryption  Bits     Cipher Suite Name (IANA/RFC)                        
-----------------------------------------------------------------------------------------------------------------------------  
SSLv2                                                                                                                          
 -                                                                                                                             
SSLv3                                                                                                                          
 -                                                                                                                             
TLSv1                                                                                                                          
 -                                                                                                                             
TLSv1.1                                                                                                                        
 -                                                                                                                             
TLSv1.2 (server order)                                                                                                         
 xc030   ECDHE-RSA-AES256-GCM-SHA384       ECDH 521   AESGCM      256      TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384               
 xc02f   ECDHE-RSA-AES128-GCM-SHA256       ECDH 521   AESGCM      128      TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256               
 xc028   ECDHE-RSA-AES256-SHA384           ECDH 521   AES         256      TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384               
 xc014   ECDHE-RSA-AES256-SHA              ECDH 521   AES         256      TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA                  
 xcca8   ECDHE-RSA-CHACHA20-POLY1305       ECDH 521   ChaCha20    256      TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256         
TLSv1.3 (server order)                                        
 x1302   TLS_AES_256_GCM_SHA384            ECDH 256   AESGCM      256      TLS_AES_256_GCM_SHA384                              
 x1303   TLS_CHACHA20_POLY1305_SHA256      ECDH 256   ChaCha20    256      TLS_CHACHA20_POLY1305_SHA256                        
 x1301   TLS_AES_128_GCM_SHA256            ECDH 256   AESGCM      128      TLS_AES_128_GCM_SHA256

Even on this test we see the weak cipher suites (ECDHE-RSA-AES256-SHA384 and ECDHE-RSA-AES256-SHA).

Great, now you are able to fully reproduce the issue locally and test as many times as you want until you have the perfect configuration. It’s time to do the actual fix.

Open the nginx.conf file one more time, locate the line that starts with ssl_ciphers, and just add !ECDHE-RSA-AES256-SHA384:!ECDHE-RSA-AES256-SHA at the end, i.e.:

 server {
        listen 443 ssl;
        server_name www.alevsk.com;
        ssl_certificate /etc/nginx/public.pem;
        ssl_certificate_key /etc/nginx/private.pem;
        ssl_protocols TLSv1.3 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ecdh_curve secp521r1:secp384r1:prime256v1;
        ssl_ciphers EECDH+AESGCM:EECDH+AES256:EECDH+CHACHA20:!ECDHE-RSA-AES256-SHA384:!ECDHE-RSA-AES256-SHA;
        ssl_session_cache shared:TLS:2m;
        ssl_buffer_size 4k;
        add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains; preload' always;
    }
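Before restarting Nginx you can sanity-check what that cipher string expands to, and confirm the two weak suites are gone, with openssl on your own machine (a local check only, independent of the server):

openssl ciphers -v 'EECDH+AESGCM:EECDH+AES256:EECDH+CHACHA20:!ECDHE-RSA-AES256-SHA384:!ECDHE-RSA-AES256-SHA'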

If you run the Nginx container with the new configuration and then run the drwetter/testssl.sh test again, this time you will see no weak cipher suites anymore.

TLSv1.2 (server order)                                                                                                         
 xc030   ECDHE-RSA-AES256-GCM-SHA384       ECDH 521   AESGCM      256      TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384              
 xc02f   ECDHE-RSA-AES128-GCM-SHA256       ECDH 521   AESGCM      128      TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256              
 xcca8   ECDHE-RSA-CHACHA20-POLY1305       ECDH 521   ChaCha20    256      TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256        
TLSv1.3 (server order)          
 x1302   TLS_AES_256_GCM_SHA384            ECDH 256   AESGCM      256      TLS_AES_256_GCM_SHA384                              
 x1303   TLS_CHACHA20_POLY1305_SHA256      ECDH 256   ChaCha20    256      TLS_CHACHA20_POLY1305_SHA256                        
 x1301   TLS_AES_128_GCM_SHA256            ECDH 256   AESGCM      128      TLS_AES_128_GCM_SHA256
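As an extra check you can ask the server for one of the removed suites directly; with the new configuration the handshake should fail (a sketch, assuming the container is still listening on local port 1337):

openssl s_client -connect localhost:1337 -tls1_2 -cipher ECDHE-RSA-AES256-SHA < /dev/null
# expect an ssl handshake failure, since the server no longer accepts this suite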

Conclusion

Congratulations, you just fixed your first security engineering issue and now you can push the fix to production. In general, when it comes to fixing any kind of problem in tech, it is better to start by reproducing the issue locally and work on a fix from there (of course this is debatable if you are facing an issue that only happens in a specific environment). SSL/TLS and cipher suites are one of those technologies that you have to learn by heart, or at least understand very well, if you want to work in application security. But not only that: once you understand it, it will completely change the way you approach problems and debug security applications.

Happy hacking.

Compilation of open-source security tools & platforms for your Startup

This compilation of open-source tools aims to provide resources you can use for some of the steps of the secure development life cycle of your organization, i.e.:

  1. Security Training
  2. Security Architecture Review
  3. Security Requirements
  4. Threat Modeling
  5. Static Analysis
  6. OpenSource Analysis
  7. Dynamic Analysis
  8. Penetration Testing

If you think I should add a new tool to the list you can open a Github issue or send a PR directly.

User management

Secret management

IDS, IPS, Firewalls and Host/Network monitoring

Data visualization

Web Application Firewall

Object Storage

VPN

Security training platforms

Static analysis tools

Dynamic analysis tools

Misc