
Weak TLS cipher suites

HTTP and HTTPS are well-known Internet protocols that don’t require any introduction. The other day at work, as part of a daily security scan, one of our servers got flagged for supporting weak cipher suites during TLS negotiation. In this quick post I’ll explain what a weak cipher suite means and how to fix it.

There are many tools out there to check whether you are following security best practices for SSL/TLS server configuration (supported versions, accepted cipher suites, certificate transparency, expiration, etc.), but two of my favorites are https://www.ssllabs.com/ssltest/analyze.html and drwetter/testssl.sh.

SSLlabs.com is easy to use: you just enter a hostname and the website analyzes the server’s TLS configuration and calculates a score for you. The tool will also tell you what you can do to improve that score.

There are many TLS protocol versions: 1.0, 1.1, 1.2, 1.3. The first two are considered insecure and should not be used so I will focus on 1.2 and 1.3 only.
In my case SSLlabs.com was complaining that weak cipher suites were supported for TLS 1.2:

The above report is showing ECDHE-RSA-AES256-SHA384 and ECDHE-RSA-AES256-SHA as weak cipher suites.

First let’s clarify a couple of things, according to Wikipedia:

Cipher suite: A set of algorithms that help secure a network connection.

So a weak cipher suite is one that includes algorithms with known weaknesses that attackers can exploit to downgrade connections or do other nefarious things.

Fixing this is very easy and will require changing a line or two in the server configuration, deploying the changes and then testing again using SSLlabs.com.

Of course, the above only applies if you know exactly what you are doing, otherwise it will take you many attempts and you are going to waste precious time.

Reproduce and fix the issue locally

Suppose you have the same issue I had. Because the issue was reported on an Nginx server, the task now is to reproduce it locally and come up with a fix. With modern technologies like Docker containers it’s very easy to run a local Nginx server; the only thing you need is a copy of the nginx.conf. The command to run the container is:

docker run --name mynginx -v ./public.pem:/etc/nginx/public.pem -v ./private.pem:/etc/nginx/private.pem -v ./nginx.conf:/etc/nginx/nginx.conf:ro -p 1337:443 --rm nginx

The above command tells Docker to run an Nginx container, binding local port 1337 to container port 443 and mounting the public.pem (certificate), private.pem (private key) and nginx.conf (server configuration) files inside the container.

The nginx.conf looks like this:

    server {
        listen 443 ssl;
        server_name www.alevsk.com;
        ssl_certificate /etc/nginx/public.pem;
        ssl_certificate_key /etc/nginx/private.pem;
        ssl_protocols TLSv1.3 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ecdh_curve secp521r1:secp384r1:prime256v1;
        ssl_ciphers EECDH+AESGCM:EECDH+AES256:EECDH+CHACHA20;
        ssl_session_cache shared:TLS:2m;
        ssl_buffer_size 4k;
        add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains; preload' always;
    }

You can test the server is working fine with a simple curl command:

curl https://localhost:1337/ -v -k
*   Trying 127.0.0.1:1337...
* Connected to localhost (127.0.0.1) port 1337 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
*  CAfile: /etc/ssl/certs/ca-certificates.crt
*  CApath: /etc/ssl/certs
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (OUT), TLS header, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS header, Finished (20):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
....
....
....
> 
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* Mark bundle as not supporting multiuse
< HTTP/1.1 404 Not Found
< Server: nginx
< Date: Sun, 15 May 2022 23:14:23 GMT
< Content-Type: text/html
< Content-Length: 146
< Connection: keep-alive
< Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
< 
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Connection #0 to host localhost left intact
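You can also ask curl to offer only one specific suite during the handshake, which is a quick way to confirm whether the server still accepts a weak cipher. This assumes a curl build linked against OpenSSL (flag support and cipher names vary between TLS backends):

curl -v -k --tls-max 1.2 --ciphers ECDHE-RSA-AES256-SHA https://localhost:1337/

If the handshake completes, the server still accepts that suite; after applying the fix described below, the same command should fail to negotiate a connection.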

The next step is to reproduce the report from https://www.ssllabs.com/, but that would involve somehow exposing our local Nginx server to the internet, and that’s time consuming. Fortunately there’s an amazing open source tool that lets you run all these TLS tests locally.

drwetter/testssl.sh is a tool for testing TLS/SSL encryption anywhere, on any port, and the best part is that it runs in a container too. Go to your terminal again and run the following command:

docker run --rm -ti --net=host drwetter/testssl.sh localhost:1337

The above command runs the drwetter/testssl.sh container: the --rm flag automatically deletes the container once it’s done running (keeping your system nice and clean), -ti runs it interactively with a TTY attached, and --net=host lets the container use the host’s network namespace.

After a couple of seconds you will see a similar result as in the website, something like this:

Testing server’s cipher preferences.

Hexcode  Cipher Suite Name (OpenSSL)       KeyExch.   Encryption  Bits     Cipher Suite Name (IANA/RFC)                        
-----------------------------------------------------------------------------------------------------------------------------  
SSLv2                                                                                                                          
 -                                                                                                                             
SSLv3                                                                                                                          
 -                                                                                                                             
TLSv1                                                                                                                          
 -                                                                                                                             
TLSv1.1                                                                                                                        
 -                                                                                                                             
TLSv1.2 (server order)                                                                                                         
 xc030   ECDHE-RSA-AES256-GCM-SHA384       ECDH 521   AESGCM      256      TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384               
 xc02f   ECDHE-RSA-AES128-GCM-SHA256       ECDH 521   AESGCM      128      TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256               
 xc028   ECDHE-RSA-AES256-SHA384           ECDH 521   AES         256      TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384               
 xc014   ECDHE-RSA-AES256-SHA              ECDH 521   AES         256      TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA                  
 xcca8   ECDHE-RSA-CHACHA20-POLY1305       ECDH 521   ChaCha20    256      TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256         
TLSv1.3 (server order)                                        
 x1302   TLS_AES_256_GCM_SHA384            ECDH 256   AESGCM      256      TLS_AES_256_GCM_SHA384                              
 x1303   TLS_CHACHA20_POLY1305_SHA256      ECDH 256   ChaCha20    256      TLS_CHACHA20_POLY1305_SHA256                        
 x1301   TLS_AES_128_GCM_SHA256            ECDH 256   AESGCM      128      TLS_AES_128_GCM_SHA256

Even on this test we see the weak cipher suites (ECDHE-RSA-AES256-SHA384 and ECDHE-RSA-AES256-SHA).

Great, now you are able to fully reproduce the issue locally and test as many times as you want until you have the perfect configuration. It’s time to do the actual fix.

Open the nginx.conf file one more time, locate the line that starts with ssl_ciphers and append :!ECDHE-RSA-AES256-SHA384:!ECDHE-RSA-AES256-SHA to the end (note the leading colon that separates it from the existing list), i.e.:

 server {
        listen 443 ssl;
        server_name www.alevsk.com;
        ssl_certificate /etc/nginx/public.pem;
        ssl_certificate_key /etc/nginx/private.pem;
        ssl_protocols TLSv1.3 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ecdh_curve secp521r1:secp384r1:prime256v1;
        ssl_ciphers EECDH+AESGCM:EECDH+AES256:EECDH+CHACHA20:!ECDHE-RSA-AES256-SHA384:!ECDHE-RSA-AES256-SHA;
        ssl_session_cache shared:TLS:2m;
        ssl_buffer_size 4k;
        add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains; preload' always;
    }
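Before redeploying, you can preview which suites that cipher string actually expands to. Assuming the openssl CLI is available, something like this will print them (the exact list depends on your OpenSSL version):

openssl ciphers -v 'EECDH+AESGCM:EECDH+AES256:EECDH+CHACHA20:!ECDHE-RSA-AES256-SHA384:!ECDHE-RSA-AES256-SHA'

ECDHE-RSA-AES256-SHA384 and ECDHE-RSA-AES256-SHA should no longer appear in the output.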

If you run the Nginx container with the new configuration and then run the drwetter/testssl.sh test again, this time you will not see any weak cipher suites anymore:

TLSv1.2 (server order)                                                                                                         
 xc030   ECDHE-RSA-AES256-GCM-SHA384       ECDH 521   AESGCM      256      TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384              
 xc02f   ECDHE-RSA-AES128-GCM-SHA256       ECDH 521   AESGCM      128      TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256              
 xcca8   ECDHE-RSA-CHACHA20-POLY1305       ECDH 521   ChaCha20    256      TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256        
TLSv1.3 (server order)          
 x1302   TLS_AES_256_GCM_SHA384            ECDH 256   AESGCM      256      TLS_AES_256_GCM_SHA384                              
 x1303   TLS_CHACHA20_POLY1305_SHA256      ECDH 256   ChaCha20    256      TLS_CHACHA20_POLY1305_SHA256                        
 x1301   TLS_AES_128_GCM_SHA256            ECDH 256   AESGCM      128      TLS_AES_128_GCM_SHA256

Conclusion

Congratulations, you just fixed your first security engineering issue and can now push the fix to production. In general, when it comes to fixing any kind of problem in tech, it is better to start by reproducing the issue locally and work on a fix from there (this is debatable, of course, if you are facing an issue that only happens in a specific environment). SSL/TLS and cipher suites are one of those technologies you have to know by heart, or at least understand very well, if you want to work in application security. And not only that: once you understand them, they will completely change the way you approach problems and debug security applications.

Happy hacking.

Compilation of open-source security tools & platforms for your Startup

This compilation of open-source tools aims to provide resources you can use for some of the steps of the secure development life cycle of your organization, i.e.:

  1. Security Training
  2. Security Architecture Review
  3. Security Requirements
  4. Threat Modeling
  5. Static Analysis
  6. OpenSource Analysis
  7. Dynamic Analysis
  8. Penetration Testing

If you think I should add a new tool to the list you can open a Github issue or send a PR directly.

User management

Secret management

IDS, IPS, Firewalls and Host/Network monitoring

Data visualization

Web Application Firewall

Object Storage

VPN

Security training platforms

Static analysis tools

Dynamic analysis tools

Misc

Stop passing secrets via environment variables to your application

docker inspect command showing container secrets

Environment variables are great for configuring and changing the behavior of your applications. However, there’s a downside: if someone uses the `docker inspect` command, your precious secrets will be revealed there. Because of that you should never pass any sensitive data to your container using environment variables (the `-e` flag). I’ll show you an example.

Suppose you have a simple Python application (download the source code of the app here: https://github.com/Alevsk/docker-containers-env-vars-security ) that returns the HMAC signature for a provided message using a configured secret. The code looks like this:

import binascii
import hashlib
import hmac
import os
from hashlib import pbkdf2_hmac

from flask import Flask, request

app_secret = os.environ.get('APP_SECRET')
if app_secret is None:
    app_secret = ""

# derive key based on configured APP_SECRET
salt = binascii.unhexlify('aaef2d3f4d77ac66e9c5a6c3d8f921d1')
secret = app_secret.encode("utf8")
key = pbkdf2_hmac("sha256", secret, salt, 4096, 32)

app = Flask(__name__)

@app.route("/")
def hello():
    message = request.args.get('message')
    if message is None:
        return "Give me a message and I'll sign it for you"
    else:
        h = hmac.new(key, message.encode(), hashlib.sha256)
        return "<b>Original message:</b> {}<br/><b>Signature:</b> {}".format(message, h.hexdigest())

if __name__ == "__main__":
    app.run()

Pretty straightforward. Then you build the Docker image with:

docker build -t alevsk/app-env-vars:latest .

And run the application on a container with:

docker run --rm --name hmac-signature-service -p 5000:5000 -e APP_SECRET=mysecret alevsk/app-env-vars:latest

Test that the endpoint works by running a `curl` command:

curl http://localhost:5000/?message=eaeae

<b>Original message:</b> eaeae<br/><b>Signature:</b> cce4625d3d586470bc84ac088b6e2ae24c012944832d54ab42a999de94252849%

Perfect, everything works as intended, however if you inspect the running container the content of `APP_SECRET` is leaked.

docker inspect hmac-signature-service
    ..
    ...
    "Env": [
        "APP_SECRET=mysecret",
        "PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
        "LANG=C.UTF-8",
        "GPG_KEY=E3FF2839C048B25C084DEBE9B26995E310250568",
        "PYTHON_VERSION=3.9.9",
        "PYTHON_PIP_VERSION=21.2.4",
        "PYTHON_SETUPTOOLS_VERSION=57.5.0",
        "PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/3cb8888cc2869620f57d5d2da64da38f516078c7/public/get-pip.py",
        "PYTHON_GET_PIP_SHA256=c518250e91a70d7b20cceb15272209a4ded2a0c263ae5776f129e0d9b5674309"
    ],
    ...
    ..

Additionally, you can get the `process id` of the app inside the container (`1869799` in this case) and then look at the content of the `/proc/[pid]/environ` file and your application secret will be there too.

docker inspect hmac-signature-service
       ...
       ..
       .
       "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 1869799,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2021-12-14T08:40:24.338541774Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },

pstree -sg 1869799

systemd(1)───containerd-shim(1869776)───gunicorn(1869799)─┬─gunicorn(1869799)
                                                          ├─gunicorn(1869799)
                                                          ├─gunicorn(1869799)
                                                          └─gunicorn(1869799)

sudo cat /proc/1869799/environ

PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binHOSTNAME=6a9110ff90e1APP_SECRET=mysecretLANG=C.UTF-8GPG_KEY=E3FF2839C048B25C084DEBE9B26995E310250568PYTHON_VERSION=3.9.9PYTHON_PIP_VERSION=21.2.4PYTHON_SETUPTOOLS_VERSION=57.5.0PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/3cb8888cc2869620f57d5d2da64da38f516078c7/public/get-pip.pyPYTHON_GET_PIP_SHA256=c518250e91a70d7b20cceb15272209a4ded2a0c263ae5776f129e0d9b5674309HOME=/root%

Mount secret file to the container and read from there instead

You can fix that with a small change to the application source code, namely how the application reads the secret: this time, instead of reading from the `APP_SECRET` environment variable, the app will read from a file located at `/tmp/app/secret`.

secret_path = "/tmp/app/secret"
app_secret = open(secret_path).readline().rstrip() if os.path.exists(secret_path) else ""

Build the docker image and run the container mounting the secret file.

docker build -t alevsk/app-env-vars:latest .
docker run --rm --name hmac-signature-service -p 5000:5000 -v $(pwd)/secret:/tmp/app/secret alevsk/app-env-vars:latest

The secret file looks like this:

mysecret

Try `curl` again:

curl http://localhost:5000/?message=eaeae

<b>Original message:</b> eaeae<br/><b>Signature:</b> cce4625d3d586470bc84ac088b6e2ae24c012944832d54ab42a999de94252849%

The generated signature looks good, and this time if you inspect the container you will not see the secret used by the application.

docker inspect hmac-signature-service
    ..
    ...
    "Env": [
        "PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
        "LANG=C.UTF-8",
        "GPG_KEY=E3FF2839C048B25C084DEBE9B26995E310250568",
        "PYTHON_VERSION=3.9.9",
        "PYTHON_PIP_VERSION=21.2.4",
        "PYTHON_SETUPTOOLS_VERSION=57.5.0",
        "PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/3cb8888cc2869620f57d5d2da64da38f516078c7/public/get-pip.py",
        "PYTHON_GET_PIP_SHA256=c518250e91a70d7b20cceb15272209a4ded2a0c263ae5776f129e0d9b5674309"
    ],
    ...
    ..

Also, if you exec into the container by running `docker exec -it hmac-signature-service sh` the `APP_SECRET` environment variable is not there, nor in the `/proc/1/environ` file or in the `printenv` command output.

cat /proc/1/environ

PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binHOSTNAME=afe80d4cafcaLANG=C.UTF-8GPG_KEY=E3FF2839C048B25C084DEBE9B26995E310250568PYTHON_VERSION=3.9.9PYTHON_PIP_VERSION=21.2.4PYTHON_SETUPTOOLS_VERSION=57.5.0PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/3cb8888cc2869620f57d5d2da64da38f516078c7/public/get-pip.pyPYTHON_GET_PIP_SHA256=c518250e91a70d7b20cceb15272209a4ded2a0c263ae5776f129e0d9b5674309HOME=/root

printenv

HOSTNAME=afe80d4cafca
PYTHON_PIP_VERSION=21.2.4
SHLVL=1
HOME=/root
GPG_KEY=E3FF2839C048B25C084DEBE9B26995E310250568
PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/3cb8888cc2869620f57d5d2da64da38f516078c7/public/get-pip.py
TERM=xterm
PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
LANG=C.UTF-8
PYTHON_VERSION=3.9.9
PYTHON_SETUPTOOLS_VERSION=57.5.0
PWD=/app
PYTHON_GET_PIP_SHA256=c518250e91a70d7b20cceb15272209a4ded2a0c263ae5776f129e0d9b5674309

What if I cannot change the source code of my application?

In case you don’t have access to or cannot change the source code of the application, not all is lost. This time, instead of passing the `APP_SECRET` environment variable via the docker `-e` flag, you will mount a `secret` file directly into the container and then modify the container entrypoint to source that secret file.

The `secret` file will contain something like this:

export APP_SECRET="mysecret"

Launch the container like this:

docker run --rm --name hmac-signature-service -p 5000:5000 -v $(pwd)/secret:/tmp/app/secret --entrypoint="sh" alevsk/app-env-vars:latest -c "source /tmp/app/secret && gunicorn -w 4 -b 0.0.0.0:5000 main:app"

This will prevent the `APP_SECRET` environment variable from being displayed if someone runs the `docker inspect` command.

docker inspect hmac-signature-service
    ..
    ...
    "Env": [
        "PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
        "LANG=C.UTF-8",
        "GPG_KEY=E3FF2839C048B25C084DEBE9B26995E310250568",
        "PYTHON_VERSION=3.9.9",
        "PYTHON_PIP_VERSION=21.2.4",
        "PYTHON_SETUPTOOLS_VERSION=57.5.0",
        "PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/3cb8888cc2869620f57d5d2da64da38f516078c7/public/get-pip.py",
        "PYTHON_GET_PIP_SHA256=c518250e91a70d7b20cceb15272209a4ded2a0c263ae5776f129e0d9b5674309"
    ],
    ...
    ..

However, `APP_SECRET` will still be visible in `/proc/[pid]/environ` for the gunicorn processes that inherited it (reading that file requires being inside the container or having root privileges on the machine running the container).
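To see it yourself, find the PID of one of the gunicorn processes (for example, get the container’s PID with docker inspect and list its children with pstree -p), then read that process’s environ file from the host. Something along these lines, with <gunicorn-pid> replaced by the PID you found, should print APP_SECRET=mysecret:

sudo cat /proc/<gunicorn-pid>/environ | tr '\0' '\n' | grep APP_SECRET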

Takeaway

Environment variables are great, but the risk of leaking secrets is just not worth it. If you give somebody access to the docker command on your system, that person can pretty much see what’s running inside the container by running docker inspect (having Docker access is equivalent to having root access on the system anyway). For this reason it’s preferable that applications read configuration and secrets directly from files and leverage the OS file system permissions.
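As a small additional hardening step along those lines, you can tighten the permissions on the secret file and mount it read-only into the container; a sketch of what that could look like with the example app from this post:

chmod 600 secret
docker run --rm --name hmac-signature-service -p 5000:5000 -v $(pwd)/secret:/tmp/app/secret:ro alevsk/app-env-vars:latest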

Download the source code of the app here: https://github.com/Alevsk/docker-containers-env-vars-security

Happy hacking

Just enough cryptography for better securing your apps

I’m not a cryptographer myself, but I have always admired their work because they literally make the Internet a better place by creating the technology that protects our right to privacy and security, plus I enjoy playing basic crypto CTF challenges. At my current job I’m a weird mixture of software developer and information security guy (finally, the best of both worlds), which means I work a lot on security and crypto related matters, and I’m also very fortunate to work very closely with a real cryptographer. So a couple of months ago we were talking about security and I asked him if he could share some resources about cryptography aimed at software engineers, meaning people without a heavy background in mathematics. This is what I learned.

If you are a software engineer curious about information security, chances are you have crossed paths with a task that involves adding some kind of security mechanism to protect data in your application. My friend explained to me that, in practice, cryptography is about choosing the right tool for the job, and that as a security software engineer the most common tasks you will face are:

  • Encrypt a data blob or data stream
  • Exchange a secret key with a peer
  • Verify that some data blob or data stream is not modified
  • Verify that some data blob or data stream has been produced by someone specific.
  • Generate a secret key from another secret key
  • Generate a secret key from a (low-entropy) value – e.g. password

There are many cryptographic mechanisms out there that will make our lives easier when it comes to software engineering, and we need to learn how to pick the right tool for the job.

I’m not saying you should completely ignore the theory and jump directly into practice. Theory is important and you should learn it, or at least be aware of the different types of cryptographic primitives; the most important classes/types are:

  • Pseudo-Random Permutation (PRP) – e.g. AES
  • Pseudo-Random Function (PRF) – e.g. ChaCha
  • Random Oracle (RO) – e.g. SHA-256

Modern cryptographic algorithms usually take a theory-based approach to achieving their goals and proving their security: they reduce the security of the scheme to the security of the primitives they use. It’s very common to read things like:

The scheme X achieves the security goals A, B and C if the underlying primitive Y is in fact a K.

Primitives usually rest on an assumption in the form of a mathematical proof or a hard problem, e.g. the integer factorization problem for the RSA asymmetric primitive or the discrete logarithm problem for the Diffie-Hellman key exchange. Therefore, if you want to break a cryptographic scheme you first need to break the assumptions used by its primitives; if no one can do that, then it’s safe to assume the cryptographic scheme is secure.

My point here is that you need to understand the goals you want to achieve first, what your requirement is; that’s the only way you can pick the right cryptographic scheme, based on the primitives that will solve your problem.

Key Derivation

Let’s say you have an application that calculates the HMAC-SHA256 signature of a message using the password provided by a user as the key:
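A minimal sketch of what that naive approach could look like (the signWithPassword helper is illustrative, not from the original code; the point is that the raw password is used directly as the HMAC key):

package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// signWithPassword uses the raw user-supplied password directly as the
// HMAC key, with no key derivation step in between.
func signWithPassword(message, password []byte) string {
	h := hmac.New(sha256.New, password)
	h.Write(message)
	return base64.StdEncoding.EncodeToString(h.Sum(nil))
}

func main() {
	fmt.Println(signWithPassword([]byte("hello world"), []byte("hunter2")))
}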

This works, but there’s a problem with this approach: calculating HMAC-SHA256 signatures is trivial, so with the help of a good dictionary an attacker can easily brute force the user password, and if they succeed in obtaining the original secret they can impersonate the user in your application.

Therefore, in order to make the attacker’s job more expensive in terms of time, computation and memory, the recommended approach is to use a key derivation function (KDF) or a password-based key derivation function (PBKDF) when deriving a key from a password.

When I learned this concept from my friend it was mind-blowing. In general you have to distinguish between deriving a secret key from a high-entropy source, like a cryptographic key, and deriving one from a low-entropy source, like a password. PBKDF usage is about trade-offs: choose parameters such that the PBKDF is relatively cheap to compute in your scenario but expensive for an attacker trying to brute-force the secret password.

package main

import (
	"crypto/sha1"
	"encoding/base64"
	"fmt"

	"golang.org/x/crypto/pbkdf2"
)

func deriveKey(key []byte, salt []byte) string {
	derivedKey := pbkdf2.Key(key, salt, 4096, 32, sha1.New)
	return base64.StdEncoding.EncodeToString(derivedKey)
}

func main() {
	message := []byte("hello world")
	key := []byte("super secret key")
	fmt.Println(string(key), deriveKey(key, message))
	// super secret key UJS9n/48gSEHyVK8UZPcC6vKGpsyI6mNrWexmvdtCB4=
}

Data Integrity

Preserving data integrity is crucial when working with information. One way to get it is to encrypt the data with an authenticated scheme, but sometimes you don’t want to do that because encrypting and decrypting data is an expensive operation and you only need integrity; in that case the most straightforward solution is HMAC with a RO, like SHA-256.

package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

func computeHmac256(message []byte, key []byte) string {
	h := hmac.New(sha256.New, key)
	h.Write(message)
	return base64.StdEncoding.EncodeToString(h.Sum(nil))
}

func main() {
	message := []byte("hello world")
	key := []byte("super secret key")
	messageHmac := computeHmac256(message, key)
	fmt.Println(string(message), messageHmac)
	// hello world yFgx2zBmFCpq9N6JuGAMRnTBN5cUwTkHBtqcAyGR2bw=
}

Encryption

Symmetric cryptographic schemes are better than asymmetric schemes for encrypting a data blob or data stream because of their performance advantage, and the recommendation is to use Authenticated Encryption (AE) or Authenticated Encryption with Associated Data (AEAD). There are two main AEAD schemes used in practice: AES-GCM and ChaCha20-Poly1305. Both belong to the same class of cryptographic objects: AEAD.

package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"encoding/base64"
	"fmt"
	"io"
)

func encryptAES_GCM(key []byte, message []byte) string {
	block, _ := aes.NewCipher(key)
	// 12-byte random nonce; note that this example does not return it, but in a
	// real application you must store it (e.g. prepend it to the ciphertext) to decrypt later
	nonce := make([]byte, 12)
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		panic(err.Error())
	}
	aesgcm, err := cipher.NewGCM(block)
	if err != nil {
		panic(err.Error())
	}
	ciphertext := aesgcm.Seal(nil, nonce, message, nil)
	return base64.StdEncoding.EncodeToString(ciphertext)
}

func main() {
	message := []byte("hello world")
	key := []byte("super secret key")
	fmt.Println(string(message), encryptAES_GCM(key, message))
	// hello world gBoJuHdAm5PjNGFdr+B/8Eq58IFZKcrzz6JQ
}

Github example: https://gist.github.com/Alevsk/0c296f230279bd399a244d4f7d1d7b84

Happy hacking 🙂

Build your own private cloud at home with a Raspberry Pi + Minio

Early this year I got one of those widescreen 5K monitors so I could work from home. The display is great, but the sad thing is it only comes with 2 USB ports. I have a wired mouse and keyboard, so whenever I wanted to connect an external hard drive for copying and backing up files it was always a pain in the neck.

I remembered I have an old Raspberry Pi 2 I brought with me from México, so last weekend I decided to work on a small personal project to solve this issue once and for all. I finished it and it’s working very well, so I thought I’d write a blog post about it so more people can build their own private cloud at home too.

Install Raspbian

The first thing was to install a fresh version of Raspbian on the Raspberry Pi. I got it from https://www.raspberrypi.org/downloads/raspbian/; I wanted something minimal so I picked the Raspbian Buster Lite image. This version of Raspbian doesn’t come with a graphical interface, but that’s fine because SSH is all we need.

Insert the SD card into your machine (I’m using a MacBook Pro so I have to use an adapter). Once the card is in, you can verify it with the df command; tip: you can easily identify your SD card by the size reported by df -h.

df -h

Filesystem      Size   Used  Avail Capacity iused      ifree %iused  Mounted on
/dev/disk1s5   466Gi   10Gi  246Gi     5%  487549 4881965331    0%   /
devfs          338Ki  338Ki    0Bi   100%    1170          0  100%   /dev
...
..
/dev/disk2s1   <------------- my SD card

Before copying the image you first need to unmount the device using sudo umount /dev/disk2s1; after that you can use the dd command.

sudo dd bs=1m if=./2020-02-13-raspbian-buster-lite.img of=/dev/disk2s1

Optionally you can do this whole process in a friendlier way by installing the Raspberry Pi Imager tool https://www.raspberrypi.org/downloads/: insert your SD card, choose the OS, choose the SD card and then click the write button.

Once you have your fresh version of Raspbian installed it’s time to verify that the Raspberry Pi is working; the easiest way to do that is to connect a monitor and keyboard to it, so that’s what I did.

When you connect the Raspberry Pi to power, the green LED should start flashing; if that doesn’t happen it’s probably a sign of a corrupted EEPROM and you should look at the Recovery section of https://www.raspberrypi.org/downloads/.

Access the Raspberry Pi remotely

Alright, if you got to this point it means your Raspberry Pi is fine. The next step is to connect it to your network; I connected mine to my switch using an ethernet cable. Before we can ssh into the Raspberry Pi we first need its IP. There are multiple ways to find the IP address assigned to it; I used nmap https://nmap.org/ to quickly scan my local network for new devices.

nmap -sP 192.168.86.0/24

Starting Nmap 7.80 ( https://nmap.org ) at 2020-03-29 19:55 PDT
Nmap scan report for testwifi.here (192.168.86.1)
...
..
Nmap scan report for raspberrypi (192.168.86.84)
Host is up (0.0082s latency).
...
..
Nmap done: 256 IP addresses (10 hosts up) scanned in 2.55 seconds

OK, from now on I’m going to refer to the Raspberry Pi as nstorage (network storage). On my local machine I added a new entry to /etc/hosts with this information.

# Minio running in raspberry pi in home network
192.168.86.84    nstorage
192.168.86.84    raspberry

I also added a new entry on ~/.ssh/config so it is easier to connect via ssh.

Host nstorage
	User pi
	Hostname nstorage
	Port 22
	ServerAliveInterval 120
	ServerAliveCountMax 30

You can type on your terminal ssh nstorage, and login using the default credentials: pi / raspberry.

ssh nstorage

Linux raspberrypi 4.19.97-v7+ #1294 SMP Thu Jan 30 13:15:58 GMT 2020 armv7l

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Mon Mar 30 03:27:49 2020 from 192.168.86.64
pi@raspberrypi:~ $

First thing you should do is change the default password using the passwd command http://man7.org/linux/man-pages/man1/passwd.1.html.

One thing I always like to do is add the public ssh key of my machine (my MacBook Pro) to the list of authorized_keys on the remote server (nstorage). You can do this by copying your public key (cat ~/.ssh/id_rsa.pub | pbcopy) and then, on nstorage, appending the key to the end of the /home/pi/.ssh/authorized_keys file (create the file if it doesn’t exist).

pi@raspberrypi:~/.ssh $ cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCvxqCsC2RWVfWfix/KT1R8eZ9zN5SXoZ8xV8eCsk47AZUkZKBdCLxp0arhS2/+WpjRAFuR4+XgmnWlu/rQYzWGaqv/sm5420zaF6fpOaeFXEuLGVP7Nb4e1oPR1tNbzZ7OLJs1FVZIk8rBeTfLh2+UMU8Lut+rKtd9FbW4LdTimscg8ufeFZ1bKWTPih4+o3kYEdSFpMz0ntKDqKA7g3Kvq6PbhUxcICA/KrJbjxTjuOelfqsfTz7xrJW/sII5QETTqL93ny7DlPdVdM2Qw6C/1NZ1hV7ZgpihFlD+XKhdqdugG9DgjzgKvdNx63idswCRJKmdxHZN+oM33+bASHMT [email protected]

That way next time you ssh into nstorage (the raspberry) the login process will be automatic.
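If ssh-copy-id is available on your machine, it can do the same thing in one step (this relies on the nstorage entry added to ~/.ssh/config above):

ssh-copy-id nstorage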

Install Minio

You are on a fresh Raspbian system; the first thing you should do is update the existing software.

sudo apt-get update
sudo apt-get upgrade

After that, let’s download the Minio server and the Minio client, make the binaries executable, and create symbolic links for both.

wget https://dl.minio.io/server/minio/release/linux-arm/minio
wget https://dl.minio.io/client/mc/release/linux-arm/mc
chmod +x minio mc
sudo ln -s /home/pi/minio /usr/bin/minio
sudo ln -s /home/pi/mc /usr/bin/mc

At this point you can start a simple minio server with:

pi@raspberrypi:~ $ mkdir ~/data
pi@raspberrypi:~ $ minio server ~/data
Endpoint:  https://192.168.86.84:9000  https://127.0.0.1:9000
AccessKey: minioadmin
SecretKey: minioadmin

Browser Access:
   https://192.168.86.84:9000  https://127.0.0.1:9000

Command-line Access: https://docs.min.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio https://192.168.86.84:9000 minioadmin minioadmin

Object API (Amazon S3 compatible):
   Go:         https://docs.min.io/docs/golang-client-quickstart-guide
   Java:       https://docs.min.io/docs/java-client-quickstart-guide
   Python:     https://docs.min.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.min.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.min.io/docs/dotnet-client-quickstart-guide

Detected default credentials 'minioadmin:minioadmin', please change the credentials immediately using 'MINIO_ACCESS_KEY' and 'MINIO_SECRET_KEY'

On your local machine go to http://nstorage:9000/minio and you will see the following screen.

We are almost there: you have a Minio server running on your Raspberry Pi and you can already start uploading files and creating buckets if you want, but first let’s add some security.

Securing your Minio

Right now all the traffic between you and nstorage (your Minio server) is unencrypted; let’s fix that quickly. I used mkcert https://github.com/FiloSottile/mkcert by Filippo Valsorda to quickly generate certificates signed by a custom certificate authority. It sounds scary but it’s actually quite simple.

On the Raspberry Pi we are going to create the following folders to hold the certificates (-p creates the parent directories if they don’t exist yet).

mkdir -p ~/.minio/certs/CAs
mkdir -p ~/.mc/certs/CAs

On your local machine we generate and push the certificates to the Raspberry Pi. Don’t forget to also push the public key of your local certificate authority created by mkcert, found under /Users/$USER/Library/Application Support/mkcert/rootCA.pem.

$ mkcert nstorage
Using the local CA at "/Users/alevsk/Library/Application Support/mkcert" ✨

Created a new certificate valid for the following names 📜
 - "nstorage"

The certificate is at "./nstorage.pem" and the key at "./nstorage-key.pem" ✅

$ ls nstorage*
nstorage-key.pem nstorage.pem
$ scp ./nstorage* pi@nstorage:~/.minio/certs
$ scp ./rootCA.pem pi@nstorage:~/.minio/certs/CAs
$ scp ./rootCA.pem pi@nstorage:~/.mc/certs/CAs

That’s it, you now have a secure connection to your Minio; if you go to your browser you can use HTTPS this time.

The nstorage certificate is valid and trusted by your system because it was generated by your local certificate authority; every device that wants to access this server needs to trust the CA as well, otherwise it will get a trust error.

Mount external drive

Alright, so far you have a secure Minio running on the Raspberry Pi. In my case I used a 16GB SD card, which was not enough to store all my data, and the whole point was to access my external drive files remotely, so let’s do that now. But first, instead of starting Minio manually, let’s create a bash script and change the default credentials.

Create a new file using vim or your editor of choice: vim start.sh

#!/bin/bash

export MINIO_ACCESS_KEY=SuperSecretAccessKey
export MINIO_SECRET_KEY=SuperSecretSecretKey
export MINIO_DOMAIN=nstorage
export MINIO_DISK_USAGE_CRAWL=off

minio server ~/data

Save the above lines and then give execution permissions to the script: chmod +x start.sh
Now you can start your Minio running ./start.sh

pi@raspberrypi:~ $ ./start.sh
Endpoint:  https://192.168.86.84:9000  https://127.0.0.1:9000
AccessKey: SuperSecretAccessKey
SecretKey: SuperSecretSecretKey

Browser Access:
   https://192.168.86.84:9000  https://127.0.0.1:9000

Command-line Access: https://docs.min.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio https://192.168.86.84:9000 SuperSecretAccessKey SuperSecretSecretKey

Object API (Amazon S3 compatible):
   Go:         https://docs.min.io/docs/golang-client-quickstart-guide
   Java:       https://docs.min.io/docs/java-client-quickstart-guide
   Python:     https://docs.min.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.min.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.min.io/docs/dotnet-client-quickstart-guide

Now connect your external hard drive to one of the USB ports. I had some issues while doing this: Raspbian was not listing the device under /dev, so make sure to increase the power available to the USB ports via /boot/config.txt by adding max_usb_current=1 to the end of the file.

pi@raspberrypi:~ $ cat /boot/config.txt
# For more options and information see
# http://rpf.io/configtxt
# Some settings may impact device functionality. See link above for details
...
..
# Increase power available to USB ports
max_usb_current=1

Reboot the Raspberry Pi and plug in your drive again; if everything went right you should be able to see your external drive using fdisk.

$ sudo fdisk -l
Disk /dev/sda: 4.6 TiB, 5000981077504 bytes, 9767541167 sectors
Disk model: Expansion Desk
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 24A09C07-313E-43B6-A811-FAF09DAB962C

Device      Start        End    Sectors  Size Type
/dev/sda1      34     262177     262144  128M Microsoft reserved
/dev/sda2  264192 9767540735 9767276544  4.6T Microsoft basic data

You can mount the device using the mount command https://linux.die.net/man/8/mount.

pi@raspberrypi:~ $ sudo mount -t ntfs /dev/sda2 /home/pi/data
pi@raspberrypi:~ $ ls -la data
total 9032
drwxrwxrwx 1 root root     8192 Mar 30 08:19  .
drwxr-xr-x 9 pi   pi       4096 Mar 30 08:27  ..
drwxrwxrwx 1 root root    65536 Mar 26 22:53  anime
drwxrwxrwx 1 root root    20480 May  5  2019  anime_movies
drwxrwxrwx 1 root root        0 Jan  4  2019  backup
drwxrwxrwx 1 root root     4096 Jan  4  2019  books
drwxrwxrwx 1 root root     4096 Jan  4  2019  dev
drwxrwxrwx 1 root root    16384 Feb 12  2017  documents
drwxrwxrwx 1 root root        0 Feb  6  2017  download
drwxrwxrwx 1 root root    12288 Feb 12  2017  games
drwxrwxrwx 1 root root     4096 Jan  4  2019  images
drwxrwxrwx 1 root root     4096 Feb 10  2017  manga
drwxrwxrwx 1 root root     4096 Mar 29 07:48  .minio.sys
drwxrwxrwx 1 root root    65536 Mar 30 01:41  movies
drwxrwxrwx 1 root root        0 Jan  4  2019  music
drwxrwxrwx 1 root root        0 Feb  6  2017  pentest
drwxrwxrwx 1 root root    12288 Jun  2  2019  series
drwxrwxrwx 1 root root     4096 Jun  2  2019  software
drwxrwxrwx 1 root root        0 Jan 25 20:51  .Trashes
drwxrwxrwx 1 root root        0 Jun 21  2019  videos
pi@raspberrypi:~ $

Restart your minio server and this time when you go to the browser you will see all your files there.

You can list all the files and buckets using the minio client (mc) from your local machine or using the mc binary inside the nstorage raspberry.

$ mc config host add nstorage https://nstorage:9000 SuperSecretAccessKey SuperSecretSecretKey
$ mc ls nstorage

[2020-03-26 15:53:09 PDT]      0B anime/
[2019-05-04 18:25:59 PDT]      0B anime_movies/
[2019-01-03 23:00:08 PST]      0B backup/
[2019-01-03 23:04:29 PST]      0B books/
[2019-01-03 23:48:04 PST]      0B dev/
[2017-02-11 17:09:28 PST]      0B documents/
[2017-02-05 16:45:21 PST]      0B download/
[2017-02-11 16:03:31 PST]      0B games/
[2019-01-03 23:06:48 PST]      0B images/
[2017-02-10 11:50:31 PST]      0B manga/
[2020-03-29 17:41:41 PDT]      0B movies/
[2019-01-03 22:48:15 PST]      0B music/
[2017-02-05 22:14:30 PST]      0B pentest/
[2019-06-02 14:33:34 PDT]      0B series/
[2019-06-01 21:29:46 PDT]      0B software/
[2019-06-20 20:20:56 PDT]      0B videos/

You can download every file you want, upload files and also stream media. Go to your Minio browser and select any video you like, click on the “3 dots” icon on the right and click the share icon.

Minio will generate a pre-signed URL that you can use on VLC, click on File > Open Network and paste the video URL.

Click the open button and enjoy your videos.
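If you prefer the command line, the Minio client can generate the same kind of pre-signed URL; for example (the bucket and file name here are just placeholders):

mc share download nstorage/movies/some-video.mp4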

Everything is great so far: you are able to access all your files from any device on your network. But if your Raspberry Pi loses power and reboots, you will need to mount the external drive and start the Minio server manually again, so let’s automate that.

Mount the external drive with fstab

On Linux, by default, every drive listed in /etc/fstab is mounted on startup. There are many ways to mount drives, but the recommended way is to use the UUID or PARTUUID instead of the device name.

pi@raspberrypi:~ $ sudo blkid
...
...
...
/dev/sda2: LABEL="Arael" UUID="62F048D0F048AC5B" TYPE="ntfs" PTTYPE="atari" PARTLABEL="Basic data partition" PARTUUID="5206da84-ded1-43b6-abf2-14b5950c4d7c"

Locate the PARTUUID of your own drive, mine was 5206da84-ded1-43b6-abf2-14b5950c4d7c, and then add it at the end of your /etc/fstab file.

$ cat /etc/fstab

proc            /proc           proc    defaults          0       0
PARTUUID=738a4d67-01  /boot           vfat    defaults          0       2
PARTUUID=738a4d67-02  /               ext4    defaults,noatime  0       1
# a swapfile is not a swap partition, no line here
#   use  dphys-swapfile swap[on|off]  for that
PARTUUID=5206da84-ded1-43b6-abf2-14b5950c4d7c  /home/pi/data      ntfs    defaults,errors=remount-ro 0       1

Reboot your Raspberry Pi and verify the drive was mounted automatically under /home/pi/data.
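A quick way to confirm the mount after the reboot is findmnt (or df -h); either should show /dev/sda2 on /home/pi/data:

findmnt /home/pi/data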

Start the Minio server with systemctl

Finally, the last piece of the puzzle is to make Minio start automatically. Again, there are many ways to do this, but in this tutorial we will do it with systemd (systemctl); let’s create a file called minio.service with the following content.

[Unit]
Description=Minio Storage Service
After=network-online.target home-pi-data.mount

[Service]
ExecStart=/home/pi/start.sh
WorkingDirectory=/home/pi
StandardOutput=inherit
StandardError=inherit
Restart=always
User=pi

[Install]
WantedBy=multi-user.target

ExecStart points to the start.sh bash script. The After directive tells systemd to wait until the network is online and the /dev/sda2 drive has been mounted by fstab before starting Minio; home-pi-data.mount is the systemd mount unit, which you can find using the systemctl list-units command.

$ systemctl list-units | grep '/home/pi/data' | awk '{ print $1 }'
home-pi-data.mount

Copy the file to the /etc/systemd/system directory.

sudo cp ./minio.service /etc/systemd/system/minio.service

Start Minio as a systemd service using the start command and verify it is running with the status command.

pi@raspberrypi:~ $ sudo systemctl start minio
pi@raspberrypi:~ $ sudo systemctl status minio
● minio.service - Minio Storage Service
   Loaded: loaded (/etc/systemd/system/minio.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2020-03-30 10:12:22 BST; 4s ago
 Main PID: 1453 (start.sh)
    Tasks: 16 (limit: 2200)
   Memory: 156.2M
   CGroup: /system.slice/minio.service
           ├─1453 /bin/bash /home/pi/start.sh
           └─1456 minio server /home/pi/data

Mar 30 10:12:22 raspberrypi systemd[1]: Started Minio Storage Service.

If everything looks fine, enable the service; Minio will start automatically every time your Raspberry Pi boots.

sudo systemctl enable minio

Reboot your Raspberry Pi one last time and verify everything is working as expected. If you can see the Minio browser at https://nstorage:9000/minio without having to do anything, congratulations, you now have your own private cloud at home powered by Minio :).

Happy hacking.