Tag Archives: docker

Weak TLS cipher suites

HTTP and HTTPS are well-known Internet protocols that need no introduction. The other day at work, as part of a daily security scan, one of our servers got flagged for using weak cipher suites during TLS negotiation. In this quick post I’ll explain what a weak cipher suite means and how to fix it.

There are many tools out there to check whether you are following security best practices for SSL/TLS server configuration (supported versions, accepted cipher suites, certificate transparency, expiration, etc.), but two of my favorites are https://www.ssllabs.com/ssltest/analyze.html and drwetter/testssl.sh.

SSLlabs.com is easy to use: you just enter a hostname and the website analyzes every possible TLS configuration and calculates a score for you. The tool will also tell you what you can do to improve that score.

There are several TLS protocol versions: 1.0, 1.1, 1.2 and 1.3. The first two are considered insecure and should not be used, so I will focus on 1.2 and 1.3 only.
In my case SSLlabs.com was complaining that weak cipher suites were supported for TLS 1.2:

The above report flags ECDHE-RSA-AES256-SHA384 and ECDHE-RSA-AES256-SHA as weak cipher suites.

First, let’s clarify a couple of things. According to Wikipedia:

Cipher suite: A set of algorithms that help secure a network connection.

So a weak cipher suite is one built from algorithms with known weaknesses that attackers can use to downgrade connections or do other nefarious things.
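
To make “weak” concrete, here is the anatomy of one of the flagged suites, ECDHE-RSA-AES256-SHA (its IANA name, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, shows up in the testssl.sh output further below):

ECDHE   key exchange: elliptic-curve Diffie-Hellman, ephemeral (provides forward secrecy)
RSA     authentication: the server's RSA certificate
AES256  bulk encryption: AES-256 in CBC mode (not an AEAD mode like GCM)
SHA     integrity: HMAC-SHA-1

The suite is flagged mainly because of the CBC mode and SHA-1 parts, not because of the key exchange.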

Fixing this is very easy and will require changing a line or two on the server configuration, deploying the changes and then testing again using SSLlabs.com.

Of course, the above only applies if you know exactly what you are doing; otherwise it will take you many attempts and you will waste precious time.

Reproduce and fix the issue locally

Suppose you have the same issue I had. Because the issue was reported against an Nginx server, the task now is to reproduce it locally and come up with a fix. With modern technologies like docker containers it’s very easy to run a local Nginx server; the only thing you need is a copy of the nginx.conf. To run a docker container the command will be:

docker run --name mynginx -v $(pwd)/public.pem:/etc/nginx/public.pem -v $(pwd)/private.pem:/etc/nginx/private.pem -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro -p 1337:443 --rm nginx

The above command tells docker to run an Nginx container, binding local port 1337 to container port 443 and mounting the public.pem (public key), private.pem (private key) and nginx.conf (the server configuration) files inside the container.
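
If you don’t have the server’s certificate at hand for local testing, you can generate a throwaway self-signed pair with openssl (a quick sketch; the file names simply match the ones mounted above):

openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout private.pem -out public.pem \
  -subj "/CN=www.alevsk.com"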

The nginx.conf looks like this:

    server {
        listen 443 ssl;
        server_name www.alevsk.com;
        ssl_certificate /etc/nginx/public.pem;
        ssl_certificate_key /etc/nginx/private.pem;
        ssl_protocols TLSv1.3 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ecdh_curve secp521r1:secp384r1:prime256v1;
        ssl_ciphers EECDH+AESGCM:EECDH+AES256:EECDH+CHACHA20;
        ssl_session_cache shared:TLS:2m;
        ssl_buffer_size 4k;
        add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains; preload' always;
    }

You can test that the server is working with a simple curl command:

curl https://localhost:1337/ -v -k
*   Trying 127.0.0.1:1337...
* Connected to localhost (127.0.0.1) port 1337 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
*  CAfile: /etc/ssl/certs/ca-certificates.crt
*  CApath: /etc/ssl/certs
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (OUT), TLS header, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS header, Finished (20):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
....
....
....
> 
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* Mark bundle as not supporting multiuse
< HTTP/1.1 404 Not Found
< Server: nginx
< Date: Sun, 15 May 2022 23:14:23 GMT
< Content-Type: text/html
< Content-Length: 146
< Connection: keep-alive
< Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
< 
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Connection #0 to host localhost left intact

The next step is to reproduce the report from https://www.ssllabs.com/, but that would involve somehow exposing our local Nginx server to the internet, which is time consuming. Fortunately there’s an amazing open source tool that will help you run all these TLS tests locally.

drwetter/testssl.sh is a tool for testing TLS/SSL encryption anywhere, on any port, and the best part is that it runs in a container too. Go to your terminal again and run the following command:

docker run --rm -ti --net=host drwetter/testssl.sh localhost:1337

The above command runs the drwetter/testssl.sh container: the --rm flag automatically deletes the container once it’s done running (keeping your system nice and clean), -ti runs it interactively with a TTY, and --net=host allows the container to use the host’s network namespace (so it can reach localhost:1337).

After a couple of seconds you will see a result similar to the one on the website, something like this:

Testing server’s cipher preferences.

Hexcode  Cipher Suite Name (OpenSSL)       KeyExch.   Encryption  Bits     Cipher Suite Name (IANA/RFC)                        
-----------------------------------------------------------------------------------------------------------------------------  
SSLv2                                                                                                                          
 -                                                                                                                             
SSLv3                                                                                                                          
 -                                                                                                                             
TLSv1                                                                                                                          
 -                                                                                                                             
TLSv1.1                                                                                                                        
 -                                                                                                                             
TLSv1.2 (server order)                                                                                                         
 xc030   ECDHE-RSA-AES256-GCM-SHA384       ECDH 521   AESGCM      256      TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384               
 xc02f   ECDHE-RSA-AES128-GCM-SHA256       ECDH 521   AESGCM      128      TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256               
 xc028   ECDHE-RSA-AES256-SHA384           ECDH 521   AES         256      TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384               
 xc014   ECDHE-RSA-AES256-SHA              ECDH 521   AES         256      TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA                  
 xcca8   ECDHE-RSA-CHACHA20-POLY1305       ECDH 521   ChaCha20    256      TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256         
TLSv1.3 (server order)                                        
 x1302   TLS_AES_256_GCM_SHA384            ECDH 256   AESGCM      256      TLS_AES_256_GCM_SHA384                              
 x1303   TLS_CHACHA20_POLY1305_SHA256      ECDH 256   ChaCha20    256      TLS_CHACHA20_POLY1305_SHA256                        
 x1301   TLS_AES_128_GCM_SHA256            ECDH 256   AESGCM      128      TLS_AES_128_GCM_SHA256

This test also reports the weak cipher suites (ECDHE-RSA-AES256-SHA384 and ECDHE-RSA-AES256-SHA).

Great, now you are able to fully reproduce the issue locally and test as many times as you want until you have the perfect configuration. It’s time to do the actual fix.

Open the nginx.conf file one more time, locate the line that starts with ssl_ciphers, and add !ECDHE-RSA-AES256-SHA384:!ECDHE-RSA-AES256-SHA at the end, i.e.:

 server {
        listen 443 ssl;
        server_name www.alevsk.com;
        ssl_certificate /etc/nginx/public.pem;
        ssl_certificate_key /etc/nginx/private.pem;
        ssl_protocols TLSv1.3 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ecdh_curve secp521r1:secp384r1:prime256v1;
        ssl_ciphers EECDH+AESGCM:EECDH+AES256:EECDH+CHACHA20:!ECDHE-RSA-AES256-SHA384:!ECDHE-RSA-AES256-SHA;
        ssl_session_cache shared:TLS:2m;
        ssl_buffer_size 4k;
        add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains; preload' always;
    }

If you run the Nginx container with the new configuration and then run the drwetter/testssl.sh test again, this time you will not see the weak cipher suites anymore.

TLSv1.2 (server order)                                                                                                         
 xc030   ECDHE-RSA-AES256-GCM-SHA384       ECDH 521   AESGCM      256      TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384              
 xc02f   ECDHE-RSA-AES128-GCM-SHA256       ECDH 521   AESGCM      128      TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256              
 xcca8   ECDHE-RSA-CHACHA20-POLY1305       ECDH 521   ChaCha20    256      TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256        
TLSv1.3 (server order)          
 x1302   TLS_AES_256_GCM_SHA384            ECDH 256   AESGCM      256      TLS_AES_256_GCM_SHA384                              
 x1303   TLS_CHACHA20_POLY1305_SHA256      ECDH 256   ChaCha20    256      TLS_CHACHA20_POLY1305_SHA256                        
 x1301   TLS_AES_128_GCM_SHA256            ECDH 256   AESGCM      128      TLS_AES_128_GCM_SHA256
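
You can also spot-check a single suite with openssl from your terminal; after the fix, trying to negotiate one of the removed ciphers should fail with a handshake error (a quick sketch using the OpenSSL cipher name):

openssl s_client -connect localhost:1337 -tls1_2 -cipher ECDHE-RSA-AES256-SHA </dev/null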

Conclusion

Congratulations, you just fixed your first security engineering issue and can now push the fix to production. In general, when it comes to fixing any kind of problem in tech it is better to start by reproducing the issue locally and work on a fix from there (this is debatable, of course, if you are facing an issue that only happens in a specific environment). SSL/TLS and cipher suites are one of those technologies you have to understand very well if you want to work in application security; and beyond that, once you understand them they will completely change the way you approach problems and debug security applications.

Happy hacking.

Compilation of open-source security tools & platforms for your Startup

This compilation of open-source tools aims to provide resources you can use for some of the steps of the secure development life cycle of your organization, i.e.:

  1. Security Training
  2. Security Architecture Review
  3. Security Requirements
  4. Threat Modeling
  5. Static Analysis
  6. OpenSource Analysis
  7. Dynamic Analysis
  8. Penetration Testing

If you think I should add a new tool to the list you can open a GitHub issue or send a PR directly.

User management

Secret management

IDS, IPS, Firewalls and Host/Network monitoring

Data visualization

Web Application Firewall

Object Storage

VPN

Security training platforms

Static analysis tools

Dynamic analysis tools

Misc

Stop passing secrets via environment variables to your application

docker inspect command showing container secrets

Environment variables are great for configuring and changing the behavior of your applications. However, there’s a downside: if someone runs the `docker inspect` command, your precious secrets will be revealed there. Because of that you should never pass any sensitive data to your container using environment variables (the `-e` flag). I’ll show you an example.

Suppose you have a simple python application (download the source code of the app here: https://github.com/Alevsk/docker-containers-env-vars-security ) that returns the HMAC signature for a provided message using a configured secret. The code looks like this:

import binascii
import hashlib
import hmac
import os
from hashlib import pbkdf2_hmac

from flask import Flask, request

# read the secret from the environment (this is what leaks via docker inspect)
app_secret = os.environ.get('APP_SECRET')
if app_secret is None:
    app_secret = ""

# derive key based on configured APP_SECRET
salt = binascii.unhexlify('aaef2d3f4d77ac66e9c5a6c3d8f921d1')
secret = app_secret.encode("utf8")
key = pbkdf2_hmac("sha256", secret, salt, 4096, 32)

app = Flask(__name__)

@app.route("/")
def hello():
    message = request.args.get('message')
    if message is None:
        return "Give me a message and I'll sign it for you"
    else:
        h = hmac.new(key, message.encode(), hashlib.sha256)
        return "<b>Original message:</b> {}<br/><b>Signature:</b> {}".format(message, h.hexdigest())

if __name__ == "__main__":
    app.run()

Pretty straightforward. Then you build the docker image with:

docker build -t alevsk/app-env-vars:latest .

And run the application on a container with:

docker run --rm --name hmac-signature-service -p 5000:5000 -e APP_SECRET=mysecret alevsk/app-env-vars:latest

Test that the endpoint works by running a `curl` command:

curl http://localhost:5000/?message=eaeae

<b>Original message:</b> eaeae<br/><b>Signature:</b> cce4625d3d586470bc84ac088b6e2ae24c012944832d54ab42a999de94252849%

Perfect, everything works as intended. However, if you inspect the running container, the content of `APP_SECRET` is leaked:

docker inspect hmac-signature-service
    ..
    ...
    "Env": [
        "APP_SECRET=mysecret",
        "PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
        "LANG=C.UTF-8",
        "GPG_KEY=E3FF2839C048B25C084DEBE9B26995E310250568",
        "PYTHON_VERSION=3.9.9",
        "PYTHON_PIP_VERSION=21.2.4",
        "PYTHON_SETUPTOOLS_VERSION=57.5.0",
        "PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/3cb8888cc2869620f57d5d2da64da38f516078c7/public/get-pip.py",
        "PYTHON_GET_PIP_SHA256=c518250e91a70d7b20cceb15272209a4ded2a0c263ae5776f129e0d9b5674309"
    ],
    ...
    ..

Additionally, you can get the `process id` of the app inside the container (`1869799` in this case) and then look at the content of the `/proc/[pid]/environ` file; your application secret will be there too.

docker inspect hmac-signature-service
       ...
       ..
       .
       "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 1869799,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2021-12-14T08:40:24.338541774Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },

pstree -sg 1869799

systemd(1)───containerd-shim(1869776)───gunicorn(1869799)─┬─gunicorn(1869799)
                                                          ├─gunicorn(1869799)
                                                          ├─gunicorn(1869799)
                                                          └─gunicorn(1869799)

sudo cat /proc/1869799/environ

PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binHOSTNAME=6a9110ff90e1APP_SECRET=mysecretLANG=C.UTF-8GPG_KEY=E3FF2839C048B25C084DEBE9B26995E310250568PYTHON_VERSION=3.9.9PYTHON_PIP_VERSION=21.2.4PYTHON_SETUPTOOLS_VERSION=57.5.0PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/3cb8888cc2869620f57d5d2da64da38f516078c7/public/get-pip.pyPYTHON_GET_PIP_SHA256=c518250e91a70d7b20cceb15272209a4ded2a0c263ae5776f129e0d9b5674309HOME=/root%

Mount a secret file into the container and read from there instead

You can fix that with a small change in the application source code, specifically in how the application reads the secret: this time, instead of reading from the `APP_SECRET` environment variable, the app will read from a file located at `/tmp/app/secret`.

# read the secret from a mounted file instead of an environment variable
secret_path = "/tmp/app/secret"
app_secret = open(secret_path).readline().rstrip() if os.path.exists(secret_path) else ""

Build the docker image and run the container mounting the secret file.

docker build -t alevsk/app-env-vars:latest .
docker run --rm --name hmac-signature-service -p 5000:5000 -v $(pwd)/secret:/tmp/app/secret alevsk/app-env-vars:latest

The secret file looks like this:

mysecret

Try `curl` again:

curl http://localhost:5000/?message=eaeae

<b>Original message:</b> eaeae<br/><b>Signature:</b> cce4625d3d586470bc84ac088b6e2ae24c012944832d54ab42a999de94252849%

The generated signature looks good, and if you inspect the container you will not see the secret used by the application:

docker inspect hmac-signature-service
    ..
    ...
    "Env": [
        "PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
        "LANG=C.UTF-8",
        "GPG_KEY=E3FF2839C048B25C084DEBE9B26995E310250568",
        "PYTHON_VERSION=3.9.9",
        "PYTHON_PIP_VERSION=21.2.4",
        "PYTHON_SETUPTOOLS_VERSION=57.5.0",
        "PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/3cb8888cc2869620f57d5d2da64da38f516078c7/public/get-pip.py",
        "PYTHON_GET_PIP_SHA256=c518250e91a70d7b20cceb15272209a4ded2a0c263ae5776f129e0d9b5674309"
    ],
    ...
    ..

Also, if you exec into the container by running `docker exec -it hmac-signature-service sh`, the `APP_SECRET` environment variable is not there, nor is it in the `/proc/1/environ` file or in the `printenv` command output.

cat /proc/1/environ

PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binHOSTNAME=afe80d4cafcaLANG=C.UTF-8GPG_KEY=E3FF2839C048B25C084DEBE9B26995E310250568PYTHON_VERSION=3.9.9PYTHON_PIP_VERSION=21.2.4PYTHON_SETUPTOOLS_VERSION=57.5.0PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/3cb8888cc2869620f57d5d2da64da38f516078c7/public/get-pip.pyPYTHON_GET_PIP_SHA256=c518250e91a70d7b20cceb15272209a4ded2a0c263ae5776f129e0d9b5674309HOME=/root

printenv

HOSTNAME=afe80d4cafca
PYTHON_PIP_VERSION=21.2.4
SHLVL=1
HOME=/root
GPG_KEY=E3FF2839C048B25C084DEBE9B26995E310250568
PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/3cb8888cc2869620f57d5d2da64da38f516078c7/public/get-pip.py
TERM=xterm
PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
LANG=C.UTF-8
PYTHON_VERSION=3.9.9
PYTHON_SETUPTOOLS_VERSION=57.5.0
PWD=/app
PYTHON_GET_PIP_SHA256=c518250e91a70d7b20cceb15272209a4ded2a0c263ae5776f129e0d9b5674309

What if I cannot change the source code of my application?

In case you don’t have access to, or cannot change, the source code of the application, not all is lost. This time, instead of passing the `APP_SECRET` environment variable via docker `-e` flags, you will mount a `secret` file directly into the container and then modify the container entrypoint to source that secret file.

The `secret` file will contain something like this:

export APP_SECRET="mysecret"

Launch the container like this:

docker run --rm --name hmac-signature-service -p 5000:5000 -v $(pwd)/secret:/tmp/app/secret --entrypoint="sh" alevsk/app-env-vars:latest -c "source /tmp/app/secret && gunicorn -w 4 -b 0.0.0.0:5000 main:app"

This will prevent the `APP_SECRET` environment variable from being displayed if someone runs the `docker inspect` command.

docker inspect hmac-signature-service
    ..
    ...
    "Env": [
        "PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
        "LANG=C.UTF-8",
        "GPG_KEY=E3FF2839C048B25C084DEBE9B26995E310250568",
        "PYTHON_VERSION=3.9.9",
        "PYTHON_PIP_VERSION=21.2.4",
        "PYTHON_SETUPTOOLS_VERSION=57.5.0",
        "PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/3cb8888cc2869620f57d5d2da64da38f516078c7/public/get-pip.py",
        "PYTHON_GET_PIP_SHA256=c518250e91a70d7b20cceb15272209a4ded2a0c263ae5776f129e0d9b5674309"
    ],
    ...
    ..

However, `APP_SECRET` will still be visible inside `/proc/[container-parent-pid]/environ` (reading it requires being inside the container or having root privileges on the machine running the container).
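
A quick way to confirm this (a sketch; take the PID from the `docker inspect` output shown earlier):

sudo cat /proc/<container-parent-pid>/environ | tr '\0' '\n' | grep APP_SECRET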

Takeaway

Environment variables are great, but the risk of leaking secrets is just not worth it. If you give docker access to somebody on your system, that person can pretty much see what’s running inside the container by running the docker inspect command (having docker access is equivalent to having root access on the system anyway). For this reason it’s preferable that applications read configurations and secrets directly from files and leverage the OS file system permissions.
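
For example, you can tighten things a bit further by restricting who can read the secret file on the host and by mounting it read-only inside the container (a sketch of the idea, reusing the file names from above):

chmod 400 secret
docker run --rm --name hmac-signature-service -p 5000:5000 \
  -v $(pwd)/secret:/tmp/app/secret:ro alevsk/app-env-vars:latest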

Download the source code of the app here: https://github.com/Alevsk/docker-containers-env-vars-security

Happy hacking

Docker images are just TAR files!

I have been using Mac OS X for development for half a decade now. I love the MacBook Pro design, the operating system, and that everything works out of the box, but I’ve always struggled with the fact that once you get your Mac you “cannot” upgrade its components. That is a problem if you are a distributed systems engineer and the projects you are working on keep growing in complexity (i.e. adding new services); of course you can always rent a big machine in the cloud, but sometimes you just don’t have an Internet connection.

Anyway, this blog post is about how you can transfer docker images between docker registries on different machines. Currently I’m using a 15-inch 2019 MacBook Pro with 16GB of RAM, and when I open a couple of Chrome windows, IntelliJ, and run a couple of services, everything starts to run very slowly. I need more power.

So a couple of weeks ago I got a Lenovo X1 Extreme Gen 2 with 6 cores and 64GB of RAM, powerful enough. These laptops come with Windows preinstalled and you can even play video games on them, but the main goal was to offload work from the MacBook Pro; I was not planning on switching machines right now. I love Linux, I use it every day, but I still think the Linux desktop experience isn’t great yet, so the first thing I did was install PopOS https://pop.system76.com/, a distro based on Ubuntu.

In a previous blog post I explained how to enable SSH, add the public keys of your clients to the authorized_keys file, and create ssh configs on a Linux box; you will need all of that: https://www.alevsk.com/2020/03/build-your-own-private-cloud-at-home-with-a-raspberry-pi-minio/.

Once you have ssh access enabled and docker installed, with a couple of lines of bash you can start transferring docker images between your machines.

Let’s say you have a docker image called minio/minio:edge in your local registry and want to use it on a remote machine (and you don’t want to use the public docker registry for that). First you will need to export the image as a TAR file:

docker save -o /tmp/minio.tar minio/minio:edge

Next, transfer the file to the remote machine using scp (user@remote-host here stands for your own remote machine):

scp /tmp/minio.tar user@remote-host:/tmp/minio.tar

Finally, load the image into the remote docker registry via ssh:

ssh user@remote-host "docker load -i /tmp/minio.tar"
rm -rf /tmp/minio.tar

It’s a simple trick but it’s very useful, and it allowed me to have this dual laptop setup.
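
As a side note, if you don’t need to keep the TAR file around, you can pipe docker save straight into ssh and skip the intermediate file entirely (same user@remote-host placeholder as above):

docker save minio/minio:edge | ssh user@remote-host "docker load"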

You can put all of this in a simple script; I called it dockerc.

#!/bin/bash
set -x

if [ $# -eq 0 ]; then
    echo "Usage: dockerc minio/minio:edge"
    exit 1
fi

echo "Copying container: $1"

IMAGE_TAR_FILE="image.tar"

# fill in the user and host of your remote machine
REMOTE_HOST=""
REMOTE_USER=""
REMOTE_PATH="/tmp/"

LOCAL_PATH="/tmp/"

ABS_REMOTE_IMAGE=$REMOTE_PATH$IMAGE_TAR_FILE
ABS_LOCAL_IMAGE=$LOCAL_PATH$IMAGE_TAR_FILE

# export the image, copy it over, load it remotely and clean up on both ends
docker save -o $ABS_LOCAL_IMAGE $1
scp $ABS_LOCAL_IMAGE $REMOTE_USER@$REMOTE_HOST:$ABS_REMOTE_IMAGE
ssh $REMOTE_USER@$REMOTE_HOST "docker load -i $ABS_REMOTE_IMAGE && rm -rf $ABS_REMOTE_IMAGE"
rm -rf $ABS_LOCAL_IMAGE

And then just run:

./dockerc minio/minio:edge

Bonus

If you wish to run more commands apart from docker load (in my case I’m using kubernetes kind), you just keep adding them after the &&, i.e.:

docker load -i $ABS_REMOTE_IMAGE && kind load docker-image $1 && rm -rf $ABS_REMOTE_IMAGE

Happy Hacking 🙂

#Docker for #hackers and pentesters, running #metasploit from a container

2016 is over, and as the last post of the year I bring you a quick tutorial involving docker and information security. In previous posts I explained that over the last few months I have been working a lot with docker, orchestration, and cloud infrastructure in general (a regular part of a full stack engineer’s job).

Docker is a very powerful tool for developers because it helps us build images with all their dependencies and leaves us free to focus on what really matters: quickly deploying an application (or several) that we know will work.

Under that premise, it’s no surprise that the security community has adopted docker so quickly; docker is a fantastic tool 🙂 and just as it lets us dockerize an application, we can also dockerize security tools and, in general, anything we have in our pentesting arsenal.

People who work or have worked in security, specifically in penetration testing, will agree that one of the most important resources we have is time. A penetration test against an application or system is usually performed during a well-defined window; time is valuable and we can’t waste it installing and configuring tools. Even worse, what happens if the infrastructure we are auditing blocks us? How much time are we going to spend preparing a new node from which to launch attacks and receive shells? For all of the above, docker comes to the rescue 🙂

In this tutorial I will show how to run one of the most popular security tools using docker: metasploit. Specifically, we will use the exploit/multi/handler module to receive meterpreter sessions.

Note: I’m not going to show how to dockerize metasploit itself; we’ll leave that for a future tutorial where we look at how to dockerize applications.

Running metasploit from a docker container

To save yourself the trouble of opening ports on your router and mapping ports to expose your machine to the internet, you can rent a VPS from a provider of your choice. Nowadays it’s very easy to get a VPS and you can have one online in a matter of minutes. I recommend DigitalOcean because their VPSs are cheap and the support is very good; a 10 USD per month node is enough to run a metasploit image. You can get the 5 USD one, but you will have to enable swap, or if you don’t want to spend money you can always take advantage of the Amazon Web Services free tier (roughly one year of free usage of a micro instance).

Whichever provider you choose, the next step is to install docker on your instance. For this tutorial I will do it on ubuntu / debian, but you could install it on the operating system of your choice; there is a list of supported operating systems.

From your instance’s terminal, as root, we are going to run a few commands to install the required tooling such as the gcc/g++ compilers, some libraries, utilities, etc., and finally install docker:

apt-get install build-essential
apt-get install libxslt-dev libxml2-dev zlib1g-dev --yes
apt-get install docker
apt-get install docker.io

The next thing we are going to do is create a directory on our instance. We will use this directory as a persistent volume when we run metasploit from the container, so we can store there all the loot, scripts and, in general, any files the tool generates.

mkdir /root/.msf4

The moment has arrived: in the official docker registry there is an image called remnux/metasploit that contains everything needed to run the tool. Run the following command and docker will download the image and then start the container.

docker run --rm -it -p 443:443 -v ~/.msf4:/root/.msf4 -v /tmp/msf:/tmp/data remnux/metasploit

In the previous docker tutorial I explained what each parameter is for; in short, -p lets us map ports and -v lets us define persistent volumes (mapping a folder of our filesystem to one in the container’s virtual filesystem).

Once the image has been downloaded, the container will be created and all the required dependencies will be installed. We no longer have to install all the gems and resolve conflicts ourselves, and we can go grab a coffee since it’s a fairly automated process 🙂

After a few minutes we have a node ready to receive connections.
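
As a reference, a minimal handler setup inside msfconsole looks something like this (a sketch; the payload, LHOST and LPORT are examples and should match whatever you generate for your target):

use exploit/multi/handler
set PAYLOAD windows/meterpreter/reverse_https
set LHOST 0.0.0.0
set LPORT 443
exploit -j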

If we need more listeners there is no problem, since we can run multiple metasploit containers on different ports and keep our shells organized; and if our server gets banned, we can quickly deploy another one by running those 6 commands, as shown below.
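
For example, a second listener on another port could be launched by mapping a different port on the same image (a sketch, reusing the same volumes):

docker run --rm -it -p 8443:8443 -v ~/.msf4:/root/.msf4 -v /tmp/msf:/tmp/data remnux/metasploit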

As I have mentioned in several articles, docker is a very powerful tool that can be used in many situations beyond software development, as we saw in this tutorial. You can dockerize almost anything; I personally have an image with a set of tools I use every day (fierce, dirb, sqlmap, nmap, enum4linux, hashcat, Responder, etc.).

Cheers and Happy Hacking.