AWS CLI: Amazon Simple Storage Service (Amazon S3)


Now that everything is ‘in the Cloud’, DevOps is not a buzzword anymore and SRE (Site Reliability Engineer) is kind of a cool title (the sysadmin title was not; I know they are not the same, but…). I have decided it is time to get a Cloud certification.
I hold several certs myself: lots from F5 Networks (all of them, actually), CheckPoint, BlueCoat, RSA, and I even got my CISSP (that was a hard one). But I don’t have a cloud-specific cert, even though I have worked in Cloud environments for some years already, so it is time to change that.

I will be sitting for the ‘AWS Certified Solutions Architect – Associate’ exam, and so far my main study source is the study guide:
AWS Certified Solutions Architect Official Study Guide: Associate Exam by Joe Baron and others.
The book is organised in chapters, and the majority of chapters have some exercises.
What I will do is document the exercises for each chapter and, although the book doesn’t require it, I will try to complete the exercises using the AWS CLI.
Why? Well, I like CLIs, but more importantly, nowadays whoever you speak to in IT will tell you they are working on automation (like computers were once used for creating art?).
CLIs are great for automation. Have you ever tried to automate something through a GUI? (Yes, it’s possible, but it’s a pain.)

I also want to keep this post series as a reference, or a kind of cheat sheet, for myself or whoever needs it.
If you are also studying for this cert, I recommend you don’t just copy and paste the solutions; it is better to try it yourself, using the AWS CLI help, and then take a look here if you get stuck ;).
So, without further ado, in this post we are getting into Amazon S3 (Simple Storage Service).
I am running Ubuntu 17.10, but it should be similar for other OSes as long as you have the AWS CLI installed.
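
If you don’t have the CLI set up yet, here is a minimal sketch for Debian/Ubuntu (assuming you already have an IAM access key ID and secret access key at hand):

sudo apt-get install awscli
aws configure

aws configure will prompt you for the access key ID, the secret access key, a default region, and a default output format.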
Have fun!

Exercise 2.1
Create an Amazon Simple Storage Service (Amazon S3) Bucket

Let’s first list the buckets

user@australtech.net:~$ aws s3api list-buckets
{
    "Owner": {
        "DisplayName": "australtech",
        "ID": "aaf0bc9a9022285428404623222e4d5d346236332337d490662f40e5dd762988"
    },
    "Buckets": []
}

Now we create the bucket

user@australtech.net:~$ aws s3api create-bucket --bucket australtechbucket
{
    "Location": "/australtechbucket"
}
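
A quick caveat: the plain create-bucket call above works because my default region is us-east-1. For any other region, S3 requires an explicit location constraint, for example (eu-west-1 here is just an illustration):

aws s3api create-bucket --bucket australtechbucket --region eu-west-1 --create-bucket-configuration LocationConstraint=eu-west-1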

Let’s list the buckets again

user@australtech.net:~$ aws s3api list-buckets
{
    "Owner": {
        "DisplayName": "australtech",
        "ID": "aaf0bc9a9022285428404623222e4d5d346236332337d490662f40e5dd762988"
    },
    "Buckets": [
        {
            "CreationDate": "2019-03-18T14:20:04.000Z",
            "Name": "australtechbucket"
        }
    ]
}

Exercise 2.2
Upload, Make Public, Rename, and Delete Objects in Your Bucket

We are going to create a test file which we will upload to the bucket

user@australtech.net:~$ echo "S3 Test file AustralTech" > s3test.txt
user@australtech.net:~$ cat s3test.txt
S3 Test file AustralTech

Upload an Object


user@australtech.net:~$ aws s3 cp s3test.txt s3://australtechbucket/s3test.txt
upload: ./s3test.txt to s3://australtechbucket/s3test.txt
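
We can verify the upload and inspect the object’s metadata (ContentLength, ETag, content type) with head-object:

aws s3api head-object --bucket australtechbucket --key s3test.txt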

Open the Amazon S3 URL
Let’s check whether we can access the object within the bucket using curl.

We are going to install the xmllint tool, which will help us display the XML output in a nicer way. The command for installing it on a Debian/Ubuntu distro is:

sudo apt-get install libxml2-utils

Now we can run:

user@australtech.net:~$ curl -s https://australtechbucket.s3.amazonaws.com/s3test.txt | xmllint --format -

<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>AccessDenied</Code>
  <Message>Access Denied</Message>
  <RequestId>34FB6AF4C65C04E0</RequestId>
  <HostId>nMnLWQ2kVpZj4+7V42dsaw1yHLvA0SeZRKn0Rp8GZBfZXYJuKkk4FgaXFKQlMUfNIEs/BVCzpLc=</HostId>
</Error>

If you got an AccessDenied error, that is expected: we haven’t set read permissions on the object yet.

Make the Object Public
Let’s make the object readable to everyone:


user@australtech.net:~$ aws s3api put-object-acl --acl public-read --bucket australtechbucket --key s3test.txt
user@australtech.net:~$ curl -s https://australtechbucket.s3.amazonaws.com/s3test.txt
S3 Test file AustralTech

Nice!
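
As a side note, we could have skipped the separate ACL call by setting the ACL at upload time; the s3 transfer commands accept an --acl flag:

aws s3 cp s3test.txt s3://australtechbucket/s3test.txt --acl public-read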

Rename Object


user@australtech.net:~$ aws s3 mv s3://australtechbucket/s3test.txt s3://australtechbucket/s3test2.txt
move: s3://australtechbucket/s3test.txt to s3://australtechbucket/s3test2.txt

Let’s try to access the renamed object

user@australtech.net:~$ curl -s https://australtechbucket.s3.amazonaws.com/s3test2.txt | xmllint --format -

<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>AccessDenied</Code>
  <Message>Access Denied</Message>
  <RequestId>6B6E516A6AEC100E</RequestId>
  <HostId>t9Lcc6D2J/mLesxN8BVzqsabBqGEvtcmS0r8wA2R/r9xqE9+63Wq2J0CUFYoYN0qTA6+enEnKpA=</HostId>
</Error>

The AccessDenied output is not exactly what we expected: we just renamed the object, so we should be able to access it as before. However, AWS handles the move by copying the content to a new object and then removing the old one, and it doesn’t preserve the ACL associated with it. So we need to set the ACL again:


user@australtech.net:~$ aws s3api put-object-acl --acl public-read --bucket australtechbucket --key s3test2.txt
user@australtech.net:~$ curl -s https://australtechbucket.s3.amazonaws.com/s3test2.txt
S3 Test file AustralTech
user@australtech.net:~$
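
To avoid that extra step next time, the ACL can travel with the move itself, since aws s3 mv accepts the same --acl flag as cp:

aws s3 mv s3://australtechbucket/s3test.txt s3://australtechbucket/s3test2.txt --acl public-read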

Delete the Object


user@australtech.net:~$ aws s3api delete-object --bucket australtechbucket --key s3test2.txt
user@australtech.net:~$
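
delete-object prints nothing on success; to double-check, we can list the bucket, which should now come back empty:

aws s3 ls s3://australtechbucket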

Exercise 2.3

Enable Version Control
Enable Versioning


user@australtech.net:~$ aws s3api put-bucket-versioning --bucket australtechbucket --versioning-configuration Status=Enabled
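
put-bucket-versioning is silent on success as well; we can confirm versioning is on with:

aws s3api get-bucket-versioning --bucket australtechbucket

The output should show "Status": "Enabled".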

Create Multiple Versions of an Object

user@australtech.net:~$ echo "blue">foo.txt
user@australtech.net:~$ cat foo.txt
blue
user@australtech.net:~$ aws s3 cp foo.txt s3://australtechbucket
upload: ./foo.txt to s3://australtechbucket/foo.txt
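
Note that a second upload is missing from the transcript: after the first cp, the file content was changed and uploaded again, which is what produces the two versions (sizes 5 and 4) in the listing below. It would have looked something like this (the "red" content is just a guess that matches the 4-byte size):

echo "red" > foo.txt
aws s3 cp foo.txt s3://australtechbucket
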
user@australtech.net:~$ aws s3api list-object-versions --bucket australtechbucket --prefix foo.txt
{
    "Versions": [
        {
            "LastModified": "2019-03-18T15:08:14.000Z",
            "VersionId": "JclCNa2PWH_fqNp3Rz7Grnc3icPZI_xA",
            "ETag": "\"1098e2cb1442f45f8ca2e74e1cd24bd0\"",
            "StorageClass": "STANDARD",
            "Key": "foo.txt",
            "Owner": {
                "DisplayName": "australtech",
                "ID": "aaf0bc9a9022285428404623222e4d5d346236332337d490662f40e5dd762988"
            },
            "IsLatest": true,
            "Size": 4
        },
        {
            "LastModified": "2019-03-18T15:07:26.000Z",
            "VersionId": "T5GAlgeGisskB0VpjxVVXnf0qVubw97w",
            "ETag": "\"daa5960a123ff55e594be19f9ddc940d\"",
            "StorageClass": "STANDARD",
            "Key": "foo.txt",
            "Owner": {
                "DisplayName": "australtech",
                "ID": "aaf0bc9a9022285428404623222e4d5d346236332337d490662f40e5dd762988"
            },
            "IsLatest": false,
            "Size": 5
        }
    ]
}

Exercise 2.4
Delete an Object and Then Restore It
Delete an Object


user@australtech.net:~$ aws s3api delete-object --bucket australtechbucket --key foo.txt
{
    "VersionId": "xxyMS13fDhCuGVI_RgujNI14QeRiT2df",
    "DeleteMarker": true
}

Restore an Object


Trying to restore a deleted object from the AWS CLI is a tricky one. There is a restore-object command, but it is meant for restoring objects archived to Amazon Glacier, not for undeleting versioned objects. If you try this:


user@australtech.net:~$ aws s3api restore-object --bucket australtechbucket --key foo.txt

You will get this output:


An error occurred (NoSuchKey) when calling the RestoreObject operation: The specified key does not exist.

When an object is deleted from a version-enabled bucket, Amazon S3 creates a delete marker associated with the object. When there’s a delete marker, Amazon S3 responds to requests as if the object was deleted (for example, returning a 404 response to a GET request). However, the object is not permanently deleted, because versioning is enabled. To undelete the object, you must delete this delete marker.
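
As a shortcut, instead of scanning the full JSON by eye, the CLI’s built-in JMESPath --query option can extract the delete marker’s version ID directly:

aws s3api list-object-versions --bucket australtechbucket --prefix foo.txt --query 'DeleteMarkers[0].VersionId' --output text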

Run the following command:

user@australtech.net:~$ aws s3api list-object-versions --bucket australtechbucket
{
    "DeleteMarkers": [
        {
            "Owner": {
                "DisplayName": "australtech",
                "ID": "aaf0bc9a9022285428404623222e4d5d346236332337d490662f40e5dd762988"
            },
            "IsLatest": true,
            "VersionId": "xxyMS13fDhCuGVI_RgujNI14QeRiT2df",
            "Key": "foo.txt",
            "LastModified": "2019-03-18T15:12:18.000Z"
        }
    ],
    "Versions": [
        {
            "LastModified": "2019-03-18T15:08:14.000Z",
            "VersionId": "JclCNa2PWH_fqNp3Rz7Grnc3icPZI_xA",
            "ETag": "\"1098e2cb1442f45f8ca2e74e1cd24bd0\"",
            "StorageClass": "STANDARD",
            "Key": "foo.txt",
            "Owner": {
                "DisplayName": "australtech",
                "ID": "aaf0bc9a9022285428404623222e4d5d346236332337d490662f40e5dd762988"
            },
            "IsLatest": false,
            "Size": 4
        },
        {
            "LastModified": "2019-03-18T15:07:26.000Z",
            "VersionId": "T5GAlgeGisskB0VpjxVVXnf0qVubw97w",
            "ETag": "\"daa5960a123ff55e594be19f9ddc940d\"",
            "StorageClass": "STANDARD",
            "Key": "foo.txt",
            "Owner": {
                "DisplayName": "australtech",
                "ID": "aaf0bc9a9022285428404623222e4d5d346236332337d490662f40e5dd762988"
            },
            "IsLatest": false,
            "Size": 5
        }
    ]
}

Note the VersionId listed under “DeleteMarkers”.

Run the following command to remove the delete marker of the object. Be sure that you enter the version ID of the delete marker as the value for --version-id.

aws s3api delete-object --bucket australtechbucket --version-id 'xxyMS13fDhCuGVI_RgujNI14QeRiT2df' --key foo.txt

In the transcript below, note how the bucket listing comes back empty before the delete marker is removed, and foo.txt reappears afterwards:

user@australtech.net:~$ aws s3 ls s3://australtechbucket
user@australtech.net:~$ aws s3api delete-object --bucket australtechbucket --version-id 'xxyMS13fDhCuGVI_RgujNI14QeRiT2df' --key foo.txt
{
    "VersionId": "xxyMS13fDhCuGVI_RgujNI14QeRiT2df",
    "DeleteMarker": true
}
user@australtech.net:~$ aws s3 ls s3://australtechbucket
2019-03-18 11:08:14 4 foo.txt
user@australtech.net:~$
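
One more versioning gotcha: delete-object with the --version-id of an actual object version (rather than of a delete marker) permanently removes that version. For example, this would get rid of the old 5-byte revision of foo.txt for good:

aws s3api delete-object --bucket australtechbucket --key foo.txt --version-id 'T5GAlgeGisskB0VpjxVVXnf0qVubw97w'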

Exercise 2.6
Enable Static Hosting on Your Bucket
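
One step does not appear in the transcript below: website hosting has to be enabled on the bucket first. With the CLI, that can be done with the aws s3 website command (and remember the objects must be publicly readable, as in Exercise 2.2, for the website endpoint to serve them):

aws s3 website s3://australtechbucket/ --index-document index.html --error-document error.html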


user@australtech.net:~$ echo "Hello World" > index.html
user@australtech.net:~$ echo "Error Page" > error.html
user@australtech.net:~$ aws s3 cp index.html s3://australtechbucket/
upload: ./index.html to s3://australtechbucket/index.html
user@australtech.net:~$ aws s3 cp error.html s3://australtechbucket/
upload: ./error.html to s3://australtechbucket/error.html
user@australtech.net:~$ curl -s http://australtechbucket.s3-website-us-east-1.amazonaws.com/
Hello World
user@australtech.net:~$ curl -s http://australtechbucket.s3-website-us-east-1.amazonaws.com/randomfile
Error Page

You may have noticed that in the example above the URL is slightly different from the previous examples. This is because when S3 is used for website hosting, we need to use the s3-website endpoint in the URL.
If we just use the usual .s3.amazonaws.com FQDN, the website hosting features (index and error documents) won’t work.

The website is available at the AWS Region-specific website endpoint of the bucket, which is in one of the following formats:

<bucket-name>.s3-website-<Region>.amazonaws.com
<bucket-name>.s3-website.<Region>.amazonaws.com

That is all for Amazon S3! In the next post we will get into Amazon EC2 (Elastic Compute Cloud).