
Amazon AWS SysOps – S3 Storage and Data Management – For SysOps (incl Glacier, Athena & Snowball) Part 2

  1. S3 Replication (Cross Region and Same Region)

Okay, now let's talk about Amazon S3 replication, that is, CRR and SRR, which stand for Cross-Region Replication and Same-Region Replication. The core idea: we have an S3 bucket, for example, in one region, and we want to replicate it asynchronously into another bucket. To do so, we first must enable versioning in the source and destination buckets, and then we can set up one of two things: Cross-Region Replication if the two buckets are in different regions, or Same-Region Replication (SRR) if the two buckets are in the same region. Note that the buckets can be in different accounts, so it is very possible for you to save a copy of your data into another account using S3 replication.

The copying happens asynchronously, but it's very, very quick. And for the copying to happen, you need to create an IAM role. We'll see this in the hands-on, and that IAM role will have the permissions to read from the first S3 bucket and copy into the second S3 bucket. The use cases for Cross-Region Replication are compliance, lower-latency access to your data in other regions, or cross-account replication. For SRR, Same-Region Replication, the use cases could be log aggregation:

so you have different logging buckets and you want to centralize them into one bucket, or live replication, for example, between a production and a test account. And here is the fine print about S3 replication. After you activate S3 replication, only new objects are replicated. It's not retroactive; it will not copy the existing state of your S3 bucket. As for delete operations: if you delete without a version ID, a delete marker is added in the source bucket, and that delete marker is not replicated. And if you delete with a specific version ID, that version is deleted in the source, and the deletion is not replicated either. So, to make it short, no delete operation is replicated.

And finally, there is no chaining of replication. That means that if bucket 1 has replication into bucket 2, which has replication into bucket 3, then any object written in bucket 1 will be replicated to bucket 2, but will not be replicated onward to bucket 3. So you cannot chain replications. That's the fine print for S3 replication. Now, let's go into the hands-on to see how that works. Okay, so let me create a bucket, and I'm going to call it stephane-origin-bucket for the replication.

And I'm going to set it in EU (Ireland), eu-west-1, and create that bucket. Then I'm going to create a new bucket as well, stephane-replica-bucket, and this time I can create it in another region, for example the Stockholm region, eu-north-1, and click on Create bucket. So I've created two different buckets, and they're in two different regions. I'm going to open the configuration of each bucket, but first, let me upload one file, my coffee.jpg for example, before we activate versioning.
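If you prefer scripting, creating these two buckets from the CLI would look roughly like the sketch below (bucket names must be globally unique, so treat mine as placeholders):

```bash
# Sketch: create the source and destination buckets in two different regions.
# Bucket names are placeholders and must be globally unique.
aws s3api create-bucket --bucket stephane-origin-bucket --region eu-west-1 \
    --create-bucket-configuration LocationConstraint=eu-west-1
aws s3api create-bucket --bucket stephane-replica-bucket --region eu-north-1 \
    --create-bucket-configuration LocationConstraint=eu-north-1
```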

So here my coffee.jpg file is uploaded. Now I'm going to go to Properties and activate versioning; we need versioning enabled to be able to set up replication. Then let's go into the replica bucket, and under Properties I am also going to enable versioning. Okay, that's great. Next, what we have to do is enable replication. For this, under Management, I have the Replication tab, where I can configure cross-region or same-region replication. I will add a rule and decide what I want to replicate: is it my entire bucket, or only a certain prefix or tags? I'll say it's my entire bucket. And do I want to replicate objects encrypted with KMS? No, for now that's fine, so I click Next.
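By the way, the versioning prerequisite we just enabled on both buckets can also be done from the CLI; a quick sketch:

```bash
# Versioning must be Enabled on BOTH the source and the destination bucket
# before S3 lets you configure a replication rule.
aws s3api put-bucket-versioning --bucket stephane-origin-bucket \
    --versioning-configuration Status=Enabled
aws s3api put-bucket-versioning --bucket stephane-replica-bucket \
    --versioning-configuration Status=Enabled
```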

Now we need to choose a destination bucket, so I'll choose my stephane-replica-bucket here, and as you can see, the bucket can be in this account or in another account. In this case, my bucket is in this account, and based on whether the destination is in a different region or the same region, it will automatically choose between Cross-Region Replication and Same-Region Replication. Then, do we want to change the destination storage class, object ownership, or set up Replication Time Control? This is not relevant for us, so we'll just click Next.
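Behind the scenes, the console wizard produces a replication configuration document; roughly, the equivalent CLI call would look like this sketch (the role ARN is a placeholder for the role we're about to create):

```bash
# Sketch: the replication rule as one CLI call instead of the console wizard.
# The IAM role ARN below is a placeholder; substitute the role the console
# creates for you, and use your own bucket names.
cat > replication.json <<'EOF'
{
  "Role": "arn:aws:iam::123456789012:role/s3crr-demo-role",
  "Rules": [
    {
      "ID": "ReplicationDemo",
      "Status": "Enabled",
      "Prefix": "",
      "Destination": { "Bucket": "arn:aws:s3:::stephane-replica-bucket" }
    }
  ]
}
EOF
aws s3api put-bucket-replication --bucket stephane-origin-bucket \
    --replication-configuration file://replication.json
```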

Finally, we need to create a new IAM role to perform this replication, so we'll create a new role, and the rule name is going to be ReplicationDemo, with the status Enabled. I click Next and then Save. Okay, so my replication configuration has been successfully created, and as we can see, an IAM role has been created right here for this replication. What I'd like to do is show you what this IAM role is made of. So let's go to IAM, go to Roles, and find the S3 CRR role that was created for me. In it, I can look at the policy and see what it does. In the JSON view, this policy allows Get and List operations on the origin bucket, so it can read all the files, and then it allows replication actions onto the target bucket. So this is perfect; this is exactly the IAM role we need.
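For reference, the permissions policy on that auto-created role looks roughly like the sketch below (reconstructed from memory of what the console generates; the exact action list may vary, and the bucket names are this demo's placeholders):

```bash
# Sketch of the replication role's permissions policy: read from the source
# bucket, replicate into the destination bucket.
cat > replication-role-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetReplicationConfiguration", "s3:ListBucket"],
      "Resource": "arn:aws:s3:::stephane-origin-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObjectVersionForReplication", "s3:GetObjectVersionAcl"],
      "Resource": "arn:aws:s3:::stephane-origin-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ReplicateObject", "s3:ReplicateDelete", "s3:ReplicateTags"],
      "Resource": "arn:aws:s3:::stephane-replica-bucket/*"
    }
  ]
}
EOF
```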

And this IAM role is going to be used by the S3 service. Okay, excellent. So replication is now enabled. If I go back to my origin bucket and refresh, I have my coffee.jpg file, and if I go to my replica bucket, in the Overview, and refresh, I get nothing. As I said, activating replication is not retroactive: the coffee file I had uploaded before is not going to be replicated into the new bucket, because it was created before the replication.

But if I upload a new file, say I re-upload my coffee.jpg, this will upload a new version of my file. We can see this because if we click Show versions, we see we have a new version. And if we go to our other bucket and now refresh, we should see, yes, the coffee.jpg file that has been replicated, and we can show its versions as well. This is great. We can also try it with another file: if we upload, for example, beach.jpg and click Upload, this file is also going to be replicated. The replication happens very, very quickly; as soon as I refresh, it's already there.
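The same check can be scripted; here's a sketch of verifying replication from the CLI (bucket names are this demo's placeholders):

```bash
# Upload a new object version to the source bucket...
aws s3 cp coffee.jpg s3://stephane-origin-bucket/coffee.jpg

# ...then compare object versions on both sides. After a short delay, the new
# version ID should show up in the replica bucket too.
aws s3api list-object-versions --bucket stephane-origin-bucket --prefix coffee.jpg
aws s3api list-object-versions --bucket stephane-replica-bucket --prefix coffee.jpg
```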

Finally, let's look at deletes and delete markers. First, I'm going to delete a specific version of beach.jpg and see if that gets replicated. It has been deleted from my source bucket, and what about here? If I refresh, the file is still there, so this deletion did not get replicated. Now, to just add a delete marker, I select coffee.jpg, then Actions and Delete; this is just going to add a delete marker. If we click Show versions, we see the delete marker here. Going back to my other bucket and refreshing, as we can see, this delete marker was not replicated either. So no delete action happening here is replicated onto the other S3 bucket, and that is something you need to know about S3 replication. Okay, well, that's it for this lecture. I hope you liked it, and I will see you in the next lecture.

  1. S3 Policies Hands On

Okay, so just a quick lecture in this one. We're going to analyze the bucket policy examples at this URL, and I recommend you go to this URL right now and try to analyze the policies on your own, but we'll do it together as well, just to make sure we have the same understanding. The reason I'm doing this is that in the exam they may show you a bucket policy, and you have to understand what it does. So let's have a look right now and scroll down on this page. The first one grants permission to multiple accounts with added conditions. Here we allow some principals, and you can see that the principals we allow are from other accounts: the root user of account 111122223333 and of account 444455556666.

And we allow them to do PutObject and even PutObjectAcl, so they're allowed to edit the access control list of objects in our bucket called examplebucket, but only if the ACL string equals public-read. So, on top of it, they have to specify the s3:x-amz-acl header (the Amazon access control list header) set to public-read. In other words, they can only upload objects that are made publicly readable. It's really, really important to understand that.
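From memory, that first policy looks roughly like this sketch (the account IDs are the documentation's placeholders):

```bash
# Sketch of the multi-account PutObject policy described above.
# Apply it to the bucket with put-bucket-policy.
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": ["arn:aws:iam::111122223333:root", "arn:aws:iam::444455556666:root"]
      },
      "Action": ["s3:PutObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": { "StringEquals": { "s3:x-amz-acl": "public-read" } }
    }
  ]
}
EOF
aws s3api put-bucket-policy --bucket examplebucket --policy file://policy.json
```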

Now we have the next one, which grants read-only permission to an anonymous user. Here we allow Principal "*" to do GetObject on our bucket, and there's a slash star at the end, which means anything in our bucket. This is something we've seen before: it grants anonymous people access to our bucket and effectively makes it public. If we wanted to restrict access to a specific IP address range, as we can see here, we can add conditions, and conditions are really helpful. We can say: we're going to authorize anything within this IP address range. There's a /24 here, which covers 256 addresses, but there's also a NotIpAddress condition with a /32. So we're saying: all the IP addresses in that range, except the single one ending in .188, will be allowed. This is the kind of thing you can see at the exam as well, so make sure you understand what /24 and /32 mean; we will see this later in the course anyway. And you need to understand that these conditions allow you to restrict who this bucket policy applies to.
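A sketch of that IP-restriction policy, using the addresses from the documentation example:

```bash
# Sketch: allow anonymous reads from one /24 range, minus a single /32 address.
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "IpAddress":    { "aws:SourceIp": "54.240.143.0/24" },
        "NotIpAddress": { "aws:SourceIp": "54.240.143.188/32" }
      }
    }
  ]
}
EOF
```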

You can also mix IPv4 and IPv6 conditions; I'll let you read through that. Next, you can specify a referer, that is, which website the request comes from. The aws:Referer condition says: only requests coming from www.example.com or example.com are allowed into our bucket. It's a bit like doing some kind of CORS, but definitely a way to specify that only a few websites are allowed to access our bucket, which is nice (there's a sketch of this one just below). Then, scrolling down: CloudFront origin access identity. This is something we'll see later, but it's a way for us to say that only CloudFront is allowed to access our bucket; anything else will not be able to access it. We'll specify such a policy when we get there: we basically allow a CloudFront origin access identity to do GetObject on our bucket.
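The referer example, from memory, looks roughly like this sketch:

```bash
# Sketch: only allow GetObject when the HTTP Referer header matches example.com.
# Note the Referer header is easy to spoof, so this is not strong security.
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": ["http://www.example.com/*", "http://example.com/*"]
        }
      }
    }
  ]
}
EOF
```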

And then we can require MFA. We've seen how to do MFA Delete, but you can extend MFA to a lot more operations. Here we're saying: you can't do anything under taxdocuments unless you have multi-factor authentication. That means you basically need to have authenticated with an MFA token any time you want to do any action on examplebucket/taxdocuments, which is kind of cool.
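A sketch of that MFA requirement as it appears in the documentation example:

```bash
# Sketch: deny everything under taxdocuments/ when the request carries no MFA
# information (aws:MultiFactorAuthAge is null, i.e., no MFA was used to sign in).
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/taxdocuments/*",
      "Condition": { "Null": { "aws:MultiFactorAuthAge": "true" } }
    }
  ]
}
EOF
```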

And finally, cross-account permissions to upload objects. This one shows, again, that we are allowing another account to do PutObject in our bucket, but with a condition that ensures the bucket owner keeps full control of the uploaded objects, so ownership isn't handed to the other account. It's kind of neat. Just remember that these are things you need to be able to read, so read these on your own. Honestly, for me it's very quick because I already know them, which is why I went through them rapidly. The idea is that I want you to be able to understand what a bucket policy does and how it works just by looking at it, and I think this document covers a lot of the bucket policies that can come up at the exam. So that's it for this small hands-on, and I will see you in the next lecture.

  1. S3 Pre-signed URLs

So now we're talking about S3 pre-signed URLs. We've seen them before, but now we're going to do a hands-on. You can generate a pre-signed URL using either the SDK or the CLI. The easy case is downloads: we can just use the CLI. For uploads it's a bit harder, and you must use the SDK. Nonetheless it's quite easy, and we'll do downloads in this lecture. When you generate a pre-signed URL, by default it has an expiration of 3,600 seconds, which is one hour, and you can change that timeout with the --expires-in argument, where you specify the time in seconds.

The user you give the pre-signed URL to basically inherits your permissions, that is, the permissions of the identity that generated the URL, so they can do GET or PUT accordingly. Why would you do this? Well, there are many reasons. Maybe you want to allow only logged-in users to download a premium video from your S3 bucket, so you only want to give a download link valid for, say, 15 minutes to a premium user that is logged in. Maybe you have an ever-changing list of users that need to download files, and you don't want to give them access directly to your bucket, because that could be very dangerous or unmaintainable with so many new users all the time; instead, you generate URLs dynamically and hand them out over time by pre-signing them. Or maybe you want to temporarily allow a user to upload a file to a precise location in your bucket.

For example, maybe you want to allow a user to upload a profile picture directly to your S3 bucket; for this, we would generate a pre-signed URL. So there can be a lot of use cases, but let's go ahead and see how we can generate a pre-signed URL for a download. Let's take my sample monitored bucket; we have a beach.jpg in there, and maybe I want to give someone access to that beach.jpg. If I were to use the object's link directly, I would get Access Denied, because we don't have the permissions; it's not a public file, so we can't see it.

But maybe I want to give someone access to that file using a pre-signed URL. So what I'll do is generate a pre-signed URL using the CLI. In the CLI, the idea is to run aws s3 presign, and first aws s3 presign help to get the help. This command generates a pre-signed URL for an Amazon S3 object, which allows anyone to perform an HTTP GET request on it. So great: we're just going to pre-sign an S3 URI, and we can specify an --expires-in value.

By default, we can see it's 3,600 seconds, so one hour, but we can set it to whatever we want. Before we do so, we need to configure the AWS CLI to generate signatures with Signature Version 4 (SigV4). For this, we run aws configure set default.s3.signature_version s3v4, which makes the generated URL compatible with KMS-encrypted objects; just something I discovered along the way, so make sure you enter this. When that's done, we can generate the pre-signed URL. I'm going to get the path to the file: it's in my sample monitored bucket, so I'll do aws s3 presign followed by the S3 URI of the bucket and the object name, beach.jpg.

Then I need to add --expires-in to set how long I want the URL to be valid: --expires-in 300, so five minutes. The other thing I have to do is set the region. This is something you may forget, because S3 looks like a global service, but S3 buckets live in a specific region, so we need to set it. My bucket right now is in eu-west-1, so I will set --region eu-west-1; I found that if you don't specify the region, you may get issues. So, to recap: we looked at the presign help, then we configured the CLI, then we generated a pre-signed URL.
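Put together, the whole sequence looks like this sketch (the bucket name below is a placeholder; the region and expiry are the ones used in this demo):

```bash
# One-time CLI setting: sign URLs with Signature Version 4, so pre-signed URLs
# also work for KMS-encrypted objects.
aws configure set default.s3.signature_version s3v4

# Generate a pre-signed GET URL, valid for 300 seconds (5 minutes).
# Replace the bucket name and region with your own.
aws s3 presign s3://my-sample-monitored-bucket/beach.jpg \
    --expires-in 300 \
    --region eu-west-1
```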

And here it is: the URL has been generated for us, and now I have five minutes to use it. So let's go have a look and open it, and, hopefully, here we go: we can see our beach picture. This will work only for the next 300 seconds, and then it will just expire. So pre-signed URLs are a really great way of giving users access to files without giving them direct access to the bucket. I really like it. And remember, depending on the encryption you use, you may run into hurdles like I did, because I hadn't configured my default signature version to be V4, and I also hadn't specified the region in my presign command. So if you run into any issues, make sure you do both of these things; this is just troubleshooting I had to figure out on the Internet, really. So I hope pre-signed URLs make sense, and I will see you in the next lecture.