Amazon AWS DevOps Engineer Professional – Incident and Event Response (Domain 5) & HA, Fault T… part 7
- ASG – CodeDeploy Integration Troubleshooting
Okay? So we’ve seen something very interesting: when we have an ASG and I go to edit it and increase the desired capacity to three, for example, the new instance being created gets a deployment from CodeDeploy. So that means that whenever we create a new instance in this deployment group, it will have the latest application revision deployed to it by CodeDeploy. For our application we also saw that we can do an in-place deployment, for example, to update the application and get the new version out on our EC2 instances. So the question is: what if we do a deployment, for example an in-place deployment, on our two instances?
So right here, while this deployment is happening, we also increase the desired capacity to three. Think about it for a second and see if you get it right. What will happen is that the existing two instances right here will get the new version deployed by CodeDeploy, because that was the deployment we started. But the instance that came up during this deployment will have the old version of the code, because for CodeDeploy, as long as the deployment has not succeeded, the last revision that succeeded is what gets deployed to new instances. This is described right here on this page called Integrating CodeDeploy with Amazon EC2 Auto Scaling.
And so if you go right here, under scale-up events during a deployment, it says that if an Amazon EC2 Auto Scaling scale-up event occurs while a deployment is underway, the new instances will be updated with the application revision that was most recently deployed, not with the revision currently being deployed, so those new instances will not receive the newest version. To resolve this problem, so you don’t end up with old instances running the new application and newly scaled instances running the old application, you can either redeploy the application to update every single instance, or you can simply suspend the scaling processes. So if we go back to the details here, we choose to suspend a process and we say, okay.
You cannot launch any new EC2 instances. Then, while the CodeDeploy deployment is happening, we have the guarantee that no new instances will be launched, because the Launch process is suspended. And of course, after the deployment is done, we should resume the suspended Launch process and everything will go back to normal. So this is important to know, and I think the exam can test you on this. So definitely make sure to read through this Integrating CodeDeploy with Amazon EC2 Auto Scaling page, because it does have some really good information. And that’s it for this lecture. Just a bit of theory, not something I can demonstrate, but hopefully you get the point of what I said and I will see you in the next lecture.
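To make this concrete, here is a minimal sketch of what suspending and resuming the Launch process could look like with boto3; the ASG name is hypothetical, and in the console you would do the same thing from the suspended processes setting.

```python
import boto3

# Hypothetical ASG name, for illustration only
ASG_NAME = "demo-asg"

autoscaling = boto3.client("autoscaling")

# Suspend the Launch process so no new instances come up mid-deployment
autoscaling.suspend_processes(
    AutoScalingGroupName=ASG_NAME,
    ScalingProcesses=["Launch"],
)

# ... run the CodeDeploy deployment here ...

# Resume the Launch process once the deployment has succeeded
autoscaling.resume_processes(
    AutoScalingGroupName=ASG_NAME,
    ScalingProcesses=["Launch"],
)
```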
- ASG – Cleanup
So I just want to clean up this ASG so we don’t have anything remaining. This demo ASG launch template was created manually, so we can delete it altogether. Then if we go into CloudFormation, this demo ASG CFN stack, we can delete it right away from CloudFormation. And if we go to the demo ASG CodeDeploy 2 stack, we can delete it too. But remember, because CodeDeploy has done a blue/green deployment and has created a new Auto Scaling Group for us, if I went ahead and deleted the stack, the stack would not delete that CodeDeploy-created Auto Scaling Group right here. So what I need to do is also delete that one myself. So I’ll click right here and delete this CodeDeploy Auto Scaling Group as well. Now if I refresh this page, yes, all three ASGs are being deleted. And this completes the cleanup for this section on Auto Scaling. So I hope you liked this lecture, and I will see you in the next lecture.
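If you prefer to script that last step, here is a rough boto3 sketch; the name of the ASG that CodeDeploy created is hypothetical, so check the console for the real one.

```python
import boto3

# Hypothetical name of the ASG that CodeDeploy created during the blue/green deployment;
# it is not managed by the CloudFormation stack, so it must be deleted separately.
CODEDEPLOY_ASG_NAME = "CodeDeploy_demo-asg_deployment"

autoscaling = boto3.client("autoscaling")

# ForceDelete terminates the remaining instances instead of waiting for them to drain
autoscaling.delete_auto_scaling_group(
    AutoScalingGroupName=CODEDEPLOY_ASG_NAME,
    ForceDelete=True,
)
```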
- ASG – Deployment Strategies
So to close this up with Auto Scaling Groups, I want to talk about the different deployment strategies we’ve seen in this course, just so we can wrap it up and get a good overview of what they mean. So this one is called in-place. That means we have one load balancer that has a target group, and that target group is pointing to the EC2 instances in our ASG. And so in this case our instance is running our application v1, and by doing an in-place deployment, for example using CodeDeploy, we’ll have the same instance running version 2 of our application. Okay? So here the instance state has mutated; it is not immutable, it has changed thanks to CodeDeploy.
So the application was stopped, then the new application files for v2 were deployed onto the instance, and the application was started again. So this is in-place, and it has its own implications. Then we have rolling. Rolling means that we have one load balancer, one target group, one ASG. But instead of directly updating each instance, we’re going to create a new instance that has v2 on it, and once that instance is running and operational, the first instance running v1 is going to disappear. So this is a kind of rolling update, and we’ve seen this before. With this strategy, as we can see here, we are serving version 1 and version 2 at the same time.
But it is maybe safer as a deployment, because we don’t deploy to the old instance, we deploy to a fresh new instance, and so the deployment probably has a higher likelihood of succeeding. Okay? Then we have replace. Replace is something we’ve seen where we have one load balancer pointing to one target group, which has an Auto Scaling Group with an instance running v1. What we do is create a new Auto Scaling Group altogether, with a new launch configuration or launch template, and that will have a new instance in there running the v2 application.
So at some point the ALB would be pointing to both Auto Scaling Groups, and then when we’re done we terminate the first Auto Scaling Group and replace it altogether, hence the name replace. Okay? And finally we have blue/green. In this case we have two load balancers. We have our first application stack in here, and we create another entirely new application stack, in which we have a new Application Load Balancer and we’ll be running version 2 in our Auto Scaling Group. What will happen is that here we are freer to use something like Route 53 with a simple record or a weighted record, and maybe direct a little bit of traffic to our version 2 to see how it performs before we switch all the traffic over to the version 2 application.
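As a rough sketch of that weighted-record idea, here is what gradually shifting traffic could look like with boto3; the hosted zone ID, record name, and ALB DNS names are all hypothetical.

```python
import boto3

# Hypothetical hosted zone and ALB DNS names for the blue (v1) and green (v2) stacks
HOSTED_ZONE_ID = "Z123EXAMPLE"
BLUE_ALB_DNS = "blue-alb-123456.eu-west-1.elb.amazonaws.com"
GREEN_ALB_DNS = "green-alb-654321.eu-west-1.elb.amazonaws.com"

route53 = boto3.client("route53")

def set_weights(blue_weight: int, green_weight: int) -> None:
    """Shift traffic between the blue and green stacks using weighted CNAME records."""
    changes = [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "CNAME",
                "SetIdentifier": identifier,
                "Weight": weight,
                "TTL": 60,
                "ResourceRecords": [{"Value": dns}],
            },
        }
        for identifier, weight, dns in [
            ("blue", blue_weight, BLUE_ALB_DNS),
            ("green", green_weight, GREEN_ALB_DNS),
        ]
    ]
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={"Changes": changes},
    )

# Send about 10% of traffic to the new (green) stack first...
set_weights(blue_weight=90, green_weight=10)
# ...then cut over completely once v2 looks healthy:
# set_weights(blue_weight=0, green_weight=100)
```

Starting with a small weight on the new stack also helps with the load balancer warming issue discussed next, because the new ALB never receives all the traffic at once.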
And something to be very conscious of when you have this kind of architecture is that, because you are creating a new Application Load Balancer, it cannot receive all the load at once; you need to make sure it is pre-warmed. So when you have such architectures, make sure you gradually shift traffic to your ALB, or ask AWS to pre-warm it for you, because otherwise you’ll get scaling issues, since you have created an entirely new Application Load Balancer. So, something to be very conscious of.
So which architecture is the best for deployments? Well, it really depends on your application and on the requirements in terms of speed, reliability, cost and so on. But at least you’ve seen the different deployment strategies. What I do recommend right now is to read more about this in the Blue/Green Deployments whitepaper, which has an in-depth discussion of what I just said. But that was a high-level overview, and hopefully it makes a lot of sense right now, and you understand that there are implications and strategies to consider when you do deployments on ASGs and ALBs. Alright, that’s it, I will see you in the next lecture.
- DynamoDB – Review Part I
Okay, so DynamoDB is something that I would assume you already know, but I’m just going to go over what you need to know going into this DevOps exam. There’s nothing more than what you already know from the developer exam, but still, I’m going to go quickly over every feature just to make sure we are on the same page. If you’re looking for a deep dive on DynamoDB, then please look at the developer course, because it has all of that. Okay, so let’s go ahead and create a table in DynamoDB. We have to give a table name, so I’ll call it demo table. Then we have to choose a primary key. A primary key can be made of just a partition key, in which case the partition key has to be unique.
Or, if the partition key is not unique, then you can add a sort key so the primary key has two attributes. I do recommend this blog post called Choosing the Right DynamoDB Partition Key, which is actually amazing and explains exactly how you should choose and create a partition key and a sort key. So here it says that if we just use a partition key, then it has to be unique, or we can have a partition key and a sort key, in which case it’s called a composite primary key. That means it’s possible to have, for example, the same product ID value twice, as long as the type attribute used for the sort key has different values. All the other things in your table can be defined at runtime, and they’re just attributes.
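As a minimal sketch, creating a table with such a composite primary key looks like this in boto3; the table and attribute names (ProductId and Type, borrowed from the blog’s example) are illustrative.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Composite primary key: ProductId is the partition key (HASH),
# Type is the sort key (RANGE). ProductId alone no longer has to be unique,
# only the (ProductId, Type) combination does.
dynamodb.create_table(
    TableName="ProductCatalog",
    AttributeDefinitions=[
        {"AttributeName": "ProductId", "AttributeType": "S"},
        {"AttributeName": "Type", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "ProductId", "KeyType": "HASH"},
        {"AttributeName": "Type", "KeyType": "RANGE"},
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 1, "WriteCapacityUnits": 1},
)
```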
Okay, so you should read this blog; I do recommend it in case you’re not very familiar with how to choose a partition key for DynamoDB. In this example, let’s just go ahead with some sample data, so let’s create a Thread sample table. I’m going to have forum name and subject as my composite primary key: the first one, the partition key, is forum name, and the second one, the sort key, is subject. Here we go. So we’re going to have two different forums and different subjects within each forum. Then we need to talk about how to provision the read and write capacity. The default settings are not good here because we don’t learn anything with them, so let’s go in here and learn everything.
So the first thing is about secondary indexes. The first kind are local secondary indexes, or LSIs, and you need to define them when you create your table, at the very beginning; you cannot add them later on. You can add global secondary indexes later, but local secondary indexes cannot be created afterwards. So why do we need a secondary index? Well, for example, say we’re very happy with forum name and subject, but we want to be able to have a different sort key for whatever reason. Then you need to add a secondary index. So you say, okay, my partition key will be forum name, for example, and then you can add a sort key, which would be a different attribute, for example username.
Let’s just assume username is a different attribute. In that case, because we have used the same partition key as the one defined on the table itself, we are able to create this as a local secondary index, okay? And we can project attributes: we can project all attributes, only the keys, or include a specific list if we want; we’ll just say all for now. So local secondary indexes mean that the partition key is the same and the sort key is different, and they can only be created at table creation time. You also have global secondary indexes, in which case you can have whatever key schema you want. So you can have subject as the partition key and, for example, username as the sort key.
Why not? In this case it cannot be a local secondary index, because subject is not the same partition key as forum name, which we defined before, okay? We can project some attributes and add the index, and this one will be a GSI, a global secondary index. So the difference again: an LSI has the same partition key as the table and a different sort key, whereas global secondary indexes can be added after you create your table and can have a different partition key and a different sort key. We use indexes when we want to support different kinds of queries: when a query is not optimal or efficient on the original partition key and sort key, we create different indexes so that our queries become more efficient.
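To illustrate the point that GSIs can be added afterwards, here is a rough boto3 sketch of adding the subject/username index to an existing table; the table, attribute, and index names are illustrative.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# A GSI can be created on an existing table with update_table();
# an LSI cannot be added this way, it only exists if declared at creation time.
dynamodb.update_table(
    TableName="Thread",
    AttributeDefinitions=[
        {"AttributeName": "Subject", "AttributeType": "S"},
        {"AttributeName": "Username", "AttributeType": "S"},
    ],
    GlobalSecondaryIndexUpdates=[
        {
            "Create": {
                "IndexName": "Subject-Username-index",
                "KeySchema": [
                    {"AttributeName": "Subject", "KeyType": "HASH"},    # different partition key
                    {"AttributeName": "Username", "KeyType": "RANGE"},  # different sort key
                ],
                "Projection": {"ProjectionType": "ALL"},
                # GSIs need their own capacity; LSIs share the table's capacity
                "ProvisionedThroughput": {
                    "ReadCapacityUnits": 1,
                    "WriteCapacityUnits": 1,
                },
            }
        }
    ],
)
```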
Okay? So this is why we have secondary indexes. Then we have the capacity of the table, and here we have a few different options. We have provisioned mode, in which case we provision RCUs and WCUs, read capacity units and write capacity units. And if we choose provisioned, we can tick auto scaling for read capacity and write capacity. What does that mean? It means we let Amazon target a utilization of 70% for my RCUs and 70% for my WCUs, and scale between five units and 40,000 units for both, to handle the load on my table as it changes. So this is great for auto scaling, to be honest.
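Under the hood, that checkbox uses Application Auto Scaling. Here is a minimal sketch of the same thing for read capacity with boto3, assuming a hypothetical table named Thread; the 70% target and the 5 to 40,000 bounds mirror the console defaults just mentioned.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the table's read capacity as a scalable target with min/max bounds
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/Thread",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=40000,
)

# Attach a target-tracking policy aiming for 70% read capacity utilization
autoscaling.put_scaling_policy(
    ServiceNamespace="dynamodb",
    ResourceId="table/Thread",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyName="ThreadReadScaling",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```

The same pair of calls, with the write dimension, would cover write capacity.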
And this is a setting you have to remember, and you can apply the same settings to global secondary indexes if you want to. GSIs must have their own WCUs and RCUs, whereas LSIs inherit those of the main table. And if you don’t want auto scaling, you can untick these boxes, in which case you just define the RCUs and WCUs of your table and of your global secondary index. So you can say, okay, I want one and one, and for my subject-username index, which is my global secondary index here, I’m able to say I want one and one as well. Okay, this is just one way of doing it. So this is provisioned WCUs and RCUs, and remember, you can choose to auto scale or not.
This is a choice. Then there is on-demand mode, in which you don’t specify RCUs and WCUs, and every read and every write just works automatically, but you pay a lot more money for on-demand than for provisioned. So use on-demand if you have an unpredictable workload, where you may need a lot of capacity at some point and almost nothing the next day. Provisioned is for when you have a smoother kind of load, and usually provisioned mode goes well along with auto scaling for read and write capacity. In this case, though, I’m just going to keep it as provisioned because it’s free tier eligible, and I’m going to put one everywhere.
Okay, then we can define encryption at rest for our table, with either the default KMS key or a KMS key that we specify if we wanted to; I’ll just keep it as default for now. Okay, let’s click on create and our table is being created. So far this is everything you should know already. Now the table is created. Okay, so next let’s imagine that our demo table is being read a lot. So we have items, and maybe we’ll just create an item, we’ll just call it ABC. Here we go, we have one item. Let’s say, for example, that this item is requested a lot, and therefore we may have some throttling because this item is a hot item, and we kind of want it to be cached.
So to cache this item we can use DAX, or DynamoDB Accelerator. With DAX you create a cluster, and it’s going to be a cache right in front of your table. So I’ll call it table cache, or cache table. And you can choose a node type; there are different node types, but you can choose the smallest one, which is dax.t2.small, and I don’t think this is free tier eligible. The cluster size can be anywhere from one to ten nodes, so it can be a pretty big DAX cluster if you want; I’ll just choose one node. You can enable encryption, and you need to select an IAM role, so you create the role and say, okay, it can have read/write access and so on, on all tables.
Then you need to choose a subnet group, so you create a new subnet group, give it a description, and then we’ll just take the three subnets we have in this VPC and we are good to go. We’ll use the default parameter settings, which define the TTL and so on for your cache, and we’ll launch this cluster. And thanks to this DAX cluster, what will happen is that I can make my queries using the DynamoDB API directly against the DAX cluster, and it will automatically cache the values that are read very, very often from our table. This also ensures that we can deal with hot partitions.
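To give you an idea of what querying through DAX looks like in code, here is a minimal sketch assuming the amazon-dax-client Python package; the cluster endpoint, region, table name, and key values are all hypothetical.

```python
import botocore.session
from amazondax import AmazonDaxClient

# Hypothetical DAX cluster endpoint (8111 is the default port for unencrypted clusters)
DAX_ENDPOINT = "cache-table.xxxxxx.dax-clusters.eu-west-1.amazonaws.com:8111"

session = botocore.session.get_session()
dax = AmazonDaxClient(session, region_name="eu-west-1", endpoints=[DAX_ENDPOINT])

# Same low-level call shape as a regular DynamoDB client's get_item:
# frequently read (hot) items are served from the DAX in-memory cache
# instead of hitting the table on every read.
response = dax.get_item(
    TableName="DemoTable",
    Key={"ForumName": {"S": "ABC"}, "Subject": {"S": "ABC"}},
)
print(response.get("Item"))
```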
So DynamoDB Accelerator is something you should know as well, but what you have to remember from this little hands-on is that you need to provision the DAX cluster in advance. Right now I’m getting a permissions error when creating it, but that doesn’t matter, we weren’t going to create it anyway; you just need to know at a high level what it is, and we don’t need to deal with this error right now. Okay? So in the next lecture, I’m just going to go over some advanced features of DynamoDB, around streams and so on. So I will see you in the next lecture.