Amazon AWS SysOps – Databases for SysOps Part 4
- Aurora Hands On
So let's create an Aurora database, and we are in the new interface. There was an old interface and you can switch back to it, but I'll keep the new one so that the video stays compatible with what you see. So we're going to create an Aurora database, and we can do a standard create to configure everything, or an easy create, but obviously we want to configure everything, so we'll go with a standard create. I'll choose Aurora, and then you have to choose whether you want Aurora with MySQL compatibility or PostgreSQL compatibility. These are the only two modes you can have for Aurora. We'll choose MySQL because it has more options. But whether you choose MySQL or Postgres, you can see there's a version drop-down and you can choose the version you want.
Now, for this hands-on, I'm going to use MySQL because it has the most Aurora features to demonstrate, and for the version I'm going to use 5.6.10a. The reason is that if you look in between here, for example, we have a database location, regional or global, but if I select the next version, for example this one, I don't have that feature anymore, so I can't demonstrate as much. If you want to follow along with me, remember by the way that this is not a free hands-on, okay? This is something you'll have to pay for, because Aurora is not part of the free tier, but just to see the options you can follow along.
So choose Aurora MySQL 5.6.10a just to have the option for the database location. Regarding the database location, you can either have a regional Aurora database within a single region, or you can choose a global Aurora database across multiple AWS regions, in which case the writes are going to be replicated to your other regions within 1 second. And in case there is a regional outage, there is a way for you to fail over to a different region by promoting that region as its own cluster. We'll keep it as regional for now because it also shows us a lot of the cool features we can get out of Aurora. Next we have to choose the database features, and we can see there are four different modes we can use. Either we have one writer and multiple readers, which is the one I explained to you and the most appropriate for general-purpose workloads. We can also have one writer and multiple readers with parallel query, to improve the performance of analytics queries. There is multiple writers, where you can have several writers at the same time in Aurora, for when you have a lot of writes happening continuously. And finally there is serverless, which is for when you don't know how much Aurora you will need: you have an unpredictable workload, maybe you need a little in the morning and a lot at night, so you need to be more scalable, in which case serverless would be a great option.
Regarding the exam, the ones you should definitely know are the general one and the serverless one. Okay, so we'll go with the general configuration because there is more to configure than with serverless. For the general one, we can go into either production or dev/test; these are prefilled templates that fill in the settings below. I'll choose production, and we'll go through the options one by one. In terms of the DB identifier, you can call it whatever you want; I'll call it my-aurora-db. Then if I scroll down to the master username, I'm going to use something I know, for example Stefan. And for the password, I'll use "password" just like before, so "password" here and "password" here. Okay, great.
Next I'm going to scroll down and we have the DB instance size. This is where you choose the performance of your database. You can choose memory-optimized classes, the R and X classes, and you can see all these instances in the drop-down, or you can have burstable instances that are going to be cheaper, which includes the T classes. A T2 small is going to be my cheapest option right now for this demo, so that's what I'll be choosing. But as you can see, based on the workload you have, if you have a production-type workload, memory-optimized is definitely going to be better. If you're doing dev and test, a db.t2.small is probably the better option with the most cost savings, but it is still not free to do this one.
Okay, now let's talk about availability and durability. We can create an Aurora read replica, or reader node, in a different AZ, which is great because if an AZ is down, then we can fail over to a different Availability Zone, and that gives us high availability. This is why it says Multi-AZ deployment. We can create one or not; regardless, the storage itself is across multiple AZs, that's a feature of Aurora, but this option is about getting your Aurora instances across multiple AZs. If you want a Multi-AZ deployment, then enable this; I will keep it as is because it's a good option, but obviously a more expensive one. Then for connectivity: where do you want to deploy your Aurora cluster, in which VPC, and what do you want in terms of subnets? Do you want it to be publicly accessible, yes or no? I'll leave it as no, we won't connect to it.
And then do you want the default security group or to create a new one? It's up to you to choose whatever you want; this is not important, we won't connect to this database anyway, I just want to show you the options. Finally you have a lot of additional configuration. The DB instance identifier, you could for example leave it as this one, that's fine. The initial database name, maybe "aurora". Then you could specify parameter groups; these are not in scope for the exam. You can define a failover priority, but we won't do that. Backups are really great if you want to have snapshots of your database and be able to restore from them, so they're great for disaster recovery.
You can set the retention you want for your backups, between one day and 35 days. Then encryption: do you want your data to be secure and encrypted with KMS? This is a great option if you want to make sure your data is not accessible by anyone, even AWS, so encryption is maybe something you want to enable. Then Backtrack is a feature allowing you to actually go back in time in your database: if you did some bad commits, some bad transactions, you can rewind, which is a nice feature, but we won't enable it right now. Monitoring lets you monitor the database with enhanced monitoring at a higher granularity, and there are also the log exports and so on. As you can see, there are a lot of different options. Then maintenance, for the maintenance windows and the version upgrades, which are very similar to what we get in normal RDS. And finally, the last setting is deletion protection, to ensure that we don't delete this database by mistake just by clicking delete; now we have an extra step to make sure we don't do that.
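By the way, if you wanted to script the same thing instead of clicking through the console, here is a minimal boto3 sketch of roughly what we just configured. This is only an illustration: the identifiers, region, retention, and password are placeholders, not values from the console.

```python
# Hypothetical sketch: creating an Aurora MySQL cluster and one writer
# instance with boto3, mirroring the console options above.
import boto3

rds = boto3.client("rds", region_name="eu-west-1")  # assumed region

# The cluster holds the shared storage, credentials, backups, encryption,
# backtrack and deletion-protection settings.
rds.create_db_cluster(
    DBClusterIdentifier="my-aurora-db",
    Engine="aurora",                     # Aurora MySQL 5.6-compatible engine
    MasterUsername="stefan",
    MasterUserPassword="password",       # demo only, never use this in real life
    BackupRetentionPeriod=7,             # anywhere between 1 and 35 days
    StorageEncrypted=True,               # encryption at rest with KMS
    BacktrackWindow=0,                   # backtrack disabled in this demo
    DeletionProtection=True,
)

# Instances (the writer and any readers) are created separately and attached
# to the cluster.
rds.create_db_instance(
    DBInstanceIdentifier="my-aurora-db-instance-1",
    DBClusterIdentifier="my-aurora-db",
    DBInstanceClass="db.t2.small",       # cheapest burstable class for the demo
    Engine="aurora",
)
```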
When we're ready and we've seen all the options — from an exam perspective, again, the very important ones are going to be around Multi-AZ, and around the fact that you can have one writer and multiple readers, or serverless. These are going to be the important points about Aurora. Okay, so when we're ready, we'll just create the database, and here we go. It took a bit of time, but my Aurora cluster has now been created. As you can see, we have a regional Aurora cluster, with a writer database and a reader database. So remember, the writers and the readers are separate. I'm going to click on this Aurora database to get a bit more detail, and as we can see, we have two endpoints here: a writer endpoint and a reader endpoint, and we know it's the reader because it says "-ro" here, which means read-only.
So it is recommended to use the writer endpoint to write to Aurora and the reader endpoint to read from Aurora, regardless of how many databases you have. If you wanted to, you could click on one database instance itself and get its own endpoint to connect to it, but this is not recommended. The recommended way, and what the exam will test you on, is that you should use either the writer endpoint to write or the reader endpoint to read. Okay, you have lots of other options in here; we won't go over them, we have seen the main ones. Lastly, we can have a bit of fun: on the top right, we can either add a reader, add a cross-region read replica, create a clone, or add replica auto scaling to give us some elasticity.
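Just to make this concrete, here is a hypothetical sketch of what using the two endpoints from application code could look like, with pymysql as one possible MySQL driver. The hostnames, table, and credentials are made up for the example.

```python
# Hypothetical sketch: writes through the cluster (writer) endpoint, reads
# through the reader endpoint. Hostnames are placeholders.
import pymysql

WRITER_ENDPOINT = "my-aurora-db.cluster-abc123.eu-west-1.rds.amazonaws.com"
READER_ENDPOINT = "my-aurora-db.cluster-ro-abc123.eu-west-1.rds.amazonaws.com"  # note the -ro

# Writes go through the writer endpoint.
writer = pymysql.connect(host=WRITER_ENDPOINT, user="stefan",
                         password="password", database="aurora")
with writer.cursor() as cur:
    cur.execute("INSERT INTO users (name) VALUES (%s)", ("alice",))
writer.commit()

# Reads go through the reader endpoint, which load-balances across replicas.
reader = pymysql.connect(host=READER_ENDPOINT, user="stefan",
                         password="password", database="aurora")
with reader.cursor() as cur:
    cur.execute("SELECT name FROM users")
    print(cur.fetchall())
```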
So I'll name the policy, say, my-scaling-aurora. Then you could select, for example, a target CPU utilization of 60% for your scaling, which looks a lot like what we had for Auto Scaling Groups. We could also specify additional configuration such as the cooldown periods for scaling in and out, and finally the min and max capacity. We'll leave it as is and add the policy. And all of a sudden we have added auto scaling to our Aurora database. That was really simple.
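For reference, the same kind of replica auto scaling policy can also be expressed through the Application Auto Scaling API. This is a hedged sketch, not exactly what we did in the console; the cluster name, region, and capacity limits are placeholders.

```python
# Hypothetical sketch: Aurora replica auto scaling via Application Auto Scaling.
import boto3

aas = boto3.client("application-autoscaling", region_name="eu-west-1")

# Register the Aurora cluster's replica count as a scalable target.
aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:my-aurora-db",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=5,
)

# Target-tracking policy on average reader CPU, like the 60% in the console.
aas.put_scaling_policy(
    PolicyName="my-scaling-aurora",
    ServiceNamespace="rds",
    ResourceId="cluster:my-aurora-db",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 300,
    },
)
```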
And now we have a fully functional Aurora database. Before finishing the hands-on, if you did create a database with me, please make sure to delete it so you don't spend money. To do so, you click on this instance and you delete it; you delete this one instance, so you type "delete me". Then you have to do the same with the reader instance: Actions, Delete, and type "delete me". This can take a bit of time. Okay, so now if I refresh, I can see my database has zero instances, but to completely delete it — I cannot do that right now, because deletion protection is on.
So I click on Modify, and then at the very bottom of this page, I'm going to disable deletion protection. I click on Continue, and then I will apply this immediately to make sure deletion protection really is disabled. And now if I click on my database and go to Actions, I'm able to delete it. Do I want to take one final snapshot? No, I'm fine. And I won't need to recover my data, that's fine. I'll delete the database cluster, and I'm done. So that's it for Aurora, I hope you liked it, and I will see you in the next lecture.
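If you were cleaning up with a script rather than the console, a sketch of the same steps with boto3 could look like this; the instance identifiers are placeholders for whatever names your writer and reader actually got.

```python
# Hypothetical sketch: disable deletion protection, delete the instances,
# then delete the cluster without a final snapshot.
import boto3

rds = boto3.client("rds", region_name="eu-west-1")

rds.modify_db_cluster(
    DBClusterIdentifier="my-aurora-db",
    DeletionProtection=False,
    ApplyImmediately=True,
)

# Delete the writer and reader instances first (placeholder identifiers).
for instance_id in ["my-aurora-db-instance-1", "my-aurora-db-instance-1-reader"]:
    rds.delete_db_instance(DBInstanceIdentifier=instance_id)

# Then delete the (now empty) cluster, skipping the final snapshot.
rds.delete_db_cluster(
    DBClusterIdentifier="my-aurora-db",
    SkipFinalSnapshot=True,
)
```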
- ElastiCache
Now we're getting into an AWS ElastiCache overview. The same way RDS is used to get a managed relational database, ElastiCache is used to get a managed cache, in this case Redis or Memcached. Caches are basically in-memory databases: they run on RAM, so they have really high performance and usually really, really low latency. Their role is to help reduce the load off of databases by caching data, so that read-intensive workloads read from the cache instead of reading from the database. It also helps make applications stateless by storing state in a common cache. It has write scaling capability using sharding, read scaling capability using read replicas, and Multi-AZ capability with failover, just like RDS, and it has AWS taking care of OS maintenance, patching, optimization, setup, configuration, monitoring, failure recovery, and backups. So basically, it looks a lot like RDS, and there's a very good reason: it's pretty much the exact same thing, an RDS for caches, and it's called ElastiCache.
Okay? That's what you should remember. So there is write scaling, read scaling, and Multi-AZ. Now, you may be asking, how does it fit into my solution architecture? At first I was troubled too, and this diagram that I created really helps put things into perspective. So we have our application, and it communicates with RDS, as we've seen before, but we're also going to include ElastiCache. Our application will first query ElastiCache, and if what we query for is not available, then we'll get it from RDS and store it in ElastiCache. It's called a cache hit when the data is found in ElastiCache.
So we have an application, it has a cache hit, and we get the data straight from ElastiCache. In that case, the retrieval was super quick, super fast, and RDS did not see a thing. But sometimes our application requests data and it doesn't exist in the cache: this is a cache miss. When we get a cache miss, what needs to happen is that our application needs to go ahead and query the database directly. So we'll go ahead and query the DB, RDS will give us the answer, and our application should be programmed such that it writes the results back into ElastiCache.
The idea is that if another application, or the same application, asks for the same query, this time it will be a cache hit. And so that's what a cache does: it just caches data. The cache will help relieve the load on RDS, usually the read load. And the cache must also come with an invalidation strategy, but that's up to your application to think about, so that only the most current and most relevant data is in your cache.
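Here is a small, hypothetical sketch of that cache-aside pattern with redis-py, just to show the hit/miss logic in code. The endpoint, key naming, TTL, and the query_rds() helper are all made up for the example.

```python
# Hypothetical sketch of the cache-aside pattern against an ElastiCache
# Redis endpoint (placeholder hostname).
import json
import redis

cache = redis.Redis(host="my-redis.abc123.euw1.cache.amazonaws.com", port=6379)

def get_user(user_id):
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)            # cache hit: RDS never sees the query

    user = query_rds(user_id)                # cache miss: query the database...
    cache.setex(key, 300, json.dumps(user))  # ...and write back with a 5-minute TTL
    return user

def query_rds(user_id):
    # Placeholder for the real SQL query against RDS/Aurora.
    return {"id": user_id, "name": "alice"}
```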
Now, another solution architecture you must know is the user session store. In this case, our user logs into our application, and our application is stateless, which means there are a bunch of application instances running, maybe in an Auto Scaling group, and all of them need to know that the user is logged in. The process is: the user logs into one of the application instances, and then that application writes the session data into ElastiCache.
So this is it: the first application instance just wrote the session data into ElastiCache. Now, if the user hits another instance of our application in our Auto Scaling group, for example, then that application needs to know that our user is logged in. For this, it's going to retrieve the session from Amazon ElastiCache and say, oh yes, it exists, so the user is logged in. Basically all the instances can retrieve this data and make sure the user doesn't have to authenticate every time. That's another very common solution architecture and pattern with ElastiCache: number one, relieve the load off a database, and number two, share some state, such as the user session store, in a common place, so that all the applications can be stateless and retrieve and write these sessions in real time. Now let's talk about the differences between Redis and Memcached.
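And a minimal sketch of that session-store pattern, again assuming redis-py, with a placeholder endpoint, key names, and expiry:

```python
# Hypothetical sketch: any instance behind the load balancer can write or read
# the session, so the application itself stays stateless.
import json
import uuid
import redis

sessions = redis.Redis(host="my-redis.abc123.euw1.cache.amazonaws.com", port=6379)

def login(username):
    session_id = str(uuid.uuid4())
    # Whichever app instance handles the login writes the session with a
    # 30-minute expiry.
    sessions.setex(f"session:{session_id}", 1800, json.dumps({"user": username}))
    return session_id

def is_logged_in(session_id):
    # Any other app instance can check the same session.
    return sessions.get(f"session:{session_id}") is not None
```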
So Redis has a Multi-AZ feature: you can have it in multiple Availability Zones with auto-failover, which means that if one AZ is down, you can fail over automatically to another one. You can also enhance read scaling by creating read replicas, so you get more reads and high availability, and you can enable data durability using AOF persistence. So even if your cache is stopped and then restarted, the data that was in the cache before stopping is still available to you, and that's because of AOF persistence. It also means that you can back up and restore your Redis clusters. Okay? So when you think of Redis, think of two instances.
One being the primary, the second one being the replica. And think data persistence, think backup, think restore. Okay? Very much similar to RDS, I would say; so Redis and RDS are kind of similar, think of it as a mnemonic. But Memcached is very different. Memcached uses multiple nodes for partitioning of data, which is called sharding. It's a non-persistent cache: if your Memcached node goes down, the data is lost. There are no backup and restore features, and it has a multi-threaded architecture.
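To picture the sharding idea in code, here is a hypothetical sketch using pymemcache's HashClient, where the client hashes each key onto one of the nodes; the node hostnames are placeholders.

```python
# Hypothetical sketch: client-side sharding across Memcached nodes.
from pymemcache.client.hash import HashClient

client = HashClient([
    ("my-memcached-node-1.abc123.cache.amazonaws.com", 11211),
    ("my-memcached-node-2.abc123.cache.amazonaws.com", 11211),
])

client.set("user:1", "alice")   # stored on whichever node "user:1" hashes to
print(client.get("user:1"))
# No persistence: if that node goes down, this data is simply gone.
```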
So if you want to picture Memcached conceptually, it's all about sharding: part of the cache is going to be on the first shard, and another part of the cache is going to be on the second shard, and each shard is a Memcached node, conceptually speaking. Okay? So they're very different: Redis has more RDS-type features, while Memcached is a pure cache that lives in memory, with no backup and restore, no persistence, a multi-threaded architecture and so on. Try to remember these going into the exam so you can make the right decision based on whether you want backup and restore, whether you want Multi-AZ, and whether you want sharding or read replicas. Okay, well, that's it, I will see you in the next lecture.
- ElastiCache Hands On
So let's try creating an ElastiCache cluster. We'll go to the ElastiCache service and then click on Get Started Now. We have two options for the cluster engine: we can choose either Redis or Memcached. If you choose Redis, this is what we know: it offers Multi-AZ with auto-failover and enhanced robustness, and we can even enable cluster mode if we wanted to, to get even more robustness and scalability. Because it's Redis and it has persistence, we can use it as a database, cache, and message broker. Whereas if you choose Memcached, it's a high-performance distributed memory object caching system.
And Memcached is really intended to serve as a pure cache, while Redis can also be used as a database. For the sake of this exercise, we'll go ahead and create a Redis cluster engine, but I invite you to explore the options for Memcached. So we'll say, okay, this is my-first-redis and it's my first Redis instance. For the engine version compatibility, I'll just use the latest one. The port is the standard port for Redis, 6379. The parameter group is the one chosen by default, and for the node type, because I don't want to overpay, I'm not going to choose a cache.r4.4xlarge.
I'm going to go into T2 and choose a t2.micro, which is within the free tier. I'll click on Save, and for the number of replicas, right now I don't want any, so I'll choose zero, otherwise I'm going to pay more money. As you can see, if I had two replicas there were more options: there would be a Multi-AZ with auto-failover option. Even if I set it to one, I should still have that setting — here we go, it's still here. But as soon as I set it to zero, you can see that I lose the Multi-AZ setting. So with one I have it, and with zero I lose it. We'll keep it at zero because we want things to be free, but there you go: if there is a replica, you can have Multi-AZ.
Then you need to create a subnet group. So I'll create one and call it my-first-subnet-group, with "my first subnet group" as the description. I'll choose my VPC ID and select one of these subnets; I don't have any preferred Availability Zone. I'll scroll down to security groups, and I can keep the default one. Do we want encryption at rest using KMS? And do we want encryption in transit? If we do select encryption in transit, then we can enable Redis AUTH, and with Redis AUTH I'm able to set a token, whatever I want, and this token will be necessary for my applications to connect to and work with Redis. But if I disable encryption in transit, I have no option for Redis AUTH.
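Just to show what that token means for an application, here is a hypothetical redis-py connection sketch with the AUTH token and in-transit encryption enabled; the endpoint and token are placeholders.

```python
# Hypothetical sketch: connecting to a Redis cluster that has in-transit
# encryption and Redis AUTH enabled.
import redis

r = redis.Redis(
    host="my-first-redis.abc123.euw1.cache.amazonaws.com",
    port=6379,
    password="my-secret-auth-token",  # the Redis AUTH token set at creation
    ssl=True,                         # required when in-transit encryption is on
)
r.set("hello", "world")
print(r.get("hello"))
```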
Finally, do we want to import data into the cluster? No. Do we want backups? Absolutely, so we'll say yes, we want backups with one day of retention — and this is a Redis-only feature, we don't get backups with Memcached. And the maintenance window? No preference, we won't specify it. I click on Create, and there we go: our ElastiCache Redis cluster — well, it's one instance, so it's not really a cluster, but one instance anyway — is being created. And to use it, I'm sorry, I can't really demonstrate that to you.
This is more of an application-specific concern: you need to download a Redis driver and start interacting with your Redis cache. But as far as we're concerned, from an exam standpoint, we've seen how to create a Redis cache and we've seen all the configuration options. The cache is still creating, but I don't need it, so what I'll do, when this is done, is remember to delete it. And so now I'm able to click on Actions and delete my Redis cluster once it's been created. I can create a final backup; I'll just say no, and I am done. All right, that's it for this lecture, I will see you in the next lecture.
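If you prefer to script the deletion, a sketch with boto3 could look like this; whether your Redis ends up as a replication group or a single cache cluster depends on how it was created, and the identifiers are placeholders.

```python
# Hypothetical sketch: deleting the ElastiCache Redis resources with boto3.
import boto3

ec = boto3.client("elasticache", region_name="eu-west-1")

# If the Redis cluster was created as a replication group (with or without replicas):
ec.delete_replication_group(
    ReplicationGroupId="my-first-redis",
    RetainPrimaryCluster=False,      # also delete the primary node
)

# For a standalone node (e.g. Memcached or a single-node Redis cache cluster):
# ec.delete_cache_cluster(CacheClusterId="my-first-redis")
```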
- Section Cleanup
So for this section, you may want to clean up. Cleaning up just means deleting everything, and it's pretty easy: what we need to do is just delete the databases. For this, you can stop this one, and then you'll be able to delete your Aurora cluster, so I'll just stop it right now. And for these databases, I'll go ahead and just delete them right away. As you can see, when you delete a database, you can take a final snapshot and give it a name, and you can also choose whether or not you want to retain your automated backups.
For example, for seven days. But for now, I just won't keep any of these things, and I'll just type "delete me" to confirm the deletion. It complains, so maybe "delete me" is one word? No, I also need to check the acknowledgment box: I acknowledge the fact that upon instance deletion, I lose everything. So I'll say okay and click on Delete. For my DB replica, same action: Delete, and I'll just type "delete me" in there and click on Delete. Okay, we're done, and now for this one, we just need to wait. Now for the snapshots.
If you want to delete your snapshots, you go ahead, select them, and then you would take an action, maybe on the manual snapshots, and you can delete a manual snapshot. In terms of the other snapshots, you can't delete them right away; you have to wait a little bit, and your database should actually delete these snapshots automatically. Now we'll just have to wait a little bit more. And as for deleting the Aurora cluster, I was completely wrong.
If you try to delete the writer manually — so you click on Delete — it says you need to start the my-aurora-db cluster first. So I'll just start the cluster one more time. I have to wait until I can start it, and then I'll be able to delete my Aurora DB and my reader. Now I'm just going to restart my Aurora cluster: I'll click on it and then Actions, Start. This will start my Aurora cluster, and then I'll be able to delete it, so I'll just wait again a little bit more. And now that my Aurora cluster is started, I can click on my Aurora DB, this one, then Delete, and type "delete me". Okay, perfect. And then I can do the same for my reader.
So I'll do Actions, Delete, type in "delete me", and press Delete. And now my database, my cluster, is starting to get deleted, and as you can see, even the cluster at the top is automatically in the deleting status, so they should go away. Now for snapshots, as I said, everything is gone, and the automated backup is gone too. As for parameter groups, you could delete your parameter groups, but you actually don't get billed for those.
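If you want to double-check the snapshot cleanup from a script, here is a hypothetical boto3 sketch that lists any remaining manual cluster snapshots and deletes them; the region is a placeholder.

```python
# Hypothetical sketch: delete leftover manual Aurora cluster snapshots so
# nothing keeps billing after the cleanup.
import boto3

rds = boto3.client("rds", region_name="eu-west-1")

snapshots = rds.describe_db_cluster_snapshots(SnapshotType="manual")
for snap in snapshots["DBClusterSnapshots"]:
    rds.delete_db_cluster_snapshot(
        DBClusterSnapshotIdentifier=snap["DBClusterSnapshotIdentifier"]
    )
```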
So you might as well keep them if you want to. So that's it for databases, everything is gone, and the only thing left to clean up is ElastiCache. I go to Services, ElastiCache, and in ElastiCache I go to Redis, select my cluster, and then Delete. Do you want to create a final backup? No, I don't want to create one. Click on Delete, and you have basically deleted everything for this section. Congratulations, you are now clean and you won't pay anything more. All right, see you in the next section.