AWS Certified Solutions Architect - Professional: AWS Certified Solutions Architect - Professional (SAP-C01) Certification Video Training Course
The complete solution to prepare for your exam with the AWS Certified Solutions Architect - Professional: AWS Certified Solutions Architect - Professional (SAP-C01) certification video training course. The AWS Certified Solutions Architect - Professional: AWS Certified Solutions Architect - Professional (SAP-C01) certification video training course contains a complete set of videos that will provide you with thorough knowledge to understand the key concepts. Top-notch prep including Amazon AWS Certified Solutions Architect - Professional exam dumps, study guide & practice test questions and answers.
AWS Certified Solutions Architect - Professional: AWS Certified Solutions Architect - Professional (SAP-C01) Certification Video Training Course Exam Curriculum
Getting started with the course
- 1. New Exam Blueprint 2019 (4:00)
New Domain 1 - Design for Organizational Complexity
- 1. Multi-Account Strategy for Enterprises (05:18)
- 2. Identity Account Architecture (13:23)
- 3. Creating Cross-Account IAM Roles (06:18)
- 4. AWS Organizations (03:20)
- 5. Creating first AWS Organization & SCP (12:16)
New Domain 2 - Design for New Solutions
- 1. Understanding DoS Attacks (08:46)
- 2. Mitigating DDoS Attacks (18:41)
- 3. AWS Shield (09:27)
- 4. IDS/IPS in Cloud (05:24)
- 5. Understanding Principle of Least Privilege (11:12)
About AWS Certified Solutions Architect - Professional: AWS Certified Solutions Architect - Professional (SAP-C01) Certification Video Training Course
AWS Certified Solutions Architect - Professional: AWS Certified Solutions Architect - Professional (SAP-C01) certification video training course by prepaway along with practice test questions and answers, study guide and exam dumps provides the ultimate training package to help you pass.
New Domain 2 - Design for New Solutions
37. DynamoDB Streams
Hey, everyone, and welcome back. In today's video, we will be discussing DynamoDB streams. So, DynamoDB streams are essentially the time-ordered sequence of item-level changes made within the DynamoDB table by you or your application. Now, basically, this allows a lot of use cases to be done in a much easier manner, like continuous analytics, real-time notifications, and various others.
Now, I'm sure that from this definition alone, DynamoDB streams can be difficult to understand. So we'll jump right into the practicals and investigate what DynamoDB streams are all about. I'll go to the DynamoDB console and click on "Create table." I'll give the table name as kplabs, and for the partition key I'll use coursename.
And I'll go ahead and create the table. All right, so the table is created, and within the table, if you look at the stream, the stream is currently not enabled. So we'll click on "Manage stream." There are various stream view types over here. One is keys only. The second is the new image.
The third includes both new and old images. So I'll select new and old images. Basically, it will record any changes I make to the table or any new entries I add to the table. So, let's assume that I have an item within my DynamoDB table and I modify that item; the stream will remember the key and value associated with the older item, as well as the new, modified value. So that is what new and old images are all about.
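For reference, the same stream setting can also be applied programmatically. Here is a minimal boto3 sketch, assuming the kplabs table name from this demo:

```python
# Minimal sketch (boto3 assumed): enable a stream on the demo "kplabs" table
# that records both the old and the new image of every changed item.
import boto3

dynamodb = boto3.client("dynamodb")
dynamodb.update_table(
    TableName="kplabs",
    StreamSpecification={
        "StreamEnabled": True,
        # Other view types: KEYS_ONLY, NEW_IMAGE, OLD_IMAGE
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
)
```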
Again, you'll understand it better as we go through the practical. I'll click enable for now, and it gives us the stream ARN. Perfect. So, the next step will be to create an IAM role, because we want the modification details to be stored within CloudWatch.
So I'll click on Roles and create a role. The role type would be Lambda, and for the permission, let me quickly just give administrator access to ease things out. I'll name it "administrator-lambda" and click on "Create role." Perfect. Now that I have created the role, the third part is to go to Lambda and click on "Create a function." This time, we'll select a blueprint, and within the blueprints, I'll search for DynamoDB.
And the first one that comes up is the dynamodb-process-stream blueprint. I'll select that and name the function kplabs-3. For the role, I'll choose an existing role, and the existing role would be "administrator-lambda." For the starting position, I'll simply select "Trim horizon." I'll select "Enable trigger" and proceed to create the function. Below that, you can see the function code.
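The blueprint code shown in the console is Node.js; as a rough sketch of what it does (just logging each stream record so it ends up in CloudWatch), a Python equivalent might look like this:

```python
# Rough Python equivalent of the dynamodb-process-stream blueprint's behavior:
# print each record so it lands in the function's CloudWatch log group.
import json

def lambda_handler(event, context):
    for record in event["Records"]:
        # eventName is INSERT, MODIFY or REMOVE; record["dynamodb"] carries the
        # Keys plus NewImage/OldImage, depending on the stream view type.
        print(record["eventName"], json.dumps(record["dynamodb"]))
    return f"Successfully processed {len(event['Records'])} records."
```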
We'll ignore this part; I will just create the function. So we have a Lambda function that gets triggered whenever a certain modification is made to the DynamoDB table, and it will store the output in the CloudWatch log group. So what I'll do is, while I'm in my DynamoDB table, I'll click on "Create item." Within the coursename, I'll say "AWS Developer Associate," and I'll click on save. Perfect.
So I have already saved a new item. Let me append a few more attributes here: I'll add a launch date, set it to August, and click on save. Let me also add one more new item; I'll say "AWS Ops Administrator" and click on save. So all the changes that you make, whether you add a new item or modify an older one, will be saved within the CloudWatch log group.
So I'll quickly go to CloudWatch, and there should be a log group named after your Lambda function. If I go to Logs, you can see I have the kplabs log group over here. Do note that it takes a little time, definitely something like 10 to 15 seconds, for new things to get updated.
So let's do one thing. I'll go to the "AWS Ops Administrator" item here, append a new attribute called "release date," and set it to December. Okay, so we created this new item and changed some of its associated attributes. Now, within CloudWatch, if I click on this log stream, you can actually see there have been a lot of inserts and a lot of modifications.
So if I just click on a "MODIFY" entry, it basically gives me the timestamp, and below that, it gives me the actual thing that was modified. Within this, you have the "AWS Ops Administrator" item. You have the old image here; the old image is what was there before the modification took place. So earlier there was just the coursename, which was "AWS Ops Administrator." Then there is something called the new image, and within the new image, you have the coursename plus what was modified: the release date that was added as part of the second iteration. So this is what the old image and the new image are all about.
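To make that concrete, here is an illustrative shape of such a MODIFY record (a sketch based on the demo item, not copied from the console output):

```python
# Illustrative MODIFY record for the "new and old images" view type: the old
# image has only the coursename, the new image also carries the added release date.
modify_record = {
    "eventName": "MODIFY",
    "dynamodb": {
        "Keys": {"coursename": {"S": "AWS Ops Administrator"}},
        "OldImage": {"coursename": {"S": "AWS Ops Administrator"}},
        "NewImage": {
            "coursename": {"S": "AWS Ops Administrator"},
            "release date": {"S": "December"},
        },
    },
}
```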
So, if you recall, when we enabled the stream during the DynamoDB setup, we enabled the "new and old images" view type. So this is what it's all about. Now let's look at an example use case and understand where DynamoDB streams would really help.
So whenever a new item gets added to the DynamoDB table, we already know that it will be part of the stream, and as a result, it could also end up in CloudWatch. Now, this DynamoDB stream can trigger a Lambda function that is associated with an SNS topic, and the SNS topic would basically have the content of the stream. So here it says that this is a bark from the Woofer social network.
So now, anytime someone updates a message in the DynamoDB item, again an SNS notification would be created with the updated message, and it could be sent to email, Slack channels, or other destinations. So this is one of the use cases where DynamoDB streams can be used. Again, there can be lots, but this is a simple use case for us.
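A minimal sketch of that notification flow, assuming a hypothetical SNS topic ARN and message format (neither is from the lecture):

```python
# Sketch: forward DynamoDB stream records to an SNS topic so subscribers
# (email, Slack via a webhook subscriber, etc.) get notified of each change.
import json
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:woofer-barks"  # hypothetical

def lambda_handler(event, context):
    for record in event["Records"]:
        if record["eventName"] in ("INSERT", "MODIFY"):
            sns.publish(
                TopicArn=TOPIC_ARN,
                Subject="New bark on Woofer",
                Message=json.dumps(record["dynamodb"], indent=2),
            )
```

An email or Slack subscriber on the topic would then receive every bark as it is inserted or updated.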
38. Global Secondary Index and Local Secondary Index
Hey everyone, and welcome back. In today's video, we will be discussing the global secondary index and the local secondary index of DynamoDB. A global secondary index basically allows us to query the data based on any attribute that is part of the table. Generally, whenever we create a table, we have a partition key and a sort key, and we query the data based on those. However, with a GSI, the partition key and the sort key can be different from those of the table.
So, for example, if you look into the table, you have the partition key as the user ID and the sort key as the game title. All right? Now if you look into the global secondary index here, the partition key is the game title and the sort key is the top score. So it is completely different from that of the base table. Now, this proves to be an advantage in various situations. So, for example, let's say that you want to sort based on the top scores here.
So what you can do here is this: you don't really need all the other data, like the date, time, wins, losses, and various other attributes that might be present; you only need a certain set of data. So you can create a global secondary index based on just the game title and the top score, and that's about it. The partition key of the base table would still be part of the index, but the only two things that are primarily important here are the game title and the top score. And on top of that, whatever results you see within the top score can be ordered as well, with ScanIndexForward set to false. All right?
So basically, if you set ScanIndexForward to false, you will get the scores in descending order, so whichever score is the highest is the one you see at the top. It becomes much easier, and it also helps overall performance, because with only the base table you would have to scan the entire table. Instead of that, you can just query the global secondary index with ScanIndexForward set to false, and you will get the results immediately. So this is what the global secondary index is all about. Now, you also have a local secondary index. A local secondary index essentially keeps an alternate sort key for a given partition key value. So within the local secondary index, you cannot have a different partition key; the partition key has to be the same as that of the base table. The only thing that remains different is the sort key.
So you can have multiple sort keys based on your requirements within the local secondary index. In terms of the global secondary index, you can have a different partition key and a different sort key as well. So if you look at this diagram here, the primary partition key is the forum name. And then you have the key for sorting by subject. Now, in here, you can change it. So the primary partition key remains the same. So the forum name is something that you cannot change. However, you can change the sort key. You can change the sort key to be the last post date and time instead of the subject. So that can be done. Now, let me quickly show you a few things. So if you go to DynamoDB, let me create a DynamoDB table.
I just wanted to show you how you can create a global secondary index here. So let's call it "demo-table." For the primary key, let's use the same user ID as shown in the diagram, and let's also add a sort key; the sort key here is the game title, so let's enter "game title." Alright, let's untick the default settings. Now, within the secondary indexes, let's create a new index where you can have a different partition key; here we'll use the game title as the partition key.
And we can use a different sort key here; let's call it "top score." All right. And here, within the projected attributes, you can even choose "Keys only." This can also improve performance if needed. So you can go ahead and add the index, and now you can see that it has been added.
So within the type, it is clearly stating that this is a global secondary index. Now let's do one thing: let's also see how we can add a local secondary index. We have discussed that for local secondary indexes, the partition key has to be the same as that of the base table. Within the base table, the primary key here is the user ID, so I enter "User ID" here, and let's add a new sort key. As soon as you add a new sort key, you'll notice that you have the option of creating a local secondary index. So you can add a different sort key here; I'll call it "last post date time," and you can go ahead and add the index. So now you see that its type is local secondary index.
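For reference, the table we just built in the console could be sketched with boto3 roughly like this; the attribute and index names follow the demo, and everything else is an assumption:

```python
# Minimal sketch: a table with one GSI (different partition + sort key) and
# one LSI (same partition key as the base table, different sort key).
import boto3

boto3.client("dynamodb").create_table(
    TableName="demo-table",
    AttributeDefinitions=[
        {"AttributeName": "UserId", "AttributeType": "S"},
        {"AttributeName": "GameTitle", "AttributeType": "S"},
        {"AttributeName": "TopScore", "AttributeType": "N"},
        {"AttributeName": "LastPostDateTime", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "UserId", "KeyType": "HASH"},
        {"AttributeName": "GameTitle", "KeyType": "RANGE"},
    ],
    GlobalSecondaryIndexes=[{
        "IndexName": "GameTitle-TopScore-index",
        "KeySchema": [
            {"AttributeName": "GameTitle", "KeyType": "HASH"},
            {"AttributeName": "TopScore", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "KEYS_ONLY"},
        # The GSI has its own provisioned capacity, separate from the table.
        "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    }],
    LocalSecondaryIndexes=[{
        "IndexName": "UserId-LastPostDateTime-index",
        "KeySchema": [
            {"AttributeName": "UserId", "KeyType": "HASH"},
            {"AttributeName": "LastPostDateTime", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "ALL"},
    }],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
```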
Now, there are certain important pointers that you should remember, specifically when it comes to the global secondary index. The first is that whenever you create a global secondary index on a provisioned-mode table, you must specify read and write capacity units for the expected workload on that specific index. So the provisioned throughput settings for a global secondary index are separate from those of the base table. Basically, coming back to our console, if you go a bit down to the provisioned capacity here, you see the provisioned capacity for the table is five, and the index has its own provisioned capacity; again, you have five here, but it is separate from that of the base table.
So if you just deselect the auto-scaling part, you can set a different provisioned capacity for the table and for your global secondary index. Also, a query operation that you make on a global secondary index consumes read capacity units from the index and not from the base table. So if you're making, let's say, a read operation on a global secondary index, then the RCUs come from the GSI and not from the base table.
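As a quick illustration of that point, here is a hedged boto3 query sketch (the index name and game title are hypothetical):

```python
# Query the GSI for the top scores of one game, highest first.
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("demo-table")
response = table.query(
    IndexName="GameTitle-TopScore-index",                            # hypothetical index name
    KeyConditionExpression=Key("GameTitle").eq("Meteor Blasters"),   # hypothetical value
    ScanIndexForward=False,   # descending by the index sort key (TopScore)
    Limit=10,
)
print(response["Items"])
```

Setting ScanIndexForward to False is what returns the highest scores first, and the read capacity for this query is drawn from the index's own provisioned throughput.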
39. S3 - Cross Region Replication
Hey everyone, and welcome back to the Knowledge Portal video series. Today's topic is cross-region replication. We're slowly covering a lot of the S3 bucket properties, like versioning, lifecycle policies, et cetera, and today we will be specifically speaking about the cross-region replication feature.
Now, if you remember from the previous lecture, we were speaking about the durability concept, where if the region itself goes down, then, independent of the availability and durability that AWS offers, your object will not be accessible. So in this case, what you need is that if your objects are, let's assume, stored in the US West region, you can replicate them in one more region, like Mumbai. So in that scenario, what will happen is that even if the entire region goes down, you still have the same objects in one more region, like Mumbai, which is geographically apart.
Now, along with this, there is one important thing to remember as far as S3 is concerned: by default, the objects that you create in a bucket, which lives in a specific region, will never leave that region. So if this is a bucket in the Oregon region, any objects you create inside it will never leave that region by default. That is the default scenario. So in order to demonstrate cross-region replication, let's go ahead and create two buckets. What I'll do is name the first one "kplabs-region-01"; in our case, let's have Oregon as its region, and I'll click on Create. Okay. Along with that, I'll create one more bucket, "kplabs-region-02."
And this time, I'll create it in Mumbai and select Create. Perfect. So now we have two buckets created in two different regions. What we will be demonstrating in today's scenario is that when we upload some objects to the first bucket, the same objects will be replicated to the second bucket, which is in a different region. The first important thing to remember is that cross-region replication requires versioning to be enabled.
So the very first thing we'll do is enable versioning on both buckets, because this is one of the mandatory requirements. Perfect. Now I'll select the first bucket, which is in the Oregon region, go to Management, then Replication, and click on Add Rule. You'll notice that this is the cross-region replication setting, and I'll select Add Rule.
The source will be all the contents within this bucket. I'll select next. Now it is asking for the destination bucket. You can choose a destination bucket in your AWS account or in a different AWS account. For our case, it will be the same AWS account, and I'll select the bucket name, which is kplabs-region-02. Now you can also change the storage class for the replicated objects.
This is again a great feature. If you are storing all of the objects in the source S3 bucket in the Standard storage class, then in the destination bucket, where your objects are being replicated, you can choose Standard-IA or Reduced Redundancy to save money. So let me select Standard-IA for our demo purposes. Now you need to select the IAM role; I'll click on "Create a new role."
What this role basically does is allow the source bucket to transfer the objects to the destination S3 bucket. So I'll click on "Save." Let's wait. Perfect, our replication rule has been created. So let's try this out. Let me go here and upload a file; let's upload the finance file again, and I'll select Upload. Perfect. So this file has been uploaded to this bucket.
Now let's go to the second bucket, and if you look in the second bucket, the file is present. Let's try a few more things. Let me create a folder; I'll name it "test-folder" and click on "Save." Now if I go to the second bucket and click on refresh, you will see the contents are getting replicated.
Now, one more thing you will see over here: for the objects that we upload, the storage class is Standard. However, for the replicated objects, the storage class is automatically changed to Standard-IA.
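For reference, here is a minimal boto3 sketch of what this console wizard sets up: versioning on both buckets, then a replication rule whose destination rewrites objects to Standard-IA. The bucket names follow the demo; the IAM role ARN is a placeholder:

```python
# Minimal sketch of cross-region replication setup: versioning plus a rule
# that replicates everything to the second bucket as Standard-IA.
import boto3

s3 = boto3.client("s3")
for bucket in ("kplabs-region-01", "kplabs-region-02"):
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

s3.put_bucket_replication(
    Bucket="kplabs-region-01",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-crr-role",  # hypothetical role ARN
        "Rules": [{
            "Status": "Enabled",
            "Prefix": "",  # replicate all objects in the source bucket
            "Destination": {
                "Bucket": "arn:aws:s3:::kplabs-region-02",
                "StorageClass": "STANDARD_IA",
            },
        }],
    },
)
```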
So this is the basic information about cross-region replication. One important thing to remember is that if you choose an existing bucket whose contents are already present, those older contents will not be replicated; only the new contents that you upload will be replicated. So this is yet another important thing to remember. That's it for this lecture. I hope this has been informative for you, and I look forward to seeing you in the next lecture.
40. Disaster Recovery Models
Hey everyone, and welcome back to the Knowledge Portal video series. In today's lecture, we will be primarily speaking about disaster recovery techniques. What this basically signifies is: if a disaster occurs, what are the ways in which we can recover our entire infrastructure within a specific amount of time? When it comes to disaster recovery techniques, two of the most important things are the RTO (Recovery Time Objective) and the RPO (Recovery Point Objective). There can be various disaster recovery designs that a solutions architect can implement.
Now, the design that can be implemented for disaster recovery directly depends on how quickly we want to recover from a disaster. So let's assume we have a website in a single availability zone. If that availability zone goes down and the website is a part-time website that isn't that important, then we don't really have to worry about designing a multi-AZ-based architecture; that would just lead to more costs. However, if we want even one availability zone failure to have no effect on the performance of our website, the disaster recovery design must be very different. When you talk about design, there are four broad approaches for designing our architecture for disaster recovery.
One is the simple backup-and-restore strategy. The second is pilot light. The third is warm standby. The fourth is multi-site. Again, one important thing to remember is that whichever technique we choose comes with its own implications for how fast we can recover, for performance, and for cost as well as complexity. So let's go ahead and understand each one of them. The first is backup and restore. Backup and restore is a very simple and cost-effective method that requires us to constantly take backups of our data and store them in services like S3, so we can restore them when disaster strikes. Now, this is a very simple technique, and I still remember a lot of my friends who have their own blogs.
These are personal blogs, and they cannot really afford a multi-AZ-based architecture because that would lead to more complexity and higher costs. So what they do is go with simple backup and restore: they take a database dump every day and store it in S3, and if the database gets corrupted or something goes down, they pull the dump from S3 and recover the blog. So this is a very simple backup-and-restore approach.
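As a minimal sketch of that daily dump-to-S3 routine (the database name, bucket, and paths below are placeholders, not from the lecture):

```python
# Sketch: take a daily database dump and push it to S3 for later restore.
import subprocess
from datetime import date

import boto3

dump_file = f"/tmp/blog-{date.today()}.sql"

# mysqldump is just an example dump tool; host and database are placeholders.
with open(dump_file, "w") as out:
    subprocess.run(["mysqldump", "--host", "localhost", "blogdb"], stdout=out, check=True)

# Store the dump in S3 so it can be pulled back after a disaster.
boto3.client("s3").upload_file(dump_file, "kplabs-backups", f"daily/blog-{date.today()}.sql")
```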
For on-premises servers that hold a huge amount of data, typically tens of terabytes, you can make use of technologies like Direct Connect or Import/Export to back up the data to AWS. This is important to remember because many organizations have a huge amount of data on premises and cannot really use their internet connection for backups; if you don't have a very good internet connection, backing up terabytes of data will be a huge pain. In order to back up such huge amounts of data, there are various options. One is Direct Connect, which is like a direct leased connection to AWS, and the second is Import/Export, which you can use to ship the data directly. Don't worry; we'll be speaking about each of them in great detail in the relevant upcoming sections. So, this is the first approach. The second approach is the pilot light. With pilot light, essentially we have a minimal version of the servers in the backup region, either in a stopped state or in the form of an AMI.
So let's assume that this is the primary region where your web server, your app server, and the DB server are running. As part of the pilot light, you have the same setup, but the servers are in a stopped state. As you can see, the web server is in a stopped state, as is the app server. The database, however, is being continuously mirrored. So this is one important thing to remember. Whenever a disaster strikes, you can start these servers, and your website will be up and running. That is one approach. The other approach is to have all of these as AMIs in the backup region, so whenever the primary region goes down, since the AMIs are present in the second region, you can launch instances from the AMIs, and the website will be up and running.
So this is pilot light. As you can see, pilot light is not a very fast solution for getting the website back up and running, but it does provide good disaster recovery because all the servers exist in a different region. The third approach is warm standby, where the servers are actually running. The difference between pilot light and warm standby is that in warm standby the servers are constantly running, but as a minimal version. When a disaster happens, the servers are scaled up for production. So let's assume the production server has 4 GB of RAM; in warm standby, this might be a 1 GB RAM server behind an elastic load balancer. If disaster strikes, we can quickly increase the size of our servers, and our application will be up and running. One important distinction between warm standby and pilot light is that in pilot light, the servers do not even have to exist in a stopped state.
It might be that you just have the AMI of the web server and the AMI of the app server, and whenever a disaster strikes, you launch the servers from those AMIs. However, in warm standby, you must have the servers in a running condition. So this is the difference: you cannot rely on an AMI alone with nothing running; you should have the servers running, but at their minimum size. So this is warm standby. The last approach is multi-site, where you have a complete one-to-one mirror of your production environment.
So if production is a 4 GB RAM server, the backup server should also be 4 GB RAM; it is an exact replica of the production environment. As far as cost is concerned, multi-site will cost you the most, but it will also allow you to recover from a disaster in the least amount of time. So these are some of the ways in which you can design a disaster recovery solution. Remember that each technique comes with its own cost and its own level of complexity. Whichever technique you choose, make sure that you also test things out. It should not happen that you have a multi-site setup, but when you switch to the backup servers in the event of a disaster, those servers are either not running or are experiencing problems. So you need to do a lot of testing. I still remember that in one of the organisations I worked with, we had a DR test every two weeks: we would switch from one region to another and see whether everything was working perfectly or not.
So the entire production traffic is migrated from the primary region to the disaster recovery region, and we actually see whether everything is working perfectly or not. This is a nice way to make sure that when an actual disaster happens, we have a perfectly working production environment. Again, there are various AWS services that we can use for disaster recovery, like S3, Glacier, and Import/Export. You have Storage Gateway, Direct Connect, VM Import/Export, Route 53, and many other services available. Throughout this course, we will be looking into all of these services in the context of disaster recovery and how exactly we can use them for our production environment.
Prepaway's AWS Certified Solutions Architect - Professional: AWS Certified Solutions Architect - Professional (SAP-C01) video training course for passing certification exams is the only solution you need.