AWS Certified Security - Specialty (SCS-C01)
Domain 2 - Logging & Monitoring
18. CloudWatch Events
Hey everyone, and welcome back. In today's video, we will be discussing CloudWatch Events. Now, CloudWatch Events is a great feature that AWS has released, and basically, it allows us to respond in real time to the changes that happen within your AWS environment. Now, one example use case that I can share is something that we used to implement in one of the organisations that I've been working with. Let's say that an EC2 instance gets terminated. Now, if that EC2 instance is registered with a central server, it can be a Spacewalk satellite or any other central server, we want to deregister that instance from those central servers so that they will not keep trying to monitor it and alert unnecessarily. So whenever an EC2 instance used to get terminated, we used to receive a CloudWatch event, which was associated with a Lambda function. The Lambda function would then deregister that instance from all the central servers. Now, another example use case that I can share, again something we used to implement in a production environment: we did not really have any auto-scaling groups, so we had an alarm where CloudWatch Events were used.
So anytime an instance used to get stopped or terminated, we used to receive an email as well as a Slack notification saying that the specific instance ID has been stopped or terminated. Now again, there is a huge number of use cases that you can achieve with the help of CloudWatch Events. Let's jump into the practical and see what exactly it looks like. So currently, I'm in my CloudWatch console. So within CloudWatch, you have the tab of Events, and within this, you have a subtab called Rules. So this is where you can create the rule. Let's go to Events, and let's click on "Get Started." Now, if you click on Get Started, it takes you to step one of creating a rule. Now, while creating a rule, you have the option of specifying the service name and event type. So, depending on the service name and the use cases that you have, you can select the service name accordingly. Again, I'll tell you one of the very common use cases that organisations have. So typically, in a development environment for organisations that have hundreds of EC2 instances, during the night the EC2 instances keep running and it just increases the cost.
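Both use cases above follow the same shape: a rule matches the EC2 state-change event and invokes a Lambda function that reads the instance ID from the event payload. Here is a minimal sketch of such a handler; the `deregister_from_monitoring` helper is hypothetical and stands in for whatever central-server API you would actually call.

```python
def deregister_from_monitoring(instance_id):
    """Hypothetical helper: call your central server (Spacewalk, a
    monitoring tool, etc.) to remove the terminated instance. Stubbed."""
    print(f"deregistering {instance_id}")


def lambda_handler(event, context):
    # The EC2 state-change event carries the instance ID and the new
    # state under the "detail" key of the JSON payload.
    detail = event["detail"]
    instance_id = detail["instance-id"]
    state = detail["state"]
    if state == "terminated":
        deregister_from_monitoring(instance_id)
    return {"instance": instance_id, "state": state}
```

The same handler shape covers the notification use case as well; you would simply publish to SNS or call a Slack webhook instead of the deregistration helper.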
So what you can do is shut down all the EC2 instances in a specific environment during the night, let's say from 9:30 p.m. to 9:30 a.m., and CloudWatch Events is the easiest way in which you can implement that. So if you click here, there are two options. One is the event pattern, and the other is the schedule. So a schedule is like a cron. Now, typically, this used to be done through cron or something similar, where a dedicated instance was used. So you can specify a cron expression over here. Let's say that the expression would be invoked at 9:00 p.m. at night, and it would call a target. So the target would be a Lambda function, and that Lambda function would stop all the EC2 instances of a specific region. Let's assume that the region is only for development environments. So it will stop all the EC2 instances in the development environment. Now again, there would be one more schedule in the morning, at 9:00 a.m., with a second Lambda function, which would start the instances back up. So this is one of the use cases that a lot of organisations use.
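As a sketch, the night-time shutdown can be a scheduled rule with a cron expression such as `cron(0 21 * * ? *)` (9:00 p.m. UTC) targeting a Lambda like the one below. The `Environment=dev` tag filter is an assumption about how the development instances are labelled; adjust it to your own tagging scheme.

```python
def running_instance_ids(reservations):
    """Collect the IDs of running instances from a describe_instances
    response (the "Reservations" list)."""
    ids = []
    for reservation in reservations:
        for instance in reservation["Instances"]:
            if instance["State"]["Name"] == "running":
                ids.append(instance["InstanceId"])
    return ids


def lambda_handler(event, context):
    # boto3 is imported here so the pure helper above can be used
    # and tested without an AWS environment.
    import boto3
    ec2 = boto3.client("ec2")
    # Scope the query to the dev environment via a tag filter
    # (assumed tagging convention).
    resp = ec2.describe_instances(
        Filters=[{"Name": "tag:Environment", "Values": ["dev"]}]
    )
    ids = running_instance_ids(resp["Reservations"])
    if ids:
        ec2.stop_instances(InstanceIds=ids)
    return ids
```

A second scheduled rule at `cron(0 9 * * ? *)` would target a mirror-image function that calls `start_instances` instead.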
Anyways, for our demo use case, what you can do is go and select EC2 over here, and there are a lot of event types. You have the EC2 instance state-change notification, the EBS volume notification, and various others. So currently, if you look at the rules, I have already created one rule. So if you click on this rule, you can see this is the event pattern. The source is EC2, the state is running or stopped, and the target here is the SNS topic. So what happens is that any time an instance starts or stops, I receive a notification in my email. So let's look into how exactly that would work. I have one running instance over here. So let me go ahead and stop this instance. Great. So now the instance state is stopped, and you will see the mail count changed from 119 to 120. Now, within my mailbox, this is the JSON event that appeared. Let's do one thing. I'll just copy this JSON event and paste it into a website that does JSON formatting so it is easier for us to read. Now, here is the detail-type. It says "EC2 Instance State-change Notification."
It gives you the account ID, which is quite useful if you have multiple accounts. It gives you the ARN, and it gives you the precise instance ID that was changed, and the status here is stopped. So this is quite useful. Now again, this is one simple example. There are a number of possibilities that you can achieve with the help of CloudWatch Events. One thing that I already shared is that you can stop all the EC2 instances at night in the development environment and start them back in the morning. That helps save a huge amount of money. Anyway, let's do one thing. Now that we have seen the demo, let's do it practically so that we are aware of how to do that. So I'll go ahead and create a rule. Now, for the rule, I'll select the service name EC2. Now, we don't want all the events; we just want the EC2 instance state-change notification. And here you can specify any state, or you can specify a custom state over here. Now, within the custom state, you can specify when you want to receive a notification.
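The rule we are building boils down to an event pattern. Below is that pattern expressed as a Python dict, plus a deliberately simplified matcher to illustrate how only the listed states trigger the rule; the real service supports far richer matching than this sketch covers.

```python
# The pattern our rule will use: only "running" and "stopped"
# states of EC2 instances should match.
EVENT_PATTERN = {
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Instance State-change Notification"],
    "detail": {"state": ["running", "stopped"]},
}


def matches(pattern, event):
    """Simplified matcher: every pattern field must be a list that
    contains the event's value (or a nested dict handled recursively)."""
    for key, allowed in pattern.items():
        if isinstance(allowed, dict):
            if not matches(allowed, event.get(key, {})):
                return False
        elif event.get(key) not in allowed:
            return False
    return True
```

So a "terminated" event would be dropped by this rule, while "running" and "stopped" events are delivered to the target.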
So it might happen that it's better to receive a notification when an instance gets terminated in a production environment, specifically if you're not running auto-scaling groups; otherwise, you'll get a lot of spam there. So let's do one thing. I'll say running, and we'll have one more state, stopped. So these are the two states, and for the target, there are a lot of targets that you can associate with the rule. A Lambda function is again a great target, and it allows you a number of possibilities here. Now, one of the important targets, specifically if you want to receive an email or an SMS, is the SNS topic. So before we do that, we need to create an SNS topic here. Let's go to Topics. And here I already have one topic created, but for the sake of demonstration, I'll create one more topic. I'll name it kplabs-events-demo. Let's go ahead and create the topic. Let's go inside, and we'll create a new subscription. So the subscription protocol would be email, and for the endpoint I'll give my email address. Once you are done, you can go ahead and create the subscription. Great. So once you have done that, you will typically receive an email asking for confirmation. And if you look over here, this is the email. Now, if everything seems perfect, you can go ahead and click on Confirm subscription.
Great. So now the subscription has been confirmed. So from the targets this time, we'll just select SNS topic over here, and we'll select our kplabs-events-demo topic. In case this does not appear, you might have to refresh your page. Then we can go ahead and click on "Configure details." So just give it a name. I'll say kplabs, and the state needs to be enabled. We can go ahead and create the rule. Great. So this rule is now enabled. So what I'll be doing is take the first rule that we had used for the demo and delete it, or let me just disable it so we don't really get confused on this part. So now, once you have done that, let's go to the EC2 console, and I'll start our EC2 instance here. So once the instance state changes from pending to running, you should typically receive a new email. And you have this new email over here. Now let's do one thing. Let me just paste this into the JSON formatter so it becomes easier for us to read. And again, it gives the account ID, it tells you the source, it tells you the region, it gives you the ARN, and here it gives you the exact instance ID. And what is the current state? The current state is running. So this is the high-level overview of CloudWatch Events.
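The JSON we pasted into the formatter can of course also be consumed programmatically. A small sketch that pulls out the same fields we just read manually:

```python
import json


def summarise_state_change(raw_message):
    """Extract account, region, instance ID, and state from the
    EC2 state-change notification JSON delivered via SNS."""
    event = json.loads(raw_message)
    return {
        "account": event["account"],
        "region": event["region"],
        "instance_id": event["detail"]["instance-id"],
        "state": event["detail"]["state"],
    }
```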
19. AWS Athena
Hey everyone, and welcome back. In today's video, we will be discussing the AWS Athena service. So Athena is a pretty new service, and it's a very interesting service not only for developers but also for security engineers. So let's go ahead and understand what the Athena service is all about. Now, at a very high level, AWS Athena is a service that allows us to analyse various log files in S3 using standard SQL statements. So we'll understand this with an example. Let's assume that we have CloudTrail logs stored in AWS S3, and you want to perform a certain kind of analytics on top of those CloudTrail logs. Let's assume that you want to see who logged into your AWS account in the past three days.
Now, in order to perform this kind of analytics, CloudWatch is one of the solutions. However, you won't really be able to run complex queries in CloudWatch. So in order to achieve those use cases, what organisations ideally do is run a log monitoring solution like the ELK stack, Splunk, or various others. So in order for those log monitoring solutions to work, generally what organisations do is create EC2 instances, deploy a monitoring stack like Splunk or Elasticsearch with Kibana, or use various other vendors, and add a data source for S3 so that they can import the CloudTrail logs into those monitoring solutions. And once those CloudTrail logs are imported, the organisation can begin writing queries in those monitoring tools. So this is definitely one of the approaches, but it is not very efficient; it takes a lot of time, and it also leads to a certain amount of complexity as well as infrastructure cost. What AWS Athena allows us to do, instead of doing all of these steps, is query the S3 bucket directly. So you point Athena at the S3 bucket, you define the table format, and you directly run the query, and that's it. You don't really have to do all of these things. Now, initially, it might look a little complex or confusing, so let's jump into the practical and look at Athena in a practical way.
So I'll go to Athena from my AWS console; this is Athena, and Athena now supports views as well; this is one of the recent features, it was actually released last week. So we'll just discuss the things that are important for our basic understanding as security engineers as well as for the exam. So we had discussed the CloudTrail example, so let me open up CloudTrail real quick and look into where exactly the CloudTrail logs are being stored. So if I go into Trails, the S3 bucket where the CloudTrail logs are being stored is packed-trail. So let's quickly verify whether the logs are being stored there, because from Athena we will be querying the logs that hold this CloudTrail-specific information. So I'll type "packed-trail," and within this, if you see, I have the CloudTrail logs; let's go to us-east-1, and you see, I have a lot of CloudTrail logs that are stored in this bucket. Perfect. Now, this is the first piece of information that is needed. The second piece of information that will be needed is the structure for the query. So CloudTrail has a specific structure. If you just open up any of the log files and look into an event, the event has a specific structure: there is "eventVersion," then within userIdentity you have type, you have principalId, you have ARN, and various other things. So this structure is the first thing that needs to be put into AWS Athena.
So I already have a structured document, and you'll see over here that eventVersion is of type STRING; then userIdentity is a STRUCT where type is STRING, principalId is STRING, and so on. So if you see, userIdentity has type, principalId, ARN, accountId, and various others; this directly mirrors the log format, where userIdentity contains type, principalId, and ARN. So this is basically the structure of the logs that Athena can work with. Now, if you look at the last line here, we are specifying the location of the CloudTrail logs. So let's quickly copy this, and what I'll do is paste it into Athena. All right, now the last part we discussed is the location. So this is one of the parts that you will have to change. The CloudTrail bucket name is packed-trail, then AWSLogs stays the same, and after that comes the account ID. So you have packed-trail, then AWSLogs, and then you have to put the account ID, so I'll be using my account ID.
Let me quickly copy that. So in order to copy it, I'll just quickly select it, copy this account number, and paste it within my query. So this is the structure, and we are also specifying the location where my CloudTrail logs are stored. I'll go ahead and click on "Run query." So if you see, the query has been successful, and it has created a new table called "cloudtrail_logs," and within this table, you have all of these log-specific columns. Perfect. So this is the first part that we are interested in. So now that the table has been created, the next thing that we need to do is go ahead and run various types of queries. So I have a sample query, which is present over here. Let's copy this, and I'll paste it over here. So what this query is doing is a select statement. And what we are selecting is useridentity.arn. So if you look into the CloudTrail log, let me click on the event: we are selecting the userIdentity ARN. So this would mean this specific field. So this would return the ARN of the principal for which the CloudTrail event was generated.
So this is the first field that we are selecting. The second field that we are selecting is the event name. The third is the source IP address. The fourth is the event time. And all of these are references to fields of the CloudTrail event: if you look, you have an eventName, and you also have the eventTime as well as the sourceIPAddress. So these are the things that we are querying for with this specific select statement, and we are querying them from the cloudtrail_logs table. And we are limiting the result to up to 100 rows of output. So let's do one thing. Let's go ahead and run this query. So, you see the query is running; it will take a little amount of time to be executed. Perfect; it has been executed, and you can see the relevant fields. So first is the userIdentity ARN. The second is the event name. The third is the source IP address. The fourth is the event time. And you can see that I am getting a lot of events. So if you see this, this is the ARN; so someone from the root account has called GetTrailStatus. The IP address from which this event occurred is shown here, and this is the event time.
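Putting the two statements together, here is a sketch of the DDL and the select query as Python string builders. The SerDe and input-format class names follow the standard CloudTrail-on-Athena setup documented by AWS, and only a few of the CloudTrail columns are shown; treat the exact names and the packed-trail bucket as assumptions to check against your own trail.

```python
def cloudtrail_table_ddl(bucket, account_id):
    """Build a trimmed-down Athena DDL for CloudTrail logs; the full
    DDL in the AWS docs lists every CloudTrail field."""
    return f"""
CREATE EXTERNAL TABLE cloudtrail_logs (
    eventVersion STRING,
    userIdentity STRUCT<type: STRING, principalId: STRING, arn: STRING>,
    eventTime STRING,
    eventName STRING,
    sourceIPAddress STRING
)
ROW FORMAT SERDE 'com.amazon.emr.hive.serde.CloudTrailSerde'
STORED AS INPUTFORMAT 'com.amazon.emr.cloudtrail.CloudTrailInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION 's3://{bucket}/AWSLogs/{account_id}/CloudTrail/'
"""


# The sample select statement from the demo.
SELECT_QUERY = """
SELECT useridentity.arn, eventname, sourceipaddress, eventtime
FROM cloudtrail_logs
LIMIT 100
"""
```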
Now, one of the great things that you would have seen is that we did not really have to create any infrastructure, and we did not really have to pull the logs out of the CloudTrail events. All of these things are automatically taken care of by AWS Athena. And this is the real magic of Athena. So if you have a huge amount of logs stored in S3 and you do not really want the complex setup of installing and configuring a log monitoring server, all you have to do is specify the S3 bucket name and run the SQL query statements, and all the magic will be done by the AWS Athena service. So, this is the high-level overview of AWS Athena. One thing that I want to share before we conclude this lecture is a real-world use case. In one of the organisations that I recently joined, the organisation had received a huge spike of traffic, due to which a lot of production systems went down and basically the entire application was down. Now, the question that came up was whether that huge spike was genuine or whether it was part of an attack. Since that organisation did not really have a great infrastructure or any log monitoring solution, we decided to use AWS Athena to query the VPC Flow Logs for certain kinds of information. So, as you may know, VPC Flow Logs can also be stored in S3.
And what information did we query? We basically queried the number of accepted and rejected connections an hour before the spike occurred. We also queried the number of accepted and rejected connections one hour after, or I would say during, the time the spike occurred. We also queried for the IP addresses with the highest number of reject records, and for the elastic network interface on which the highest spike had occurred. And from this, we came to know that there were around five specific IP addresses that were trying to do an HTTP-based attack to take the website down. With the information we received from Athena, it actually took us only around ten minutes to find this out. And we decided to block those IP addresses in the network ACL as part of the initial blocking investigation. So this is one of the real-world use cases, and there are a number of possibilities that can be realised with the help of the AWS Athena service.
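For reference, the reject-count query we ran can be sketched like this, assuming the flow logs were registered in Athena as a table named vpc_flow_logs with the standard flow-log columns (sourceaddress, action, starttime); both the table name and column names are assumptions to adapt to your own DDL.

```python
def top_rejected_sources(start_epoch, end_epoch, limit=10):
    """Build an Athena query counting REJECT records per source IP
    inside a time window (epoch seconds, as in the flow-log start field)."""
    return f"""
SELECT sourceaddress, count(*) AS rejected
FROM vpc_flow_logs
WHERE action = 'REJECT'
  AND starttime BETWEEN {start_epoch} AND {end_epoch}
GROUP BY sourceaddress
ORDER BY rejected DESC
LIMIT {limit}
"""
```

Running this for the hour before and the hour during the spike, and comparing the two result sets, surfaces exactly the kind of attacking IPs described above.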
20. Overview of AWS CloudTrail
Hey everyone, and welcome back. In today's video, we will be discussing CloudTrail. Now, CloudTrail is one of the very important services, and typically, this is the first service that I generally enable whenever I create an AWS account. So let's go ahead and understand more about CloudTrail. Now, basically, it is very important for us to record each and every activity that is happening within your infrastructure, whether within your cloud service provider or even on your servers.
So it might sometimes happen that your servers are breached, and if you do not know what activities were happening, you will not be able to find out the root cause behind those breaches. And that has actually happened to a lot of organisations. Hence, it is very important to record each and every activity that is going on within your account. Now, CloudTrail is a service that allows us to record every API call that happens within your AWS account. So let's understand this with an example where you have an auditor who is auditing your organisation, and he asks you a question: "Show me what Annie did on January 3, 2017 between 10:00 a.m. and 2:00 p.m." Now, you will only be able to answer this if you have CloudTrail enabled.
Now, do remember that this question is specific to an AWS account. If the auditor asks the question as "What did Annie do inside the server in this time frame?" then you need a different mechanism for that. But as far as AWS is concerned, CloudTrail is something that will help you answer this specific question. So how CloudTrail works is that you get something similar to this table, where it says that at 3:50 p.m. a user named James logged in, Annie modified a security group at 7:30 p.m., and Suzanne created a new EC2 instance at 11:00 p.m. So from this, you can say, all right, in this specific timeframe, Annie modified a security group. So this is a very simple table that can give you a glimpse of what CloudTrail is all about. So let's do one thing; let's go ahead and understand this in a practical manner. So I'm in my AWS console, and basically what I did a few minutes ago, before recording the video, was start the demo instance, and I just wanted to show you how exactly it might look in CloudTrail. So I'll go to Services and type CloudTrail, and within the event history, I already have CloudTrail enabled.
We will also look into how we can enable it. But for demo purposes, CloudTrail has already been enabled. So now, if you look here, you have the event time, the username, the event name, the resource type, and the resource name. So the first event name here is StartInstances, and if you click here, it will basically give you a lot of aspects. One of the more detailed ones is the View event option. So if you click on "View event," you will get the actual JSON of what exactly happened. So let's understand this. It is basically saying that this is the ARN; the ARN is of root, and if you go a bit down, the event source is ec2.amazonaws.com. That means that this specific event happened on this service, which is EC2. Now, what was the event that happened here?
So the event that happened here is the StartInstances event. Where did the instance start? It started in the us-east-1 region. Now who started it, and from which IP address? This is the IP address of the user who started the EC2 instance. And the final question is: what is the instance ID of the instance that was started by this specific user? And this is the instance ID. So the instance ID shown here matches the one in the console. So basically, from this CloudTrail log, I can say that a root user with this IP address started an EC2 instance in the North Virginia region, and the EC2 instance ID is this. So this is one of the sample CloudTrail events. So you see there are a lot of CloudTrail events, and each one will have a similar structure. So coming back to how we can enable CloudTrail: in order to do that, you need to go to the CloudTrail dashboard, which is here, and you need to go to Trails. Now, within Trails, you see there is one trail that was created, called demo-kplabs, and it basically has an association with an S3 bucket.
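The fields we just walked through can be pulled out of the event JSON with a few lines of Python; a sketch for the StartInstances case (the nested responseElements path follows the shape of real CloudTrail EC2 events, but verify it against your own logs):

```python
import json


def summarise_cloudtrail_event(raw):
    """Extract the fields we read off manually in the console."""
    ev = json.loads(raw)
    items = (ev.get("responseElements", {})
               .get("instancesSet", {})
               .get("items", []))
    return {
        "who": ev["userIdentity"]["arn"],
        "service": ev["eventSource"],
        "action": ev["eventName"],
        "region": ev["awsRegion"],
        "from_ip": ev["sourceIPAddress"],
        "instances": [i["instanceId"] for i in items],
    }
```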
So basically, whatever event history you see within CloudTrail does not really get stored for an unlimited amount of time. In fact, it says that you can view the last 90 days of events. Before this, you could only view up to seven days, but AWS has increased it to 90 days, which is very beneficial. But what really happens after 90 days? So after 90 days, these events only remain in the S3 bucket, demo-kplabs, which is specified within the configuration. So let's look into this specific S3 bucket. So now you see that within this trail, we're more interested in us-east-1. So basically, you will get the CloudTrail events associated with every region. So if you just want to see what events happened within the us-east-1 region, which is North Virginia, you can just click over here; it gives you 2018/06/24, and all of these are compressed files. So when you download one, you'll have to uncompress it, and you will see the JSON events that we saw within the CloudTrail console just a moment ago. So in order to create a trail, what you need to do is come to the Trails tab and click on Create Trail. Now, you need to give the trail a name. I'll say kplabs-cloudtrail, and you have the option of applying it to all regions.
This is very important. Make sure that this option is always selected, which applies the trail to all regions. Now, for the management events, we need to log all the read and write events, so I'll select All. And for the data events, you can select all the S3 buckets within your account. So basically, if you want to record the S3 object-level API activity, then you need to select this very important option. Make sure that you also select this within the Lambda tab, where you can record the Invoke API operations that are happening. So make sure you select and log all the current and future functions within the data event field. Now, we already know that CloudTrail will only store a maximum of 90 days. So it's always recommended to never delete your CloudTrail activity, at least for a period of one year. Now, where it will be stored in S3 is defined by the storage location here. So you say, "Create a new S3 bucket." You specify the bucket name; I'll say kplabs-cloudtrail-demo. So this is the bucket name, and then you can go ahead and click Create. So once your trail is created, which is kplabs-cloudtrail, if you go to the event history, you should be able to see the CloudTrail activity in your dashboard.
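The same trail can be created from code. A hedged boto3 sketch, assuming the S3 bucket already exists with the required CloudTrail bucket policy attached; the commented-out call is what you would run with credentials configured:

```python
def trail_settings(name, bucket):
    """Parameters for cloudtrail.create_trail, mirroring the console
    choices: apply to all regions and include global service events."""
    return {
        "Name": name,
        "S3BucketName": bucket,
        "IsMultiRegionTrail": True,        # "apply trail to all regions"
        "IncludeGlobalServiceEvents": True,
    }


# With credentials in place you would run something like:
# import boto3
# boto3.client("cloudtrail").create_trail(
#     **trail_settings("kplabs-cloudtrail", "kplabs-cloudtrail-demo"))
```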
Now, do remember that if you enable it right now, you will not get the past events. You'll only get the events from the time at which you enabled CloudTrail. And also remember that the CloudTrail events that appear over here are not instantaneous. It might take a few minutes for an event to appear here. So by that, I mean that if I stop this EC2 instance, it will not immediately show up here. It will take a certain amount of time, typically a few minutes, for that event to appear within the CloudTrail console.
21. Improved Governance - AWS Config 01
Hi everyone, and welcome back to the video series. So today we are going to talk about AWS Config. Now, before we go ahead and understand what AWS Config is, let's look at a scenario that will help us understand why we need Config in the first place. So, one thing that is universal across most organisations is that infrastructure keeps changing.
So if you have an enterprise, it might be possible that every week there will be a new application coming in. And for that new application, you might have to create a new EC2 instance, a new RDS database, new SQS queues, et cetera. So this is the same across most organisations. So let's look at a very simple example where you have a new enterprise with a new AWS account. So in week one, you have a couple of EC2 instances where your website is running, and suddenly you find there are a lot of users or a lot of visitors coming to your website. So in week two, you increase the number of EC2 instances. And what you did was you also added an Elastic Load Balancer. So this is something that you did in week two.
Now the traffic kept on increasing, and so in week three you added a few more things. So you added many EC2 instances, you had an Elastic Load Balancer, you created an S3 bucket, maybe for content delivery, and you also created a relational database, or RDS, within your Amazon account. So your infrastructure changes a lot every week, and what happens is that your CFO or CEO comes and says, "Show me how the infrastructure looked a week ago." So you cannot show him the CloudTrail logs, and this is even more important for auditors, because if you really want to see what exactly changed from week one to week two to week three, just looking at the logs and having to manually draw diagrams of what changed is not very feasible. And this is one of the reasons why AWS came up with the Config service. So what the Config service does is keep track of the inventory as well as the inventory changes. So it will show you that on this day, this was the inventory, and on the next day, these were the changes that happened within your AWS account. So it becomes very easy for you to track the changes. So let's do one thing: let's go to the AWS account and see what Config looks like.
So let me open AWS Config. Okay. So let's click on "Get started." It's very simple to configure. So by default, it is tracking all the resources supported in this region. But we also want to include global resources such as IAM. So I select this; now, it is asking me for the bucket name. So let me just give it a sample bucket name. Basically, what the bucket will do is that AWS Config will keep the configuration snapshots within that AWS S3 bucket. So let's say after one year you want to see the backdated data from a few months ago; you could actually open the logs from the S3 bucket. So the next thing you have to configure is the SNS topic, and this is the role. I will click on Next; we'll talk about the Config rules in the upcoming lectures. But for the time being, let's finish configuring AWS Config. So it is setting up AWS Config. Generally, it might take some time for AWS Config to be configured because, once configured, it will take all the inventory from your AWS account. In my case, the AWS account is pretty empty; I hardly have anything in this test account, so it loaded up pretty quickly.
Now, one thing that is important to remember is that AWS Config does not support all the resources. It only supports specific resources related to CloudTrail, EC2, Elastic Load Balancer, IAM, RDS, and a few more. So not all resources are supported in AWS Config. The second important thing to remember is that AWS Config is region-wise. It is tied to a specific region and is not global. So in my case, I have my infrastructure within the North Virginia region. So let me go to the North Virginia region so that we can actually see things in a much better way. So I'm in the North Virginia region. Let's do one thing. Let me select the security group over here, and let me go ahead and click on Look up. Okay, so what it is showing me is all the security groups that are present in this particular region. You can also specify, say, an instance, and then go ahead and click on Look up, and it will show you all the data related to the EC2 instances and the security groups. So these are the two EC2 instances, and these are the security groups that are available.
So let me open EC2 as well. Okay, let's do one thing. Now I have one security group called "OpenVPN Access Server." Let me click on the security group, and let's take the security group ID. I'll take the security group ID, let me unselect the instance, and let me look up this particular security group ID. So there is a column called "Configuration timeline"; let's click here. So what this configuration timeline will do is show you any related changes that were made to this particular security group. Since we enabled Config only a few minutes ago, it will not show any configuration changes yet. But if you come down here, there are two very important features to remember. First are the relationships. So relationships mean: which instances or resources is this security group attached to or related to? So if I click over here, it says that this security group is connected to this network interface, it is attached to this EC2 instance, and it is part of this particular VPC. The second important field is changes. So within this, it says "configuration changes," where if you modify some aspect of the security group, it will show that these are the aspects that were changed. So let me give you a practical example that will make things much clearer.
So let me add a few rules over here. Let me add a new inbound rule with a sample CIDR range, and let me delete the port 22 rule, and I'll click on Save. So we changed some aspects of this security group, and these changes should be reflected in the AWS Config console. So let me do one more thing; let me attach that security group to another instance as well. I'll change the security group. Now, generally, whenever you make changes, they will not appear instantaneously; it will take a few minutes before they are reflected in AWS Config. Let me add the security group to one more instance over here.
Okay, so what we did was change the security group, which is the OpenVPN Access Server, and also attach the security group to a different instance. These changes should be reflected in the AWS Config console. Again, it might take some time, I would say a few minutes, before they are reflected over here. So let's pause this video, and I'll be back in a few minutes. Okay, it has been around five minutes, so let me just refresh this page and see if the configuration has come up. Okay, if you see over here, it is showing me that some changes have been made to this particular security group. You can see there is a difference in the time. So let's look at what has changed. Let's go to the "Changes" section over here, and it shows me the changes that have been made to this particular security group.
So it shows me the exact details about what changed and what got removed. Along with that, if you remember, we had also attached this particular security group to a new EC2 instance. This falls under the relationship data. So it shows that this particular security group has been attached to one more network interface. If you remember, security groups are attached to the network interface, and this is shown in the relationship status. So this is the EC2 network interface. Now we have two, whereas earlier there was only one, and within the changes section, you get the security group-related changes as well as the network interface it got connected to.
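Conceptually, the "Changes" view is a diff between two configuration snapshots. Here is a small sketch of that idea, assuming a simplified rule representation of (protocol, port, CIDR) tuples; the real configuration items Config records are much richer, so this is illustrative only.

```python
def diff_rules(old_rules, new_rules):
    """Compute which ingress rules were added and removed between two
    security-group snapshots, similar in spirit to Config's Changes view."""
    old, new = set(old_rules), set(new_rules)
    return {"added": sorted(new - old), "removed": sorted(old - new)}

# Sample snapshots: the port 22 rule was removed and a /16 range was added
# (fabricated data matching the demo above).
before = [("tcp", 22, "0.0.0.0/0"), ("tcp", 443, "0.0.0.0/0")]
after = [("tcp", 443, "0.0.0.0/0"), ("tcp", 443, "91.16.75.0/16")]
print(diff_rules(before, after))
```

The set difference in each direction is exactly the "what got added, what got removed" information the console surfaces.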
Now, if you are wondering where it is getting the data from, it is actually getting the data from our old friend, CloudTrail. If you remember, all the API-related activity, anything that you do within your AWS account, gets logged via CloudTrail. So what Config does is pull the CloudTrail-related data and then interpret that data into a form that is easier for us to look at. It also shows you the time at which the events were captured. It is very important to remember that Config matters a great deal for enterprises and even medium-scale organisations. So coming back to the PowerPoint presentation, I hope you understood the basics of why AWS Config is required. Let's look into a few use cases where AWS Config might help you. One use case: let's say your infrastructure cost has spiked suddenly, and your chief financial officer (CFO) wants to see what exactly has changed in the past three weeks.
So instead of showing raw logs, you can directly open up Config and show him what has been changed, and he might actually be impressed as well. The second use case: let's say you are in charge of DevOps at XYZ organization. Last night everything was working fine, but suddenly in the morning users are reporting that they are not able to access the website. So you know there was some change related to the EC2 instance or the security group. In this case, you can use Config to see what exactly changed from last night to this morning. These are a few use cases for AWS Config, but there are a lot more features that Config provides that are really amazing, and we'll be talking about some of them in the next video.
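The "what changed overnight" question boils down to filtering a configuration history by a time window. A minimal sketch of that, using fabricated history entries rather than real Config data:

```python
from datetime import datetime

def changes_between(history, start, end):
    """Return configuration items captured in [start, end):
    the 'what changed overnight' question from the use case above."""
    return [item for item in history if start <= item["captureTime"] < end]

# Fabricated history entries for illustration.
history = [
    {"captureTime": datetime(2024, 1, 9, 23, 50), "change": "port 22 opened"},
    {"captureTime": datetime(2024, 1, 10, 2, 15), "change": "SG detached"},
    {"captureTime": datetime(2024, 1, 8, 11, 0), "change": "tag updated"},
]
overnight = changes_between(
    history, datetime(2024, 1, 9, 22, 0), datetime(2024, 1, 10, 8, 0)
)
print([c["change"] for c in overnight])
```

In practice you would scope the same window in the Config console's timeline view instead of writing code.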
22. Improved Governance - AWS Config 02
Hey everyone, and welcome to Part 2 of AWS Config. Now, in the previous lecture we looked into the basics of AWS Config, and we also looked into how AWS Config can help us track infrastructure changes. So today we will look into more features of AWS Config, and there is one very amazing feature called "compliance check" that Config provides. So let's understand what that means. Again, monitoring infrastructure-related changes alone is not enough. As a security specialist, we should be monitoring the security aspect as well.
So there are various use cases related to security best practices. For example: best practices should be followed for all the S3 buckets; root MFA should be enabled; security groups should not have port 22 (or maybe another port like 3306, etc.) open; CloudTrail must be enabled; and one more rule: no unused EIP should be present, which is part of the cost factor as well. So these are five points related to security as well as cost optimization that are important. Now, how do you actually monitor all of these things? This is just a sample of five; there can be hundreds of different points. So there should be some kind of centralised dashboard that can tell you whether your account is compliant with all of these rules.
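The checklist above can be sketched as a tiny compliance dashboard: run each named check against an account snapshot and report COMPLIANT or NON_COMPLIANT. The account snapshot, check names, and their logic below are all illustrative assumptions, a minimal stand-in for what Config rules evaluate for real.

```python
def evaluate(account, checks):
    """Run each named check and report a per-rule compliance status,
    like a minimal version of the Config rules dashboard."""
    return {name: ("COMPLIANT" if check(account) else "NON_COMPLIANT")
            for name, check in checks.items()}

# Fabricated account snapshot for illustration.
account = {
    "root_mfa_enabled": True,
    "cloudtrail_enabled": True,
    "open_ssh_sgs": ["sg-0a1b2c3d"],  # a group with port 22 open
    "unattached_eips": [],
}

checks = {
    "root-mfa-enabled": lambda a: a["root_mfa_enabled"],
    "cloudtrail-enabled": lambda a: a["cloudtrail_enabled"],
    "restricted-ssh": lambda a: not a["open_ssh_sgs"],
    "eip-attached": lambda a: not a["unattached_eips"],
}

print(evaluate(account, checks))
```

This is exactly the shape of the dashboard: one row per rule, with a pass/fail status you can show an auditor.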
And this is what AWS Config allows us to do. Based on the use cases that you configure, AWS Config can show you the compliance status. So this is the compliance status that you see now: restricted SSH is compliant, so SSH is not open to all; the EIPs that exist are attached, so there is no unused EIP; root MFA is enabled, so it is compliant. However, there are certain resources that are noncompliant here. So directly by looking at the controls, you can see whether your infrastructure is compliant or not. And generally, if an auditor comes, you can directly show the auditor this page, provided you have all the documentation over here. So this is what AWS Config allows us to do. Now let's look at how we can configure these rules. Going back, let's go to AWS Config. These are the resources in the inventory. If you look into the first tab over here, it says Rules. By default, Amazon gives us a lot of rules that we can use within our infrastructure. At the time of recording, there are 32 rules that come by default with Config. These rules check various things like IAM, EC2 instances, root MFA, S3 buckets, et cetera.
So let's do one thing: let's enable certain rules out here. Let me enable EC2 detailed monitoring. Okay, let me enable this particular rule. Okay, so it is being evaluated. Let's add a few more rules over here. Let's go to the next part. Okay, S3 bucket logging enabled, and S3 bucket versioning enabled. We want all the S3 buckets to have versioning enabled, so I'll click on Save and add this particular rule as well. I'll click on "Add rule." Let's add a few more rules. Let's see: CloudTrail enabled. This is again a very important rule that should be there, so I'll add this rule. Let me add a few more rules so that our dashboard looks pretty nice. Okay, let me go to the next step: EIP attached. Again, this is very important, because specifically on the free tier, if you have an EIP that is not attached to an EC2 instance, you will be charged for that EIP. So this rule is very important to have, at least for free tier usage. A lot of people get charged because they have EIPs that are not attached to any EC2 instance. So just remember that you should have every EIP attached.
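The EIP-attached check itself is simple: an Elastic IP with no association is the one you pay for. Here is a sketch over sample data whose shape loosely follows the EC2 describe-addresses output; the allocation IDs are fabricated.

```python
def unattached_eips(addresses):
    """Return allocation IDs of Elastic IPs that are not associated with
    any instance or network interface: the ones that incur charges."""
    return [a["AllocationId"] for a in addresses if "AssociationId" not in a]

# Sample data (fabricated IDs); shape loosely mirrors describe-addresses.
addresses = [
    {"AllocationId": "eipalloc-01", "AssociationId": "eipassoc-aa"},
    {"AllocationId": "eipalloc-02"},  # no association: unused, billable
]
print(unattached_eips(addresses))
```

The managed "EIP attached" rule flags exactly these unassociated addresses as noncompliant resources.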
I'll click on Save. So we have around four rules here, and you can see it is showing me the compliant as well as the noncompliant status. For EC2 instance detailed monitoring, it is saying noncompliant, and there are three resources that are not compliant. S3 bucket versioning enabled: again, there are two noncompliant resources. CloudTrail enabled: yes, we have CloudTrail enabled, so it is showing as compliant, and it will also report to me whether the EIPs are attached or not. So this is one of the ways in which you can configure AWS Config rules. Now, as we discussed, there are around 32 default rules that come built in. What happens if you want to add more rules? Well, you definitely can: you can put those rules in Lambda and connect them to the Config service. So here you see that there is one EIP that is not attached. Okay, this is dangerous, because I will be charged for this particular unused EIP. So I should remove that EIP, and you should do the same if you have an EIP that is not attached.
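On the custom-rules point: a Lambda-backed Config rule is essentially a function that looks at a configuration item and returns a compliance verdict. Here is a sketch of that evaluation logic for a "no SSH open to the world" rule, run against a fabricated configuration item; a real custom rule would additionally parse the Config event and report the verdict back through the PutEvaluations API, which is omitted here so the logic stays locally runnable.

```python
def evaluate_compliance(configuration_item):
    """Flag security groups that allow inbound port 22 from 0.0.0.0/0.
    Sketch of custom-rule logic only; reporting back to Config is omitted."""
    for perm in configuration_item.get("ipPermissions", []):
        open_to_world = any(r.get("cidrIp") == "0.0.0.0/0"
                            for r in perm.get("ipRanges", []))
        if perm.get("fromPort") == 22 and open_to_world:
            return "NON_COMPLIANT"
    return "COMPLIANT"

# Fabricated configuration item: SSH open to the world.
item = {"ipPermissions": [
    {"fromPort": 22, "toPort": 22, "ipRanges": [{"cidrIp": "0.0.0.0/0"}]},
]}
print(evaluate_compliance(item))
```

The field names here mimic a security group's configuration item, but treat the exact shape as an assumption; check the real item your rule receives before relying on it.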
So there is one noncompliant resource that you see. I have four EIPs, among which there is one that is noncompliant. So let me go to this particular EIP. Okay, so this is the EIP. Let me go to EC2, then Elastic IPs, and paste the EIP, and you will see that this EIP is not attached to any of the instances. So why keep it? Just release it and you'll save the cost, so I'll release this particular EIP. So this is the basic information about AWS Config. Now, there is one more important thing that you should remember. We already discussed the CIS benchmark, and there is a very nice GitHub repository that contains a lot of AWS Config rules that you should have within your AWS account, specifically if you're running production services and security is important to you. So if you go to the rules.md file over here, this file basically tells you which rules are present within this particular GitHub repository.
So you see, there are a lot of rules present related to the IAM password policy, key rotation, whether the IAM user has MFA enabled or not, whether the VPC flow log is enabled, and so many other things. There are around 34 rules present over here, and around 32 rules present by default within AWS Config. AWS keeps updating this rule set, so you can add rules as they are released, or, if something is not yet covered, you can write your own rules as Lambda functions. So this is the basic information about the AWS Config service. I hope this has been useful for you, and I would really encourage you to practise this once. And if you are managing the AWS infrastructure of an organization, I really recommend that you have some kind of dashboard that shows you the compliance status for all the resources. So this is it. I hope this has been useful for you, and I'd like to thank you for viewing.