AZ-204 Microsoft Azure Developer Associate – Monitor, troubleshoot, and optimize solutions – Part 1
- AZ-203/204 – Auto-scaling Azure Web Apps
Hi and welcome back. Now in this chapter, let's talk about Azure Web App auto-scaling. So, remember, as part of the Basic app service plan, you can get allocated up to three virtual machines that can be used to host the web applications in your Azure Web App. Now let's say that you've allocated one virtual machine to your Azure Web App. And maybe you go on to the metrics and see that the CPU percentage is going beyond 70% to 80%, and that's slowing down the web applications which are being hosted on your Azure Web App, basically on your virtual machine. Then what you do is add another virtual machine to your Azure Web App to balance the load.
So now your web applications can run on multiple virtual machines. But with the Basic app service plan, you have to do all of this manually. So that's the key word: manually. You have to go yourself and see if the CPU threshold is being met. You can generate alerts, but then you have to go ahead and add the instance in the configuration yourself. But in the world of automation, it is very important to enable auto-scaling: your app service plan should enable the automatic scaling of virtual machines without any intervention. That can be done with the autoscale feature, which is available from the Standard app service plan or higher. With this in place, you can automatically scale the number of virtual machines by adding different conditions and rules.
So you can say that if the CPU percentage goes beyond 70%, then go ahead and increase the instance count of your app service plan by one. So this is the benefit of auto-scaling. Now let's go on to the Azure portal and see how we can enable auto-scaling. So here we are in Azure. If you go on to the Scale out section for your app service plan, this is the scale out feature which is available as part of your Azure Web App. Remember that scale up is used to increase the tier of your app service plan, while scale out is used to enable auto-scaling, so keep this differentiation in mind. Now, over here, you can enable autoscale. Within the autoscale setting, you can add different conditions; for now, there is only one default condition in place.
Now what you have to do is add a rule: based on what should you autoscale? When should you change the number of virtual machines hosting your web applications? So first you have to choose the metric source. Do you want Azure to check the metrics of your App Service plan itself? Or maybe you want to check the metrics of another service. Let's say your web application is working with another service, say a storage queue. Remember, in Azure storage accounts you can create something known as a queue. So maybe there's another module sending messages to the queue, and your web application is reading messages from the queue. Now, let's say that there are a lot of messages piling up in the queue. That means your web application, or rather the underlying virtual machine, has reached a limit.
So maybe the CPU threshold is going high for the underlying virtual machine, and that's why your Azure Web App is not performing as it should; it's not able to keep up with all the messages in that queue. So what you can do is choose the metric source as a storage queue. You can base your scaling on other resources in Azure; it's not necessary that you scale out your web app only based on the metrics of the App Service plan. Let's go back on to the current resource. So now it's looking at your App Service plan, that is, at all the web applications being hosted across your virtual machines. Next you choose the metric you want to measure: the CPU percentage, the disk queue length, or the HTTP queue length; there are different metrics available.
So when you choose the CPU percentage, you'll also get a graph of the current CPU utilization of your App Service plan. Remember, this is the CPU utilization aggregated across all the web apps which are running on your virtual machines as part of your App Service plan. Next you decide on the operator and the threshold. So if the CPU percentage goes beyond 70% over a duration of, let's say, ten minutes, then what action do you want to perform? Do you want to increase the count by a number, increase the percent by a number, or increase the count to a fixed value? Or maybe you want to perform a decrease operation? For now, let's choose "Increase count by" one. And there is a cooldown time period.
So here we are saying that when you add a virtual machine based on this condition, give a cooldown time of five minutes so that the web applications can stabilize with the new virtual machine added. Now, let me go ahead and click on Add. So now we've got a scale out condition. Very important: this is a scale out condition. Remember, when you add a virtual machine to your App Service plan based on the scale out condition, you will incur a cost for the running time of that new virtual machine. Now let's say that the load on your application has reduced and you don't need the extra virtual machine. So the scale out condition was met, the virtual machine was added, but now the demand on your website has decreased, and you would still be paying for the extra virtual machine.
So in such a case, you also have to add a scale in rule: when the demand is less, make sure that you decrease the number of virtual machines in your app service plan. Again, you can choose the same resource and the same metric. If you go down, you can now say that if the CPU percentage is less than or equal to 70%, so it's going down, then go ahead and decrease the count by one instance, and click on Add. So now you also have a scale in rule. Now please also ensure that when you add the scaling conditions, you pay attention to the minimum, maximum, and default instance limits.
So make sure that you specify a large enough value for maximum. Otherwise, even if the scale out condition is met, it will not scale out, because the number of virtual machines already running has reached your maximum. So let's say in the beginning you had one virtual machine running as part of your app service plan, and the maximum was set to two. Then the scale out condition was met and the count became two. Now, because that is the maximum, even if the scale out condition is met again, it will not trigger the addition of any more virtual machines, right? So it's important to understand the different concepts behind the autoscale feature, which is available for Azure Web Apps. This marks the end of this chapter.
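To connect these portal steps to code, here is a minimal sketch of the same configuration using the azure-mgmt-monitor Python SDK. This is illustrative and not from the course itself: the subscription ID, resource group, plan name, and region are placeholders, and the rule values (70% CPU over ten minutes, change count by one, five-minute cooldown, instance limits of 1/3/1) simply mirror the demo above.

```python
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    AutoscaleSettingResource, AutoscaleProfile, ScaleCapacity,
    ScaleRule, MetricTrigger, ScaleAction,
)

subscription_id = "<subscription-id>"  # placeholder
plan_id = (  # resource ID of the App Service plan being scaled
    f"/subscriptions/{subscription_id}/resourceGroups/<rg>"
    "/providers/Microsoft.Web/serverfarms/<plan-name>"
)

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

def cpu_rule(operator: str, direction: str) -> ScaleRule:
    # A rule pairs a metric trigger (what to watch) with a scale action (what to do).
    return ScaleRule(
        metric_trigger=MetricTrigger(
            metric_name="CpuPercentage",        # CPU metric of the App Service plan
            metric_resource_uri=plan_id,
            time_grain=timedelta(minutes=1),    # sampling interval
            statistic="Average",
            time_window=timedelta(minutes=10),  # the ten-minute duration from the demo
            time_aggregation="Average",
            operator=operator,
            threshold=70,
        ),
        scale_action=ScaleAction(
            direction=direction,                # "Increase" = scale out, "Decrease" = scale in
            type="ChangeCount",
            value="1",                          # change the instance count by one
            cooldown=timedelta(minutes=5),      # let the web apps stabilize first
        ),
    )

profile = AutoscaleProfile(
    name="default",
    # The minimum/maximum/default instance limits discussed above.
    capacity=ScaleCapacity(minimum="1", maximum="3", default="1"),
    rules=[
        cpu_rule("GreaterThan", "Increase"),      # scale out above 70% CPU
        cpu_rule("LessThanOrEqual", "Decrease"),  # scale in at or below 70% CPU
    ],
)

client.autoscale_settings.create_or_update(
    "<rg>", "webapp-autoscale",
    AutoscaleSettingResource(location="<region>", profiles=[profile],
                             enabled=True, target_resource_uri=plan_id),
)
```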
- AZ-203/204 – Lab – Auto-scaling a web app
Right, so in the last chapter we looked at auto-scaling. Now let's look at an example. So over here I have a web app in place. If I go on to the scale out feature, it's saying that scale out is not possible. The reason for this is that I created this web app using the Shared app service plan, and remember that automatic scaling is available only from the Standard app service plan and higher. So the first thing to do in such a case is to scale up your app service plan, which is very easy. Currently it's using the D1 shared-infrastructure app service plan. We can go on to the Production tab, choose the Standard S1 plan, and hit Apply. So now it's going to go ahead and update our app service plan. Right, so it's done, as simple as that.
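If you prefer to do this scale-up from code rather than the portal, here is a minimal sketch using the azure-mgmt-web Python SDK; the subscription, resource group, plan name, and region are placeholders and must match your existing plan.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient
from azure.mgmt.web.models import AppServicePlan, SkuDescription

client = WebSiteManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Re-submit the plan with the Standard S1 SKU; autoscale needs Standard or higher.
poller = client.app_service_plans.begin_create_or_update(
    "<rg>", "<plan-name>",
    AppServicePlan(
        location="<region>",  # must match the plan's existing region
        sku=SkuDescription(name="S1", tier="Standard", size="S1", capacity=1),
    ),
)
print(poller.result().sku.name)  # should print: S1
```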
Now, if you go on to scale out, you can see that you can configure auto-scaling. So let's go ahead and click on custom autoscale. First we have our scaling condition; let's use the default condition. We can then go ahead and add a rule. So here we're adding a rule: based on what should we scale the underlying virtual machines which are hosting our web applications? We can choose the metric source as the current resource. So we could look at the CPU percentage of this entire app service plan, see how much CPU is being consumed by the web applications as part of this app service plan, and then scale out accordingly. But we can also choose other resources available on Azure as the metric source, and this is important from an exam perspective.
So in our case, what we're going to do is choose the metric source as a storage queue. Let's assume that we have a storage account in place. So let me open all resources in a new tab. I only have a storage account in place. Let's go over to the queue service, add a queue, give it a name, and click on OK. So what we're going to say is: let's scale the number of virtual machines in our app service plan based on the number of messages in this particular queue. Before we go back to our scaling rule, let me just close this for the moment and go back on to the overview. So currently we can see our app service plan is S1, and we can see the number of virtual machines which are part of the app service plan.
So if you click on this, you'll actually go on to the plan itself. If you look at the properties, you can see that the instance count is one. Let's keep a note of this. Now let's go back to our app, go back on to scale out, go on to custom autoscale, and add a rule. Let's choose storage queue, choose our storage account, choose our queue, and choose the metric name as approximate message count. So it's going to gather the average stats for the message count every one minute. Here I'm going to mention the threshold as one; since this is just a demo, I'm going to say that when the number of messages in the queue is greater than one, then let's scale out the number of virtual machines hosting this app service plan. Again, this will be over a duration of ten minutes.
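For reference, the rule we just configured, including the scale action set in the next step, would look roughly like this with the same azure-mgmt-monitor SDK used earlier. This is an assumption-laden sketch: the metric name, the queue-service resource URI format, and the QueueName dimension are based on what the portal shows here, so verify them against your own resources before relying on this.

```python
from datetime import timedelta
from azure.mgmt.monitor.models import (
    ScaleRule, MetricTrigger, ScaleAction, ScaleRuleMetricDimension,
)

# Assumed resource URI of the storage account's queue service (placeholder values).
queue_service_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Storage/storageAccounts/<account>/queueServices/default"
)

scale_out_on_queue = ScaleRule(
    metric_trigger=MetricTrigger(
        metric_name="ApproximateMessageCount",  # the metric picked in the portal
        metric_resource_uri=queue_service_id,   # a *different* resource than the plan
        time_grain=timedelta(minutes=1),        # stats gathered every minute
        statistic="Average",
        time_window=timedelta(minutes=10),      # the ten-minute duration
        time_aggregation="Average",
        operator="GreaterThan",
        threshold=1,                            # demo threshold: more than one message
        dimensions=[ScaleRuleMetricDimension(   # scope the metric to our one queue
            dimension_name="QueueName", operator="Equals", values=["<queue-name>"],
        )],
    ),
    scale_action=ScaleAction(direction="Increase", type="ChangeCount",
                             value="1", cooldown=timedelta(minutes=5)),
)
```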
I'll say increase count by one, and I'll leave the cooldown period as five minutes. Let me click on Add. Right, so I've got the scale out rule here. I've even got the instance limits; I'm making sure that I have a maximum of two. We can also add a scale in rule. Again, we choose the resource as our storage queue, choose our storage account, and choose our queue. And what I'll say is that when the number of messages is less than one, let's go ahead and decrease the count by one. So let's add that scale in rule as well, and click on save. Right, so our auto-scaling configuration is in place. Now let's go back onto the app service plan and on to the properties. We still have the instance count as one. Let's go on to the queue, add a message, and then add another message.
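If you'd rather push the test messages from code instead of the portal, a minimal sketch with the azure-storage-queue package could look like this; the connection string and queue name are placeholders for the account and queue created above.

```python
from azure.storage.queue import QueueClient

# Connect to the queue created earlier (placeholder connection string and name).
queue = QueueClient.from_connection_string("<connection-string>", "<queue-name>")

# Two messages pushes the approximate message count above our threshold of one,
# which should satisfy the scale-out condition over the ten-minute window.
queue.send_message("test message 1")
queue.send_message("test message 2")

# Later, emptying the queue drops the count below one, letting the
# scale-in rule kick in (as demonstrated next):
# queue.clear_messages()
```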
Right, so now we've got two messages as part of the queue. If we go back to our app service plan and refresh the page after some time, you can see the instance count has become two. So based on that rule, it has seen that the number of messages in our queue is greater than one, and that's why it has gone ahead and incremented the number of instances in the app service plan. So this is the scale out condition in action. Now let me go ahead and clear the messages in the queue, so I have no messages in my queue. After some time, when you come back, you can see that the instance count has become one again. So not only can you scale a resource based on the metrics of the resource itself, you can also scale based on the metrics of other resources in Azure, right? So this marks the end of this lab.
- AZ-203/204 – What is Azure Cache for Redis
Hi and welcome back. Now, before we go on to the topic of Azure Cache for Redis, let me quickly explain some concepts behind why you would want to have Redis in the first place. Now, Redis itself is software that's already available; what Azure has done is provide it as a service on the Azure platform. So in the background, it's still running the same Redis engine. Now, what's the need for having a cache in the first place? Let's say you have an application that's interacting with a database. In the database, you could have a whole lot of data that, remember, is stored on physical storage. Now let's say that you have frequently accessed queries; say you have a query to find out the highest placed order.
Maybe you have an orders table, and you want to find the highest placed order in a particular hour. So what would you have to do? Maybe you would have to execute a stored procedure, if you have one, or collect the records and find the maximum order value. All these operations are expensive. What if you had something in between that could temporarily store this information, and your application could just fetch the required information from this service in between? The advantage is that it doesn't have to make a trip to the database, because the information is already at hand. And if the application is making frequent calls to get this same information, it can just go on to that service.
It also reduces the load on the database itself, since you're not firing constant queries just for the same information. So this is the entire purpose of having a cache in between: it's a tool that allows you to store frequently accessed data. Your application can then go on to the cache to get that frequently accessed data instead of getting all the data from the database. Now, normally, the cache is a key-value store; you have a key and you have the value. So, looking at the maximum order value for an hour, you could have that as the key, and the actual order value as the value. Your application could make the first call, get that value, place it in the cache, and then every subsequent call would go on to the cache to get that value instead of going all the way to the database.
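That pattern is usually called cache-aside. Here is a minimal sketch of it against Azure Cache for Redis with the redis Python client; the host name, access key, and the query_database_for_max_order helper are all hypothetical placeholders.

```python
import redis  # pip install redis

# Connect to an Azure Cache for Redis instance (placeholder host and key).
cache = redis.StrictRedis(
    host="<cache-name>.redis.cache.windows.net",
    port=6380, ssl=True,  # Azure exposes Redis over TLS on port 6380
    password="<access-key>",
)

def get_max_order_value(hour: str) -> float:
    key = f"max-order:{hour}"
    cached = cache.get(key)
    if cached is not None:
        return float(cached)  # cache hit: no database round trip
    # Cache miss: run the expensive database query (hypothetical helper).
    value = query_database_for_max_order(hour)
    # Store it with a one-hour expiry so the stale value removes itself.
    cache.set(key, value, ex=3600)
    return value
```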
So in a nutshell, this is just a very simple example of why you would have a cache in between. And Azure Cache for Redis is nothing but a managed service on the Azure platform: if you had to manage your own Redis cache, you would have to maintain the server, install the Redis software onto it, and then use it. But in Azure, this infrastructure is managed for you by the service. Now, just one other important point when it comes to a Redis cache, because this is also given as an objective in the exam. Remember that the Redis cache is nothing but key-value pairs. Don't make the mistake of taking your entire database and putting it in the cache; that defeats the purpose. This is an in-memory store which should have a limited size; it's not supposed to hold all of the database information.
It should hold only the information which is queried frequently, as key-value pairs in the cache. So, as I said, you have multiple key-value pairs in your cache. Now, take the prior example where you are using the cache to store the maximum order value for the hour. In the next hour, that order value will change. Since you store it as a key-value pair, you have to ensure that you change the value after an hour, or, if your application is adding another key-value pair, you have to somehow mark the old one as invalid. This is a process in caching known as eviction: how do you evict the stale items, those items which are no longer valid, from your cache? That's an important concept, and you can handle it using policies; Redis has eviction policies available for this.
The most popular policy is LRU, the least recently used policy. If an item is not being used, that means it's probably no longer valid; if the application is not querying that item frequently from the cache, then there is probably no use for that item in the cache itself, so it can be deleted. So you could have a policy like that applied onto Redis. You can also do an expiration: you can specify an expiration for the item itself. Do you want the item to expire after, let's say, a minute, or after an hour? You can do that as well, right? Here is an extra point when looking at working with Azure Cache for Redis. Azure Cache for Redis is basically a service that is based on the Redis software. Redis is used as an in-memory data store that can improve the performance and scalability of your applications.
Now, applications can use the Redis cache to fetch the most commonly used items. This can increase performance and also decrease the overall load on the database: if your application queries a database, you can decrease the overall load on that database by having the application go to Azure Cache for Redis to fetch the commonly used items instead. You can also use Azure Cache for Redis for storing session data. So you can use the session state provider for Azure Cache for Redis to store session information from an ASP.NET application. This session state can then be shared across your applications, and it supports controlling concurrent access to the same session state data, with multiple readers and a single writer. You can also store the entire HTTP response generated by ASP.NET applications in Azure Cache for Redis. Now, there are different pricing tiers available for Redis. The first tier is Basic.
This is basically a single-node cache, ideal for development and test environments. Next is the Standard tier; here you have a two-node cache, basically a primary and secondary configuration, with replication occurring between the primary and the secondary node. You also get an SLA of 99.9%. And then you have the Premium tier, which offers the Redis software on more powerful hardware; you get better performance and higher throughput. Now, remember, when you're using Azure Cache for Redis, ensure you have an eviction policy in place. The most popular one is volatile-lru. This ensures that only the keys which have a time-to-live set will be eligible for eviction; otherwise, the keys will remain as is. And then you can also set an expiration value on the keys themselves. So let's move on to the subsequent chapter, in which we look at a lab on how to use Azure Cache for Redis.
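Before the lab, a short illustrative follow-up to that last point, reusing the hypothetical redis client from the earlier sketch: under volatile-lru, only keys that carry a time-to-live are eviction candidates, so setting and adjusting TTLs from the client matters. The eviction policy itself (maxmemory-policy) is configured on the Azure cache resource, not in client code.

```python
# Give keys a time-to-live so the volatile-lru policy can consider them for eviction.
cache.set("max-order:2023-05-01T10", 499.99, ex=60)  # expires after 60 seconds
print(cache.ttl("max-order:2023-05-01T10"))          # seconds remaining, e.g. 60
cache.expire("max-order:2023-05-01T10", 3600)        # extend the TTL to one hour
```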