Practice Exams:

AZ-140 Windows Virtual Desktop on Microsoft Azure – *NEW* – Monitor and Maintain an Azure Virtual Desktop Infrastructure part 1

  1. Disaster recovery plan for Azure Virtual Desktop

To keep your applications and workloads up and running during planned and unplanned Azure outages, you need to have a good BCDR plan. BCDR stands for Business Continuity and Disaster Recovery. So when an outage occurs in a region, the service infrastructure components fail over to a secondary location or region and continue functioning as normal. You can still access service-related metadata, and users can still connect to available hosts. A good BCDR plan should cover the following areas: the first area is VM Replication, the second one is the Virtual Network, the third is User Identities, and the fourth is User and Application Data.

So in this lecture we will cover all of these areas, starting with the VM replication. With VM replication, you can replicate your Virtual Machines to a secondary region, sometimes called the failover region, in three different ways. The first option is to configure all the Virtual Machines, for both pooled and personal host pools, with Azure Site Recovery. With this method, you only need to set up one host pool and its related application groups and workspaces, and then you use Azure Site Recovery, which is the Azure service for disaster recovery and site recovery, to handle the replication. So this is option one.

Option two is to create a new host pool in the failover region, which means the secondary region or secondary location, while keeping all the resources in the failover region turned off. So you have a replica that is turned off in the secondary location, and this is option two. Option three for VM replication is to create a host pool that is populated by Virtual Machines built in both the primary and the secondary region, while keeping the VMs in the secondary region turned off. In this case you only need to set up one host pool and its related application groups and workspaces, but the VMs are in two different regions. Of course, the recommended solution is to use Azure Site Recovery to replicate the Virtual Machines, so you have a smooth transition to the secondary region in case of an outage.

If there are existing user connections during an outage, it is important to know that before the admin, which is you in this case, can start a failover to the secondary region, the admin needs to end the user connections in the current region first. So this is it for VM replication. Let's move to the second area in our BCDR plan, and that is the Virtual Network. There are some considerations for you when it comes to networking in the BCDR plan. First, you will need to set up a Virtual Network in the secondary region, of course. And in case the users need to access resources on the on-premises side, you will need to configure the secondary Virtual Network to reach those on-premises resources, and you can do that using VPN connections, ExpressRoute, or Virtual WAN.
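As a concrete illustration of that first networking step, here is a minimal sketch of creating the secondary Virtual Network, assuming the Azure SDK for Python (azure-mgmt-network); the resource group, region, and address ranges are placeholder assumptions, and the address space is chosen so it does not overlap with the primary VNet.

```python
# Sketch: create a virtual network in the secondary (failover) region with the
# Azure SDK for Python. Names, region, and address ranges are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"          # assumption: your subscription
credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, subscription_id)

# Use an address space that does not overlap with the primary VNet,
# so the two networks can be peered or connected later if needed.
poller = network_client.virtual_networks.begin_create_or_update(
    "rg-avd-dr",                               # assumption: DR resource group
    "vnet-avd-secondary",                      # assumption: secondary VNet name
    {
        "location": "westus2",                 # assumption: failover region
        "address_space": {"address_prefixes": ["10.1.0.0/16"]},
        "subnets": [{"name": "snet-sessionhosts", "address_prefix": "10.1.0.0/24"}],
    },
)
vnet = poller.result()
print(f"Created {vnet.name} in {vnet.location}")
```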

You can use Azure Site Recovery to set up the VNet in the failover region, and it is actually the recommended approach, because it preserves your primary network settings and doesn't need peering. Moving to the third area, and that is the user identities. You will need to keep a domain controller available at the secondary location in case you do a failover, and there are three options for this. The first one is to have an Active Directory domain controller at the secondary location. So you can have a secondary domain controller just as you would with an on-premises infrastructure, but this one is in the cloud.

You have one in the first region and a second one in the other region. Option two is to use an on-premises Active Directory domain controller. So this could be an option: keep your main domain controller on premises and use it for both regions, but this is not the recommended one because of the potential latency. The third option is to replicate the Active Directory domain controller using Azure Site Recovery. The last area we need to cover in the BCDR plan is user and application data. You need to set up data replication in a secondary region, and there are three options for this as well. Option one is to set up native Azure replication, for example Azure Files storage replication, Azure NetApp Files replication, or Azure File Sync for file servers (a short sketch of this option follows after the three options).

Option two would be to set up FSLogix Cloud Cache for both application and user data. Now, Cloud Cache is an add-on to the FSLogix technology. It allows the use of multiple remote locations, which are all continuously updated during the user session, creating true real-time profile replication. Option three is to set up disaster recovery for applications and application data only, to ensure access to business-critical data at all times. With this method, you retrieve user data after the outage is over, unlike the Cloud Cache option, which gives you true real-time profile replication.
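Coming back to option one, here is a minimal sketch, assuming the Azure SDK for Python (azure-mgmt-storage), of creating a geo-redundant storage account that could host the Azure Files share for profiles; the resource group, account name, region, and SKU are placeholder assumptions.

```python
# Sketch: option one (native Azure replication), illustrated by creating a
# geo-redundant storage account that could host an Azure Files share for
# FSLogix profiles. All names, the region, and the SKU are assumptions.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

subscription_id = "<subscription-id>"
credential = DefaultAzureCredential()
storage_client = StorageManagementClient(credential, subscription_id)

poller = storage_client.storage_accounts.begin_create(
    "rg-avd",                             # assumption: resource group name
    "stavdprofiles001",                   # assumption: globally unique account name
    {
        "location": "westus",             # assumption: primary region
        "kind": "StorageV2",
        "sku": {"name": "Standard_GRS"},  # geo-redundant: data is replicated
                                          # to the paired secondary region
    },
)
account = poller.result()
print(account.primary_endpoints.file)     # file endpoint for the profile share
```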

  1. Cloud Cache for Azure Virtual Desktop

Cloud Cache is an add-on to FSLogix. It uses a local cache to service all reads from a redirected profile after the first read. Cloud Cache also allows the use of multiple remote locations, which are all continuously updated during the user session, creating true real-time profile replication. Using Cloud Cache can insulate users from short-term loss of connectivity to remote profile containers, as the local cache is able to serve many profile operations. In case of a provider failure, Cloud Cache provides some sort of business continuity. Cloud Cache is only one of many options that may be considered for business continuity when using profile containers. Cloud Cache provides real-time duplication of the user profiles and will actively fail over if connectivity to a Cloud Cache provider is lost.

The way Cloud Cache works is the following: first, Cloud Cache observes reads and optimizes writes into cost-effective payloads. It adds a local cache component, so applications communicate with the local cache, and the cache connects with the remote container. If the connection to the remote container is interrupted, the applications still work, and that is because they are connected to the local cache. And then, if the interruption is short, or data that isn't in the cache is not requested during the outage, everything behaves normally. When the connection comes back online, the system reconnects and resynchronizes if necessary. So this is how Cloud Cache works.
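To make that concrete, here is a minimal sketch of turning Cloud Cache on for a session host by setting the FSLogix registry values, assuming it runs elevated on the host itself; the share paths are placeholders, and you should confirm the value names against the FSLogix documentation for your agent version.

```python
# Sketch: enable Cloud Cache on a session host by writing the FSLogix
# registry values. Run locally on the host with administrator rights.
# The share paths are placeholders; value names are assumptions to verify
# against the FSLogix documentation for your agent version.
import winreg

PROFILES_KEY = r"SOFTWARE\FSLogix\Profiles"

# Two providers, one per region; Cloud Cache keeps both continuously updated.
ccd_locations = (
    r"type=smb,connectionString=\\fs-primary\profiles;"
    r"type=smb,connectionString=\\fs-secondary\profiles"
)

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, PROFILES_KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "Enabled", 0, winreg.REG_DWORD, 1)
    # CCDLocations is used instead of VHDLocations when Cloud Cache is in use.
    winreg.SetValueEx(key, "CCDLocations", 0, winreg.REG_SZ, ccd_locations)
```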

There are some concerns, however, that you need to be aware of. The performance of the local cache file will determine the user experience, and that is because the local cache file will service most input and output requests. Cloud Cache is executed on the host, utilizing the processor, memory, network, and storage resources of the client locally. So it is critical that the storage used for this file be high performing and highly available. And the last point is that the storage used for the local cache file should either be physically attached storage or have reliability and performance characteristics that meet or exceed high-performing, physically attached storage.

So in the end, because of the resource utilization, it may be more cost effective to consider alternate backup and disaster recovery solutions for FSLogix profile containers. Cloud Cache is generally used when one of its features provides unique value, such as real-time profile high availability. So if this is what you are looking for, this is the option to go with: you can use FSLogix Cloud Cache.

  1. Azure Virtual Desktop Monitoring – Introduction

When it comes to Windows Virtual Desktop monitoring, you can go to the Windows Virtual Desktop service in the Azure Portal, and there you will notice that under Monitoring you can see Insights. This is the built-in functionality in Windows Virtual Desktop to monitor your host pools and whatever metrics you want to see. Now, it is important to note that once this loads properly, as you will see in the coming seconds, it will not actually show us the metrics or the graphs you would expect from a monitoring service. You can see it says there are session hosts not sending data to the expected Log Analytics workspace, and also that the WVD workspace doesn't have diagnostic settings configured correctly.

So it seems that the built-in functionality is not working properly yet. You will need to provision some services and configure others to make it work properly, so you can monitor your WVD environment. In the coming lectures we will discuss many topics together: for example, the requirements, the permissions needed, what Log Analytics is, how to provision it, and how to configure it along with Azure Monitor and the other related services.

  1. Requirements and Permissions

Let's talk about the requirements and the permissions. The first requirement is that all Windows Virtual Desktop environments you monitor must be ARM based, meaning deployed with Azure Resource Manager. So there is the Azure Classic deployment model and there is ARM, which is the newer one; all the deployments in our course were based on ARM, so this is already fulfilled. The second requirement is to have at least one configured Log Analytics workspace, which is an Azure resource that we will use for the monitoring. It basically collects the data, and we will talk about it more later. You should use a designated Log Analytics workspace for the Windows Virtual Desktop session hosts, so you can ensure that performance counters and events are only collected from the session hosts in the Windows Virtual Desktop deployment.

And the third requirement is that, once you configure these, you will need to enable data collection for the following things in the Log Analytics workspace: the diagnostics from the Windows Virtual Desktop environment, some recommended performance counters from the Windows Virtual Desktop session hosts, and the recommended Windows event logs from the WVD environment. We will do that in the coming lectures, of course. So this is it for the requirements, meaning what you need to have set up and ready so you can monitor your WVD environment with Azure Monitor.
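As a hedged preview of that data-collection requirement, here is a minimal sketch, assuming the Azure SDK for Python (azure-mgmt-loganalytics), that adds one Windows performance counter and one Windows event log data source to the workspace; the resource group and workspace names are assumptions, and the full recommended counter and event list comes from the Azure Virtual Desktop documentation.

```python
# Sketch: enable data collection on the Log Analytics workspace by adding one
# Windows performance counter and one Windows event log as examples.
# Resource group and workspace names are assumptions.
from azure.identity import DefaultAzureCredential
from azure.mgmt.loganalytics import LogAnalyticsManagementClient

subscription_id = "<subscription-id>"
credential = DefaultAzureCredential()
la_client = LogAnalyticsManagementClient(credential, subscription_id)

rg, workspace = "sessionhostsRG", "WVDworkspace"   # assumptions

# A performance counter collected from the session hosts
la_client.data_sources.create_or_update(
    rg, workspace, "perf-ts-active-sessions",
    {
        "kind": "WindowsPerformanceCounter",
        "properties": {
            "objectName": "Terminal Services",
            "instanceName": "*",
            "intervalSeconds": 60,
            "counterName": "Active Sessions",
        },
    },
)

# A Windows event log collected from the session hosts
la_client.data_sources.create_or_update(
    rg, workspace, "event-ts-lsm-operational",
    {
        "kind": "WindowsEvent",
        "properties": {
            "eventLogName":
                "Microsoft-Windows-TerminalServices-LocalSessionManager/Operational",
            "eventTypes": [{"eventType": "Error"}, {"eventType": "Warning"}],
        },
    },
)
```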

Let's move on to the permissions. To use Azure Monitor for monitoring a Windows Virtual Desktop environment, you will need the following read access permissions: read access to the Azure subscription that holds the Windows Virtual Desktop resources, read access to the resource group where you have the Windows Virtual Desktop session hosts, and read access to the Log Analytics workspace that holds the data collected from the resources. So these are the read access permissions that you need to have. Of course, when we say read access permission, it translates into the Reader role in the Azure identity and security roles that Azure provides to us. If you are using your global admin account in the testing or in the project that you are doing, you will have these by default, along with access management and more, so you will not need to do anything else.
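If you do need to grant those permissions explicitly, here is a minimal sketch, assuming the Azure SDK for Python (azure-mgmt-authorization), of assigning the built-in Reader role at subscription scope; the principal object ID is a placeholder.

```python
# Sketch: grant the built-in Reader role at subscription scope.
# The principal object ID is a placeholder; the GUID below is the
# well-known Reader role definition.
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<subscription-id>"
credential = DefaultAzureCredential()
auth_client = AuthorizationManagementClient(credential, subscription_id)

scope = f"/subscriptions/{subscription_id}"     # could also be a resource group
reader_role_id = (
    f"{scope}/providers/Microsoft.Authorization/roleDefinitions/"
    "acdd72a7-3385-48ef-bd42-f606fba81ae7"      # built-in Reader role
)

auth_client.role_assignments.create(
    scope,
    str(uuid.uuid4()),                          # role assignment name is a new GUID
    RoleAssignmentCreateParameters(
        role_definition_id=reader_role_id,
        principal_id="<user-object-id>",        # placeholder: user to grant access to
        principal_type="User",
    ),
)
```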

  1. Create a Log Analytics Workspace

Let's create our Log Analytics workspace. What is a Log Analytics workspace and how do you provision it? I will show you right now. So let's go to Create a resource in the Azure Portal and look for "Log", and it shall actually appear to you: Log Analytics workspace, let me select it. Before we continue with the provisioning, let's see what Microsoft says about Log Analytics. It considers this service part of Azure Monitor, which is the main functionality, or the main service, in Azure for monitoring everything. It says: with Azure Monitor Log Analytics you can easily store, retain, and query data collected from your monitored resources, in our case the WVD environment resources in Azure, and also other environments, for valuable insights. So this is the main purpose of it.

This is what we need it for and this is why we need to provision it: so we can store, retain, and query the collected data, so we can actually know the status of our resources and keep them monitored properly. So let's go and create one. It is a straightforward provisioning for the service. It asks you about the subscription and about the resource group, so let me just put it right now in the session hosts resource group, for example, and let's give it a name. Maybe I would say WVDworkspace. So this is the name I'm going to use and it shall be okay. Let's go to the region. It must actually be in the same region where I have the resources I want to monitor, so let's go and make sure we place it in the same place. If I go to All resources, let me see, it's West US, so make sure it is in the same region, and here is West US.

Next is the pricing tier, and it's going to give you one option, which is the current one: pay-as-you-go. There are legacy pricing tiers, but we're going to go with pay-as-you-go, and I will go to Review and create. So this is what we are doing to create our Log Analytics workspace, so we can store, retain, and query data about our WVD resources. You hit Create, it shall take like one or two minutes, and it shall be created for you so you can use it in the later steps.
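The same provisioning can be scripted; here is a minimal sketch, assuming the Azure SDK for Python (azure-mgmt-loganalytics), that mirrors the values used in the portal walkthrough.

```python
# Sketch: the same provisioning done in the portal, scripted.
# Resource group, workspace name, and region mirror the lecture values
# and are assumptions for your own environment.
from azure.identity import DefaultAzureCredential
from azure.mgmt.loganalytics import LogAnalyticsManagementClient

subscription_id = "<subscription-id>"
credential = DefaultAzureCredential()
la_client = LogAnalyticsManagementClient(credential, subscription_id)

poller = la_client.workspaces.begin_create_or_update(
    "sessionhostsRG",                 # assumption: same resource group as the hosts
    "WVDworkspace",                   # workspace name used in the lecture
    {
        "location": "westus",         # must match the region of the monitored resources
        "sku": {"name": "PerGB2018"}, # the pay-as-you-go pricing tier
    },
)
workspace = poller.result()
print(workspace.id)
```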

  1. Configure the Log Analytics workspace

We need to configure the workspace to enable the collection of data from the Virtual Machines, and we do that by going to the Azure Monitor service. You can go here and search for it, you type Monitor for example, and you will find it; once you actually access it, you will see it here as well. So I go to Azure Monitor and then scroll down to Virtual Machines, because we want to be able to monitor the session hosts. If we go to the Workspace configuration, we see that it is not enabled, and if we go to Other onboarding options, there are many options to configure and select from. But the one we will go with is to configure a workspace, so I click Configure, and notice what it says here: configuring the workspace helps you enable

Azure Monitor for VMs, as highlighted here: on your Log Analytics workspace it will install the VM insights solution, which will collect the performance counters and metrics from all the Virtual Machines connected to the workspace. So this is what we are basically doing with this option. It asks you for the Azure subscription and then for the workspace, and this is the one we have created. Then you click Configure and it will initialize the deployment. It's going to take maybe a minute until it is fully deployed and ready to be used, and the deployment succeeded, so we are ready. If we go to the Workspace configuration again, let's see if something changes here, and yes, it says enabled. This is how you configure the workspace so you can enable the data collection from the Virtual Machines to the workspace.
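For reference, here is a rough, hedged sketch of what that Configure button does behind the scenes: deploying the VMInsights solution onto the workspace. It uses the generic resources client, and the resource schema and API version are assumptions based on the commonly published ARM template for VM insights, so verify them before relying on this.

```python
# Sketch (assumption-heavy): deploy the VMInsights solution onto the workspace
# with the generic resources client, roughly what "Configure workspace" does.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<subscription-id>"
credential = DefaultAzureCredential()
res_client = ResourceManagementClient(credential, subscription_id)

rg, workspace = "sessionhostsRG", "WVDworkspace"       # assumptions
workspace_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/{rg}"
    f"/providers/Microsoft.OperationalInsights/workspaces/{workspace}"
)

poller = res_client.resources.begin_create_or_update(
    resource_group_name=rg,
    resource_provider_namespace="Microsoft.OperationsManagement",
    parent_resource_path="",
    resource_type="solutions",
    resource_name=f"VMInsights({workspace})",          # solution naming convention
    api_version="2015-11-01-preview",                  # assumption: common API version
    parameters={
        "location": "westus",
        "plan": {
            "name": f"VMInsights({workspace})",
            "publisher": "Microsoft",
            "product": "OMSGallery/VMInsights",
        },
        "properties": {"workspaceResourceId": workspace_id},
    },
)
poller.result()
```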

  1. Configure Diagnostics Settings for AVD Resources

Now, let's configure the diagnostic settings to specify what things, what information, what specific details we want to collect and from which resources. To do that, we go to Windows Virtual Desktop, and you will actually have to enable and configure the diagnostic settings for everything that you would like to monitor or collect data for: the host pools, application groups, and workspaces. Let's start with the host pools, and let me show you how you can do it. I have one host pool right now. If I scroll down and go to Diagnostic settings, I have none at the moment, and I say Add diagnostic setting. You can give it a name, so let's call it WVDDiagnostics, maybe. Let me actually copy it so I can reuse it for the others.

And you notice here it says Category details, Log, and it gives you some names: Checkpoint, Error, Management, and so on. So what are these? Before we continue, I want to briefly explain these names to you, so you can understand what to use and when. So Management activities is one of them, and this one helps you track whether attempts to change Windows Virtual Desktop objects, whether using the APIs or PowerShell, were successful. So it helps you determine if these changes were successful. Another type of log would be Feed, and this one helps you see whether users can subscribe to workspaces and see all the resources published in the Remote Desktop client as well.

And we have the Connection log, which is for user connections to the service, so it gives you information about that. You have HostRegistration, which helps you determine if the session host was successfully registered with the service upon connecting. Error is, well, obviously for errors. Checkpoint is for specific steps in the lifetime of an activity that were reached; to give you an example, when a user was load balanced to a particular host. And we have AgentHealthStatus, which, as the name implies, is for the health of the agent. So these are the types of logs, and now you can understand what each one is for.

A recommended setting by many people is to select all of them, and then you can specify the destination details: you can send to a Log Analytics workspace, which is the option we are going with, you can archive to a storage account, or you can stream to an event hub. You can mix and match, of course, but for now, for the purpose of this lecture, I will go with the first option. I will select my subscription and the Log Analytics workspace that I have created for this purpose, and I will hit Save. Now we will do the same for the other resources I have; it says updating diagnostics, okay, let me just exit here. And if I go to application groups, I will do the same, and then again the same for the workspaces.

So: Diagnostic settings, Add diagnostic setting, I will select the logs that I want and give it a name (I copied the same name), I will send it to the same workspace I use for this purpose, and save. Okay, and I will do the same for the workspaces; I have one for now. I go to Diagnostic settings, Add diagnostic setting, give it a name, select the types of logs, and I will send it to my workspace. So these are the simple steps to configure the diagnostic settings and to specify what kind of information, what kind of logs to collect and send to the workspace, so you can later browse and analyze the information there.
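The same diagnostic-settings step can be scripted per resource; here is a minimal sketch, assuming the Azure SDK for Python (azure-mgmt-monitor), for a single host pool. The category names and resource IDs are assumptions to adapt to your environment, and you would repeat the call for each host pool, application group, and (with the Feed category) each workspace.

```python
# Sketch: the diagnostic-settings step scripted for one host pool.
# Resource names, IDs, and the category list are assumptions; repeat the call
# for each host pool, application group, and workspace.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

subscription_id = "<subscription-id>"
credential = DefaultAzureCredential()
monitor_client = MonitorManagementClient(credential, subscription_id)

rg = "sessionhostsRG"                                  # assumption
host_pool_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/{rg}"
    "/providers/Microsoft.DesktopVirtualization/hostPools/<host-pool-name>"
)
workspace_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/{rg}"
    "/providers/Microsoft.OperationalInsights/workspaces/WVDworkspace"
)

# Host pool log categories discussed in the lecture (Feed applies to workspaces).
categories = ["Checkpoint", "Error", "Management", "Connection",
              "HostRegistration", "AgentHealthStatus"]

monitor_client.diagnostic_settings.create_or_update(
    resource_uri=host_pool_id,
    name="WVDDiagnostics",
    parameters={
        "workspace_id": workspace_id,   # send the logs to the Log Analytics workspace
        "logs": [{"category": c, "enabled": True} for c in categories],
    },
)
```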