Practice Exams:

AZ-303 Microsoft Azure Architect Technologies – Implement Management Solutions Part 2

  1. Azure Load Balancer

Continuing on from the load balancing solutions available, let’s have a look at Azure Load Balancer. Azure Load Balancer allows us to distribute traffic across virtual machines or other services, which lets us scale apps by distributing load and offering high availability. In the event that a node becomes unhealthy, traffic is not sent to it. The Load Balancer distributes traffic and manages session persistence between nodes in one of two ways.

The default is what’s called a five-tuple hash. The tuple is composed of the source IP, source port, destination IP, destination port and protocol type. Because the source port is included in the hash and the source port changes for each session, clients might be directed to a different virtual machine for each session. This is important because if an application needs to store local information about the client between sessions, this would fail, as the client could end up at a different endpoint each time. Therefore, the alternative is something called source IP affinity.

This is often also known as session affinity or sticky sessions. It maps traffic to the available servers based on either a two-tuple hash, which is comprised of the source IP and destination IP, or a three-tuple hash, which is the source IP, destination IP and protocol type. The hash ensures that requests from any given client are always sent to the same virtual machine at the back end, even between sessions.
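
As a point of reference, the distribution mode is chosen per load-balancing rule. The sketch below uses the Azure CLI to switch an existing rule between the three modes; the resource group, load balancer and rule names are placeholders invented for illustration.

```bash
# Hypothetical names (myRG, myLB, myHTTPRule) used for illustration only.
# Default          = five-tuple hash (source IP, source port, dest IP, dest port, protocol)
# SourceIP         = two-tuple hash (source IP, dest IP)             -> sticky sessions
# SourceIPProtocol = three-tuple hash (source IP, dest IP, protocol) -> sticky sessions
az network lb rule update \
  --resource-group myRG \
  --lb-name myLB \
  --name myHTTPRule \
  --load-distribution SourceIP
```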

When using an Azure Load Balancer, your virtual machines have to be built in Availability Sets. This is a logical grouping that ensures virtual machines are isolated from each other from the point of view of the underlying physical infrastructure, so that an outage to an underlying rack or physical server, whether through an update or a failure, will not affect all the VMs running within that availability set.

Availability Sets combined with a load balancer protect you from individual hardware failures. Another option is Availability Zones. Virtual machines can be built into Availability Zones, sometimes known as AZs. AZs are physically separate data centers within an Azure region, and each DC, or data center, has its own power, cooling and networking. Therefore, building VMs in different zones combined with a load balancer protects you from an entire data center failure. Virtual machines built in Availability Sets have an SLA of 99.95%, whereas virtual machines built in Availability Zones have an SLA of 99.99%.
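
To make the distinction concrete, here is a rough Azure CLI sketch of placing VMs into an availability set versus pinning them to specific zones; the names and image are placeholder values, and zone support depends on the region you choose.

```bash
# Availability set: VMs spread across fault/update domains within one data center.
az vm availability-set create --resource-group myRG --name myAvailSet

az vm create --resource-group myRG --name web1 \
  --image UbuntuLTS --availability-set myAvailSet --generate-ssh-keys

# Availability zones: each VM is pinned to a physically separate data center in the region.
az vm create --resource-group myRG --name web2 \
  --image UbuntuLTS --zone 1 --generate-ssh-keys

az vm create --resource-group myRG --name web3 \
  --image UbuntuLTS --zone 2 --generate-ssh-keys
```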

When you create an Azure Load Balancer there are two product SKUs available: Basic load balancers and Standard load balancers. Basic load balancers do basic port forwarding, have automatic reconfiguration and use health probes to determine the availability of the back-end services. An important difference between Basic and Standard is that Basic only allows you to use Availability Sets, whereas if you want to use virtual machines in Availability Zones, then you have to go for Standard.

Finally, when you create an Azure Load Balancer, you also choose whether to have an internal or an external load balancer. External load balancers give you a public IP address that can be accessed from external services, whereas internal load balancers only have internal addresses and are therefore only accessible internally.
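
The SKU and the public-versus-internal choice are both made at creation time. Below is a hedged Azure CLI sketch of each variant; the resource names, virtual network and subnet are placeholders invented for illustration.

```bash
# Public (external) load balancer, Standard SKU, fronted by a public IP.
az network lb create \
  --resource-group myRG \
  --name myPublicLB \
  --sku Standard \
  --public-ip-address myPublicIP \
  --frontend-ip-name myFrontEnd \
  --backend-pool-name myBackEndPool

# Internal load balancer, Basic SKU, with only a private front-end IP in a subnet.
az network lb create \
  --resource-group myRG \
  --name myInternalLB \
  --sku Basic \
  --vnet-name myVNet \
  --subnet mySubnet \
  --frontend-ip-name myFrontEnd \
  --backend-pool-name myBackEndPool
```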

  1. Azure Traffic Manager Walkthrough

In this lecture we’re going to create a Traffic Manager. Hopefully you’ve created two Linux VMs; if not, you really need to go back and create them in order for this to work. As with everything else, to create a Traffic Manager we go to Create a resource and perform a search for Traffic Manager. We want Traffic Manager profile, and make sure it’s the Traffic Manager profile by Microsoft. Click Create and give it a name. It will have to be a unique name.

Choose the routing method. For now we’re going to choose Performance, but we’ll have a look at this again later. Let’s create a new resource group, select the location where your virtual machines are and click Create. Once that’s created, go to the resource; we need to go and configure it. The first thing we need to do is configure the endpoints, so we’ll add a new endpoint. This is Cloud One, it’s a public IP address, so go and select the public IP address from the first Linux VM. We’re not going to have any custom headers, so simply click OK.
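
If you prefer scripting the same steps, a rough Azure CLI equivalent is shown below; the profile name, DNS prefix and endpoint names are placeholder assumptions, and the target resource IDs would be the public IP resources of your two Linux VMs.

```bash
# Create the Traffic Manager profile with the Performance routing method.
az network traffic-manager profile create \
  --resource-group myRG \
  --name myTMProfile \
  --routing-method Performance \
  --unique-dns-name mytmprofile-demo

# Add the public IPs of the two Linux VMs as Azure endpoints.
az network traffic-manager endpoint create \
  --resource-group myRG \
  --profile-name myTMProfile \
  --name cloud1 \
  --type azureEndpoints \
  --target-resource-id "$(az network public-ip show -g myRG -n web1-ip --query id -o tsv)"

az network traffic-manager endpoint create \
  --resource-group myRG \
  --profile-name myTMProfile \
  --name cloud2 \
  --type azureEndpoints \
  --target-resource-id "$(az network public-ip show -g myRG -n web2-ip --query id -o tsv)"
```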

After you’ve added the first endpoint, go and add the second one. Again, it’s a public IP address, and again we’ll go and pick the IP address of the second Linux server and click OK. So if we now browse to our Traffic Manager DNS name, if we go to the root we’ll get a standard Ubuntu page, but if we go to the node HTML page we’ll see that we’re hitting Node One. It’s sending all traffic to Node One because the current routing method is set to Performance, and that node is currently coming back as the fastest, so all traffic goes there. As that node gets loaded up it might become slower, or slightly less fast than Cloud Two.

Then it would send further traffic to Cloud Two, and in that way it should evenly load balance between the two endpoints. If we go to the Configuration blade we can go in and change between the different types of routing. For example, we could change from Performance to Weighted, Priority and so on. So if we went to Priority and hit Save, that would allow us to go back to the endpoints and set a priority on them; as you can see, by default it’s set them to 100 and 101. So we could change that.

We could set one to, say, 50, go to the other endpoint and set it to 1, and then, although it might take a bit of time, eventually that will swap round and it will start serving Cloud Node Two first because it has the higher priority. It also ensures that if one endpoint goes down or is degraded, the other will take over, and we can simulate that. So if we go to our virtual machines, go to Web One and stop that virtual machine, then go back to our Traffic Manager and back to the Configuration blade, we can see a probing interval. At the moment the probing interval is set to 10 seconds, it will tolerate three failures and it has a timeout of five.

You can change those if you wish. For example, we could set the number of tolerated failures to one, which just speeds up how quickly it detects our primary endpoint as offline. Now we can see that one endpoint is degraded, so if we go back to our URL and hit refresh, eventually it will start going to Node Two. Now, that can take a few minutes to take effect, and what we need to learn from that is that Traffic Manager can load balance and can react to failures, but it’s not an immediate reaction. Therefore Traffic Manager isn’t a great tool to use in that way; it is more about sending traffic to endpoints based on other factors such as performance, priority and, perhaps more often than not, geography.
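
For reference, the same routing-method, priority and monitoring changes can be scripted. The sketch below assumes the profile and endpoint names from the earlier example; the monitor values simply mirror the ones discussed above.

```bash
# Switch the profile from Performance to Priority routing and tighten the health monitoring.
az network traffic-manager profile update \
  --resource-group myRG \
  --name myTMProfile \
  --routing-method Priority \
  --interval 10 \
  --timeout 5 \
  --max-failures 1

# Give cloud2 the top priority (a lower number is served first).
az network traffic-manager endpoint update \
  --resource-group myRG \
  --profile-name myTMProfile \
  --type azureEndpoints \
  --name cloud2 \
  --priority 1

az network traffic-manager endpoint update \
  --resource-group myRG \
  --profile-name myTMProfile \
  --type azureEndpoints \
  --name cloud1 \
  --priority 50
```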

  1. Load Balancer Walkthrough

Creating a load balancer is very straightforward. First of all, just make sure you’ve still got your Linux virtual machines running. I’ve still got Node One and Node Two running, and I’ve confirmed they’re running by browsing to their public IP addresses and going to the node HTML file that we created. Then, as usual, click Create a resource, search for load balancer and make sure you select the Load Balancer by Microsoft. I’m going to create a new resource group for this and give the load balancer a name.

We need to make sure we build the load balancer in the same region that we built our virtual machines, and we get to choose between an internal or a public load balancer, so I’m going to choose Public here. We also get to choose between Basic and Standard. If you remember, Basic load balancers can be used with availability sets, whereas Standard load balancers are used for Availability Zones. Our virtual machines are in an availability set, not an Availability Zone.

So we’re going to go for Basic. We’ll tell it to create a new public IP address and give that a name, and finally we can say whether the IP address assignment is dynamic or static. We’ll go ahead and just leave it at Dynamic and then click Create. Once that’s complete, go straight to the resource. The first thing we need to do is create a backend pool. A backend pool is what defines the virtual machines, or the endpoints, that we’re load balancing.

So go to the backend pools and create a backend pool. Tell it what virtual network this backend pool is going to be on, which is the one our Linux VMs are on, and we’re going to associate this to virtual machines. Next we define the virtual machines that are in our pool: the first one will be the Web One server, then we add the second server and click Add.
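
If you were scripting this, backend pool membership is usually set on the VMs' NIC IP configurations. A rough Azure CLI sketch follows; the NIC and ipconfig names are placeholders, as they depend on how the VMs were created.

```bash
# Add each VM's NIC ipconfig to the load balancer's backend pool.
az network nic ip-config address-pool add \
  --resource-group myRG \
  --nic-name web1VMNic \
  --ip-config-name ipconfigweb1 \
  --lb-name myLB \
  --address-pool myBackEndPool

az network nic ip-config address-pool add \
  --resource-group myRG \
  --nic-name web2VMNic \
  --ip-config-name ipconfigweb2 \
  --lb-name myLB \
  --address-pool myBackEndPool
```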

In order for the load balancer to work, it needs to be able to detect when a backend endpoint, such as one of our virtual machines, becomes unhealthy, and we do that via a health probe. So the next thing we need to do, once we’ve created our backend pool, is create a health probe. We can give this a name and some basic information. The protocol is what it will use to check the health of the back end; the default is TCP on port 80. Next we tell it the interval of the probe and how many consecutive failures before it determines that the endpoint is unhealthy.

The default is to check every 5 seconds and report it as unhealthy if it has failed more than twice. Once we’re happy with that, just click OK; the probe might take a couple of minutes to create.
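
As a scripted equivalent, the probe just described might look like the following Azure CLI call; the names are illustrative and the values mirror the defaults mentioned above.

```bash
# TCP health probe on port 80, checked every 5 seconds, marked unhealthy after 2 failures.
az network lb probe create \
  --resource-group myRG \
  --lb-name myLB \
  --name myHealthProbe \
  --protocol Tcp \
  --port 80 \
  --interval 5 \
  --threshold 2
```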

Once the probe has been created, the final step is to create an actual load balancing rule, which will tie everything together. So go to Load balancing rules. If it’s still showing as updating, just hit refresh to reload the page, and then go ahead and create a load balancing rule. Again, tell it the protocol, and what we can actually do here is have a different incoming and outgoing port. For example, people might browse to the load balancer itself on port 80, but on the back end it could go over 443, or vice versa. We’re going to keep them as the default. Next we select the backend pool, then make sure we’ve got our health probe selected, and then we can choose the session persistence, which is where we either go on client IP, or client IP and protocol.
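
For completeness, here is a hedged Azure CLI sketch of the same rule; it keeps port 80 on both sides, attaches the probe from the previous step and uses the default five-tuple distribution (the portal’s “None” session persistence).

```bash
# Load-balancing rule: TCP 80 in, TCP 80 out, tied to the front end, backend pool and probe.
az network lb rule create \
  --resource-group myRG \
  --lb-name myLB \
  --name myHTTPRule \
  --protocol Tcp \
  --frontend-port 80 \
  --backend-port 80 \
  --frontend-ip-name myFrontEnd \
  --backend-pool-name myBackEndPool \
  --probe-name myHealthProbe \
  --load-distribution Default
```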

I’m just going to leave that as None and then finally click OK. Again, just give it a couple of minutes to go ahead and create the rule. Once that’s created, we can go ahead and test it. First of all, go to the overview, where we need to get the public IP address of our load balancer, so just copy that. I’m going to open up a new tab in my browser and paste that IP address in, and the first thing we’ll do is browse directly to the IP. As with the Traffic Manager, it has just taken us to the default Ubuntu page, and again, if we go to the node HTML page, we can see that Node Two is serving. If we hit refresh a few times, you’ll see that it will bounce between the nodes back and forth, and that’s because we didn’t set any session affinity. In other words, it randomly bounces between the two servers, provided they’re both healthy. Now let’s go back to the console and go to one of the servers.

For example Web One, simply because we’re currently being served by Web One at the moment; now let’s stop that virtual machine. Once that virtual machine has stopped, let’s go back to our browser, to where we’re browsing to the load balancer, and hit refresh. We’ll see now that we’re going to Node Two, because it has determined Node One is unhealthy.

If we go back into our load balancer, have a look at our backend pools and expand our backend pool, we can see the status of our virtual machines: Web Two is running fine, whereas Web One is stopped. If you want to clean up, simply go back to the overview page and hit Delete, and that will delete your load balancer. Once the actual load balancer is deleted, the final step in cleaning up is to delete the public IP address as well, because that’s actually a separate component. You can either go into it, or from the resource group view highlight it and click Delete, and then just confirm the deletion.
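
The equivalent clean-up from the Azure CLI, under the same assumed names, would be roughly:

```bash
# Delete the load balancer first, then the public IP it was using (a separate resource).
az network lb delete --resource-group myRG --name myLB
az network public-ip delete --resource-group myRG --name myPublicIP
```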

  1. Azure Application Gateway + Web Application Firewall

Let’s now have a look at application gateways. The previous load balancing solutions we looked at worked at different layers. Traffic Manager works at the DNS layer: it load balances traffic by simply resolving a DNS name to a different IP depending on which region or service you want to direct traffic to.

Next we have the network load balancer, which routes traffic based on IP rules, port and protocol. The application gateway routes traffic based on the URL or other header information of the request. Therefore, an app gateway can route to virtual machines, VM scale sets, app services and even on-premises services. Endpoints are grouped into what’s called backend pools, with services of the same type grouped into each pool; i.e. you might have two VMs running the same site for resilience or load balancing.

Then they would go in the same pool. App Gateway offers two different routing options. The first is known as path-based routing, and this is where requests to a URL are sent to a different endpoint pool based on the URL path. So for example, a URL of /videos would go to one pool and /images might go to a different pool. The other alternative is known as multisite hosting. With multisite hosting, requests are routed to a different backend pool based on the DNS name; so, for example, contoso.com might go to one pool and fabrikam.com to another. Within the pools themselves, App Gateway then load balances requests to the servers in the pool on a round-robin basis.

When setting up an app gateway, you need to configure a number of different components: a front-end IP, listeners, routing rules and backend pools. The front-end IP is the entry point for requests; this can be an internal IP, a public IP or both. However, you have to understand there are two versions of the gateway service, version 1 and version 2, and at present only version 1 supports private IPs. Behind the front-end IP are one or more listeners. Listeners are configured to accept requests for specific protocols, ports, hosts or IPs.

Listeners can be basic or multisite. Basic listeners will route traffic based on the URL and, as we said, multisite listeners will route traffic based on the host name. Each listener is therefore configured to route traffic to a specified backend pool based on a routing rule. Listeners also handle SSL certificates for securing your application between the user and the application gateway. Routing rules bind listeners to backend pools: you specify the rules to interpret the host name and path elements of a request and then direct the request to the appropriate backend pool.

Routing rules also have an associated HTTP setting, which is used to define the protocol (HTTP or HTTPS), session stickiness, connection draining, timeouts and health probes, which help the load balancer determine which services are available to direct traffic to. Finally, the backend pools themselves define the collections of web servers or services that provide the service you want to offer. Backend pools can be virtual machines, scale sets, Azure apps or even on-premises servers.
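
To show how those components hang together, here is a hedged Azure CLI sketch that creates a small WAF-capable application gateway with a public front end, a port 80 listener and two backend servers; all names, the virtual network, the dedicated subnet and the server IPs are placeholder assumptions.

```bash
# Application gateway (v1 WAF SKU): public front-end IP, port 80 front end,
# HTTP settings on port 80, and a backend pool of two server IPs.
az network application-gateway create \
  --resource-group myRG \
  --name myAppGateway \
  --sku WAF_Medium \
  --capacity 2 \
  --vnet-name myVNet \
  --subnet appGatewaySubnet \
  --public-ip-address myAGPublicIP \
  --frontend-port 80 \
  --http-settings-port 80 \
  --http-settings-protocol Http \
  --servers 10.0.1.4 10.0.1.5
```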

The application gateway can also include an optional Web Application Firewall, or WAF. The Web Application Firewall provides an additional layer of protection and allows you to implement a common set of measures based on what’s known as the Open Web Application Security Project, or OWASP. This is a standard set of rules for detecting attacks. The WAF supports different versions of the OWASP standards, and each version supports a slightly different rule set.

However, the main threats the WAF will monitor and react to are SQL injection, cross-site scripting, command injection, HTTP request smuggling, HTTP response splitting, remote file inclusion, bots, crawlers and scanners, and HTTP protocol violations and anomalies. When you implement an OWASP rule set, you can choose to enable all the rules or just specific ones, so use an application gateway with a WAF whenever you want to provide an extra layer of protection for your web services. Finally, an application gateway requires a virtual network on which to operate, and you have to create this virtual network, with a dedicated subnet for the gateway, before setting up the application gateway.
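
Assuming the WAF-capable gateway from the earlier sketch, enabling the firewall with a specific OWASP rule-set version might look like the following; the resource names are the same placeholders as before.

```bash
# Turn on the WAF in Prevention mode using the OWASP 3.0 rule set.
# Specific rules or rule groups can be excluded with --disabled-rules / --disabled-rule-groups.
az network application-gateway waf-config set \
  --resource-group myRG \
  --gateway-name myAppGateway \
  --enabled true \
  --firewall-mode Prevention \
  --rule-set-type OWASP \
  --rule-set-version 3.0
```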