Practice Exams:

AZ-303 Microsoft Azure Architect Technologies – Implement Solutions for Apps (10-15%) Part 6

  1. Service Bus Walkthrough Part 2

Now we’ve looked at queues, so let’s go ahead and look at topics and subscriptions so that we can see the difference. Again, we can create one from the buttons at the top, or, scrolling down the left-hand side, we can go to the Topics view under Entities and then click Add to create a new topic. Give the topic a name; we’ll call this one mytopic. Again we can set certain parameters such as the topic size, the message time to live, duplicate detection and so on. Accept the defaults and click Create. Once that’s created, go into mytopic, and now we can create the actual subscriptions.

So click Subscriptions and then click Add to add a new subscription. For this demonstration, and for the code we’re going to use, it’s important that we call the subscription S1. Again we can set certain parameters such as the time to live and a lock duration, and also a max delivery count, which is the maximum number of times a message can be delivered. Then click Create. Once that’s created, create two more subscriptions called S2 and S3.
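For reference, the same entities can also be created in code rather than in the portal. Here is a minimal sketch using the Microsoft.Azure.ServiceBus management client; the connection string placeholder is an assumption, and the topic and subscription names follow the walkthrough:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus.Management;

class CreateTopicEntities
{
    // Placeholder: use the connection string copied from the namespace's shared access policies
    const string ConnectionString = "<your Service Bus connection string>";

    static async Task Main()
    {
        var management = new ManagementClient(ConnectionString);

        // Equivalent of clicking Add under Topics in the portal
        await management.CreateTopicAsync("mytopic");

        // The three subscriptions the sample expects
        foreach (var name in new[] { "S1", "S2", "S3" })
        {
            await management.CreateSubscriptionAsync("mytopic", name);
        }
    }
}
```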

With those done, go back up to the Service Bus namespace pane and then into the shared access policies. Again, we’re just going to copy the connection string containing the key for this one, and again, in a real situation you would have separate keys depending on whether you’re reading or writing.
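The connection string you copy has the following general shape (the namespace name and key are placeholders; RootManageSharedAccessKey is the default policy created with the namespace):

```
Endpoint=sb://<your-namespace>.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<primary-key>
```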

Now we’re going to head back to Visual Studio. If you’ve still got the basic send/receive quick start open from last time, we need to switch to one of the other solutions. The easiest way to do that in Visual Studio is the Switch Views option at the top of Solution Explorer, which lists all the different solutions in the repository we downloaded. The one we want now is the basic send and receive tutorial with filters. If you’re having trouble finding it, simply close the current solution, choose to open a project or solution, and navigate to it: it will be under your repos folder, in the azure-service-bus samples, under the .NET getting started folder, as the basic send and receive with filters sample. Open that solution.

Open the program again. Now this one is a little bit more complicated. Ignore the code at the top and scroll down until you find the static void Main, where again we’re setting the connection string and the topic name. Paste your connection string in there, and for the topic name type in mytopic. Before we run through the code, I’m going to run it. This time we get a menu with a number of options. I’m going to go for option one first of all, which is to clear the default filters: subscriptions get a default filter created automatically, so we’ll clear those out first.

Next we want to create our own filters, which is the next option. This really is just setting up the various filters, because this program has been designed to work in a certain way, and it creates three separate filters. The following option is to remove our own filters, so we’ll leave that alone. Then hit option four to send a number of messages. We can see it sends a number of messages for Store2, Store3, Store4 and so on, with varying prices, colors and categories set as properties on each item, and they’ve all been sent.
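The sample’s actual filter expressions are more involved, but the underlying calls look roughly like this. A minimal sketch using the Microsoft.Azure.ServiceBus package; the filter expression, property values and connection string placeholder are illustrative assumptions rather than the sample’s exact code:

```csharp
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

class TopicFilterSketch
{
    const string ConnectionString = "<your Service Bus connection string>";
    const string TopicName = "mytopic";

    static async Task Main()
    {
        // Option 1 in the sample: remove the default "match everything" rule on a subscription
        var s1 = new SubscriptionClient(ConnectionString, TopicName, "S1");
        await s1.RemoveRuleAsync(RuleDescription.DefaultRuleName);

        // Option 2: add our own SQL filter, e.g. only accept messages where Color = 'Red'
        await s1.AddRuleAsync(new RuleDescription
        {
            Name = "RedItemsOnly",
            Filter = new SqlFilter("Color = 'Red'")
        });

        // Option 4: send a message carrying the user properties the filters inspect
        var topicClient = new TopicClient(ConnectionString, TopicName);
        var message = new Message(Encoding.UTF8.GetBytes("item"))
        {
            UserProperties = { ["Color"] = "Red", ["Category"] = "Toys", ["Price"] = 25 }
        };
        await topicClient.SendAsync(message);

        await topicClient.CloseAsync();
        await s1.CloseAsync();
    }
}
```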

Head back to the portal and go back to your topics, and then go into mytopic. We’ll see now that we’ve got the various subscriptions, and there are different numbers of messages in each one: four in S1, four in S2 and two in S3. Go back to the program and choose the option to receive messages. The program is now hooked into each of those subscriptions to receive the messages.
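Under the covers, receiving from each subscription looks roughly like this. Again a minimal sketch assuming the older Microsoft.Azure.ServiceBus package used by these samples; the placeholder connection string and console output are illustrative:

```csharp
using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

class ReceiveFromSubscriptions
{
    const string ConnectionString = "<your Service Bus connection string>";
    const string TopicName = "mytopic";

    static void Main()
    {
        // Hook a message handler onto each of the three subscriptions
        foreach (var subscription in new[] { "S1", "S2", "S3" })
        {
            var client = new SubscriptionClient(ConnectionString, TopicName, subscription);

            client.RegisterMessageHandler(
                async (message, cancellationToken) =>
                {
                    Console.WriteLine($"{subscription}: {Encoding.UTF8.GetString(message.Body)}");
                    // Complete the message so it is removed from the subscription
                    await client.CompleteAsync(message.SystemProperties.LockToken);
                },
                new MessageHandlerOptions(args => Task.CompletedTask) { AutoComplete = false });
        }

        Console.ReadLine(); // keep the process alive while the messages drain
    }
}
```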

If we go back to the portal, click away to something else and then go back to the overview, we can see that all those messages have now disappeared. Once we’re done, we can clear this down so that we don’t incur any further costs: go to the view of your Service Bus namespace and go ahead and delete it. Once that’s deleted, you won’t incur any more costs and we can continue.

  1. Which Queuing Mechanism to use?

So it’s important to try and understand the differences between the three different mechanisms we’ve looked at and when you should use one over the other. We briefly touched on this in the overview, at least when comparing storage queues with Service Bus queues. Both storage queues and Service Bus queues are implementations of the message queuing services currently offered within Azure, and as we’ve seen, each has a slightly different feature set, which means you can choose one or the other, or even use both, depending on the needs of the particular solution or problem you’re trying to solve. So when determining which queuing technology fits the purpose for a given solution, architects and developers need to consider a number of recommendations.

For example, you might want to choose storage queues over Service Bus queues when your application must store over 80 GB of messages in the queue, or when your application needs to track progress for processing a message inside the queue, which is useful because, if the worker processing a message crashes, a subsequent worker can use that information to continue from where the crashed one left off. Conversely, you might want to use Service Bus queues over storage queues when you want to ensure first-in, first-out (FIFO) delivery. Remember, storage queues don’t guarantee this, whereas Service Bus queues let you enable sessions, which do provide this functionality. Service Bus queues also give you automatic duplicate detection.
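To illustrate the progress-tracking point, a storage queue lets a worker rewrite a message’s content and extend its visibility timeout while it works. This is a minimal sketch using the Azure.Storage.Queues SDK, which isn’t shown in the lecture; the queue name, progress text and connection string placeholder are assumptions:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Storage.Queues;

class StorageQueueProgress
{
    const string StorageConnectionString = "<your storage account connection string>";

    static async Task Main()
    {
        var queue = new QueueClient(StorageConnectionString, "orders");

        // Receive one message (this assumes the queue exists and has a message waiting)
        var messages = await queue.ReceiveMessagesAsync(maxMessages: 1);
        var message = messages.Value[0];

        // Update the message in place: the new content records how far processing got,
        // and the visibility timeout extends the "lease" while work continues.
        await queue.UpdateMessageAsync(
            message.MessageId,
            message.PopReceipt,
            "processed 50 of 100 records",
            visibilityTimeout: TimeSpan.FromSeconds(60));
    }
}
```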

With a storage queue, you would be responsible for ensuring that a message hadn’t been duplicated, whereas Service Bus queues have this functionality built in as an option. In contrast to queues, in which each message is processed by a single consumer, topics and subscriptions provide a one-to-many form of communication. This is useful for scaling to large numbers of recipients, with each published message made available to each subscription registered with that particular topic. Messages are sent to a topic and delivered to one or more associated subscriptions, depending on the filter rules we saw; subscriptions can use additional filters to restrict the messages they receive. Some of the key points here, at least as far as the Microsoft exam is concerned, are that you are given a choice between queues and topics.

You choose queues when you want to ensure each message is only delivered to one consumer. Conversely, you might choose topics when you want multiple consumers to read the messages or when you need to apply filtering. A typical question might ask you to ensure the FIFO pattern is used for message delivery; for this you would use the Service Bus option, but you also have to enforce FIFO by enabling sessions, which is one of the options when creating queues or subscriptions within a topic. It’s also useful to remember that sessions are only available in the Standard pricing tier and above; the Basic pricing tier doesn’t support them. Another question might ask for automatic duplicate detection; again, this is where you would want to use Service Bus queues or topics over storage queues.
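Both of those exam points map to flags you set when the queue is created. A minimal sketch with the Microsoft.Azure.ServiceBus management client, assuming a Standard-tier namespace; the queue name, detection window and connection string placeholder are illustrative:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus.Management;

class CreateFifoQueue
{
    const string ConnectionString = "<your Service Bus connection string>";

    static async Task Main()
    {
        var management = new ManagementClient(ConnectionString);

        // Sessions give FIFO delivery within a session; duplicate detection needs a
        // history window. Both flags must be set when the queue is created.
        var description = new QueueDescription("orders")
        {
            RequiresSession = true,
            RequiresDuplicateDetection = true,
            DuplicateDetectionHistoryTimeWindow = TimeSpan.FromMinutes(10)
        };

        await management.CreateQueueAsync(description);
    }
}
```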

  1. Containers Introduction

Containerization, and particularly Docker, is an increasingly popular technology, and Azure provides services to make using it easier. So in the next few lectures, we’ll examine containerization and Azure’s implementation of it. A container is a standard unit of software that packages up code and all its dependencies so that an application can run quickly and reliably from one computing environment to another. A Docker image is a lightweight, standalone, executable package of software that includes everything needed to run the application: the code, the runtime, system tools and libraries. Container images become containers at runtime, and in the case of Docker containers, images become containers when they run on a Docker engine. The Docker engine is available for both Linux and Windows based applications.

Containerized software will always run the same regardless of the infrastructure, and containers isolate software from its environment to ensure that it works uniformly despite differences between, for instance, development and staging environments. Because of this, applications also run more safely in containers. Containers themselves are an abstraction at the app layer that packages code and dependencies together. Multiple containers can run on the same machine and share the OS kernel with other containers, each running as an isolated process in user space. Because of this, containers typically take up much less space than a virtual machine, and therefore they can handle more applications while requiring fewer VMs and operating systems. In comparison, a typical virtual machine is an abstraction of the physical hardware, turning one physical server into many virtual servers.

The hypervisor is what allows multiple VMs to run on a single machine. However, each virtual machine includes a full copy of the operating system, the application, the necessary binaries and libraries, and so on, and therefore takes up far more space. VMs are also a lot slower to boot. In comparison, because containerized apps share the host OS, you can run many more of them on the same hardware. Fundamentally, a container is nothing but a running process with some added encapsulation features applied to it to keep it isolated from the host and from other containers. One of the most important aspects of container isolation is that each container interacts with its own private file system, and this file system is provided by a Docker image.

An image includes everything it needs to run an application. Containerized development environments are much easier to set up than traditional development environments once you’ve learned how to build images, because a containerized development environment isolates all of your app’s dependencies inside the Docker image, so there’s no need to install anything other than Docker on your development machine. In this way, you can easily develop applications for different stacks without having to change what’s installed on your development machine. To support this, we define Docker images using something called Dockerfiles. Dockerfiles describe how to assemble a private file system for a container, and they also contain some metadata describing how to run a container based on that image. Writing a Dockerfile is the first step to containerizing an application.

You can think of a Dockerfile as a step-by-step recipe for building up an image. So let’s take a simple example and go through its commands (the full Dockerfile is reconstructed after this walkthrough). First we have FROM, which tells Docker to start from a pre-existing image, in this case node. In particular, it uses the version of the node image tagged current-slim. This is an official image built by the Node.js vendors and validated by Docker to be a high-quality image containing the Node.js long-term support interpreter and its basic dependencies. Next, we use the WORKDIR command to specify that all subsequent actions should be taken from within a particular directory on the image’s file system; that’s not your own file system, but the file system inside your image. Next we issue a COPY command, which in this example copies a file called package.json from your development machine into the image.

Next we have a RUN command. The RUN command here executes npm install inside the image’s file system when the image is built, installing the application’s dependencies listed in package.json. Next we have an EXPOSE command, which tells Docker that the container will expose a service on port 8080. Next we have the CMD command, which specifies the command to run when a container is started from this image; in this case it runs npm with the start option. Finally, in this example we’ve got another COPY command, which copies the rest of the files from the folder containing the Dockerfile on your development machine into the image. Once we’ve created our Dockerfile, we run a docker build command to create the image, and once the image is built, we can run it as a container.
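Putting those commands together, the Dockerfile described above looks roughly like this. It follows Docker’s Node.js getting-started example, which this walkthrough appears to be based on; the WORKDIR path is taken from that example and should be treated as an assumption:

```dockerfile
# Start from the official Node.js image, current-slim variant
FROM node:current-slim

# All subsequent actions are taken inside this directory on the image's file system
WORKDIR /usr/src/app

# Copy package.json from the development machine into the image
COPY package.json .

# Install the application's dependencies inside the image file system
RUN npm install

# Document that the container listens on port 8080
EXPOSE 8080

# The command to run when a container is started from this image
CMD [ "npm", "start" ]

# Copy the rest of the application source into the image
COPY . .
```

Running docker build -t <image-name> . in the folder containing this file produces the image, and docker run then starts a container from it.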

You would normally perform this image development on a development computer, and once you’re happy that you’ve built the image correctly, the final step in developing a containerized application is to share that image on a registry such as Docker Hub or, in our case, the Azure Container Registry. This way, images can easily be downloaded to any other destination machine, be it another development machine, a test environment or even a production environment. By moving our built images around in this way, we no longer need to install any dependencies except Docker on the machines we want to run them on, because the application’s dependencies are completely isolated and encapsulated within the image we’ve just defined.
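Pushing an image to the Azure Container Registry typically looks like the following; the registry name, image name and tag are placeholders for whatever you have created:

```bash
# Log in to your Azure Container Registry (the registry name is a placeholder)
az acr login --name myregistry

# Tag the locally built image with the registry's login server
docker tag myapp:v1 myregistry.azurecr.io/myapp:v1

# Push the image so other machines can pull it from the registry
docker push myregistry.azurecr.io/myapp:v1
```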

The Azure Container Registry is a service that stores and distributes your images; Docker Hub is another example of a container registry, a public one that supports the open source community. The Azure Container Registry gives you direct control of your own images, as opposed to a public set of images, with integrated authentication, geo-replication to support global distribution, and reliability for network-close deployments. In the next lecture, we’ll run through creating a Docker image, running it locally on your development machine, and finally publishing it to the Azure Container Registry.