Practice Exams:

AZ-303 Microsoft Azure Architect Technologies – Implement Solutions for Apps (10-15%) Part 8

  1. Kubernetes Introduction

We’re now going to look at a service called Kubernetes. To understand what Kubernetes actually is, we need to take a small step back. In the previous lectures, we’ve been looking at containerization, and specifically containerization with Docker. So using Docker, we now have container images that are registered in a container registry. In our case, it was the Azure Container Registry, and from there we can use those images to spin up instances, or running containers.

So two questions may spring to mind. The first is, what now? Or perhaps more broadly, why bother? After all, we could in theory achieve the same sort of thing as what’s running in the container by simply running a virtual machine. Of course, one reason for containerization is portability, i.e. we can run those images on any platform that’s running the Docker runtime. However, the other main reason is that it now allows us to run many more of those instances on the same underlying hardware, because we can achieve greater density through the shared use of the OS.

This, in fact, allows us to create software using a pattern known as microservices. Traditionally, a software service may have been built as what’s called a monolith. That is, the software was just one big code base that ran on a server. The problem with this pattern is that it can be quite hard to scale. If you need more power, you can add more RAM and CPU, but once you start to hit the limits of the hardware, what can you do?

The first answer to this issue was to build applications that could be duplicated across multiple servers and then have requests load balanced between them. And in fact, this is still a very common pattern, often known as a web farm. However, even with this pattern, you generally have to define the setup ahead of time and build out the number of servers that you’re going to spread the application across. If you want to scale up or down, that’s a fairly easy task, of course, because you are simply adding or removing nodes from that farm. However, this is traditionally a manual task.

As we saw with Azure Web Apps earlier, we were actually able to scale those out automatically by defining rules, such as setting a CPU utilization threshold. And in fact, we can do the same thing using virtual machine scale sets, whereby additional servers can be added and removed automatically in response to those rules. However, if we remember from the previous discussion on containers, each virtual machine in the farm comes with the overhead of a full OS, using an amount of RAM and disk space.

Therefore, each additional server that we spin up takes longer to come online, because the server has to boot first, and we’re having to pay for that OS and for more RAM than we actually need to run our application. Another consideration is inefficient resource allocation. In a web farm scenario, whenever one part of the system requires more power, our only option is to add power to the entire system rather than just the subsystem that might require it. For this reason, and a few others, software started to be developed in a more modular fashion. These individual modules could be broken up and run as separate services, each service being responsible for a particular aspect of the system.

So, for example, we might split off a product ordering component as an individual service that gets called by other parts of the system, and this service could in itself run on its own server. Now, whilst we could achieve this by running each service on its own virtual server, the additional memory overhead means that as we break our system into more and more individual services, that overhead keeps increasing. And so we’re back to running very inefficiently, at least from a resource usage point of view.

So this is where containers come in. Because they offer isolation without running a full OS each time, we can run our processes far more efficiently. That is, we can run far more of them on the same hardware than we could with standard virtual machines. So now we have a series of smaller components that we can run far more of, and therefore run more cost effectively, adding additional instances to each individual service as needed based on those rules.

So, for example, as the product ordering subsystem becomes busier, we can spin up additional instances of the product ordering service. And because containers use less RAM, we have less memory overhead, and they can be spun up much faster in response to that demand, because we’re not loading up the full OS each time. Now, by this point you might be asking, that’s great, but how do you manage all that? What controls the spinning up of new containers or shutting them down? And the answer is something called container orchestration.

Orchestrators monitor containers and add additional instances in response to usage thresholds, or even for resilience, for example if a running container becomes unhealthy for any reason. And at last we can answer the question, what is Kubernetes? The answer is, Kubernetes is an orchestration service for managing those containers. Kubernetes provides a declarative approach to deployments, backed by a robust set of APIs for management operations.

You can build and run modern, portable, microservices-based applications that benefit from Kubernetes orchestrating and managing the scaling and availability of those application components. Kubernetes supports both stateless and stateful applications as teams progress through the adoption of microservices-based apps. As an open platform, Kubernetes allows you to build your applications with your preferred programming language, OS, libraries, or messaging bus.

Existing continuous integration and continuous delivery tools such as Azure DevOps can integrate with Kubernetes to schedule and deploy releases. Azure Kubernetes Service, or AKS, provides a managed Kubernetes service that reduces the complexity of deployment and core management tasks, including coordinating upgrades. The AKS cluster masters are actually managed by the Azure platform, and you only pay for the AKS nodes that run your actual applications. AKS is built on top of the open source Azure Container Service engine. So let’s dig a bit more into the detail of Kubernetes itself.

So, a Kubernetes cluster consists of a set of worker machines, called nodes, that run the containerized applications. Every cluster has at least one worker node. The worker nodes host the pods that are the components of the actual application. The control plane, or cluster master, manages the worker nodes and the pods in the cluster. Now, when you create an AKS cluster, a cluster master is automatically created and configured for you.

The cluster master is provided as a managed Azure resource abstracted away from the user, and there’s no cost for the cluster master, only for the nodes that are part of the cluster. The cluster master, the part that’s free of charge, includes the following core Kubernetes components. First, we have the kube-apiserver. The API server is how the underlying Kubernetes APIs are exposed. This component provides the interaction point for management tools such as kubectl or the Kubernetes Dashboard.

Next is etcd. To maintain the state of your Kubernetes cluster and configuration, Kubernetes uses etcd, which is a highly available key value store. Then there’s the kube-scheduler: when you create or scale applications, the scheduler determines which nodes can run the workload and starts them. And finally we have the kube-controller-manager. The controller manager oversees a number of smaller controllers that perform actions such as replicating pods and handling node operations. In Azure, AKS provides a single-tenant cluster master, with a dedicated API server, scheduler, and so on. You define the number and size of the nodes, and the Azure platform configures the secure communication between the cluster master and those nodes. Interaction with the cluster master occurs through Kubernetes APIs, using tools such as kubectl or the Kubernetes Dashboard, which is a web-based app deployed as part of your cluster.

This managed cluster means that you do not need to configure components like the highly available etcd store, but it also means you can’t access the cluster master directly. Upgrades to Kubernetes are orchestrated through the Azure CLI or the Azure Portal, which upgrades the cluster master and then the nodes. To troubleshoot any possible issues, you can review the cluster master logs through the Azure Log Analytics service. If you need to configure the cluster master in a particular way or need direct access to it, then you would have to deploy your own Kubernetes cluster using the AKS engine. To run your applications and supporting services, you need a Kubernetes node. An AKS cluster has one or more nodes, each of which is basically an Azure virtual machine, Windows or Linux, that runs the Kubernetes node components and a container runtime. The kubelet is the Kubernetes agent that processes the orchestration requests from the cluster master and handles scheduling and running the requested containers.

Virtual networking is handled by the kube-proxy on each node. The proxy routes network traffic and manages IP addressing for services and pods. The container runtime is the component that allows containerized applications to run and interact with additional resources, such as the virtual network and storage. In AKS, Docker is used as the container runtime. The Azure VM size for your nodes defines how many CPUs, how much memory, and the size and type of storage available; for example, that storage could be high-performance SSD or regular HDD. If you anticipate a need for applications that require large amounts of CPU and memory or high-performance storage, plan the node size accordingly. You can also scale out the number of nodes in your AKS cluster to meet demand. In AKS, the VM image for the nodes in your cluster is currently based on Ubuntu Linux. When you create an AKS cluster or scale out the number of nodes, the Azure platform creates the requested number of VMs and configures them.

There is no manual configuration for you to perform. If you do need to use a different OS or container runtime, or include custom packages, you can deploy your own Kubernetes cluster, as we discussed earlier. In this case, the upstream AKS engine releases features and provides configuration options before they are officially supported in AKS clusters. So, for example, if you did want to use Windows containers or a container runtime other than Docker, you could use the AKS engine to configure and deploy a Kubernetes cluster that meets your current needs. You don’t need to manage the core Kubernetes components on each node, such as the kubelet, kube-proxy, and kube-dns, but they do consume some of the available compute resources. To maintain node performance and functionality, the following compute resources are reserved on each node: CPU at 60 millicores, and memory at 20% up to 4 GB. These reservations mean that the amount of available CPU and memory for your applications may appear less than the node itself contains.
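If you want to see the effect of those reservations on a running cluster, you can compare a node’s capacity with what’s left for workloads; a quick sketch using kubectl, with a hypothetical node name:

    kubectl get nodes
    # Pick a node name from the output above (this one is hypothetical)
    kubectl describe node aks-nodepool1-12345678-0
    # In the output, the "Capacity" section shows what the VM has,
    # while "Allocatable" shows what remains for your pods after the reservations.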

If there are other resource constraints due to the number of applications you run, these reservations ensure CPU and memory remain available for the core Kubernetes components. These resource reservations cannot be changed. Nodes of the same configuration are grouped together into something called node pools, and a Kubernetes cluster contains one or more node pools. The initial number and size of the nodes are defined when you create an AKS cluster, which creates a default node pool. This default node pool in AKS contains the underlying VMs that run your agent nodes. When you scale or upgrade an AKS cluster, the action is performed against the default node pool. For upgrade operations, running containers are scheduled on other nodes in the node pool until all the nodes are successfully upgraded. Kubernetes uses pods to run an instance of your actual application.

A pod represents a single instance of that application. Pods typically have a one-to-one mapping with a container, although you can have advanced scenarios where a pod might contain multiple containers. These multi-container pods are scheduled together on the same node and allow the containers to share related resources. When you create a pod, you define resource requests for a certain amount of CPU or memory. The Kubernetes scheduler then tries to schedule the pod to run on a node that has the available resources to meet the request. You can also specify maximum resource limits that prevent a given pod from consuming too much compute resource from the underlying node. Best practice is to include resource limits for all pods to help the Kubernetes scheduler understand what resources are needed and permitted.
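As a rough illustration of what those requests and limits look like in a manifest, here’s a minimal pod sketch; the names and image are purely hypothetical:

    apiVersion: v1
    kind: Pod
    metadata:
      name: sample-pod
    spec:
      containers:
      - name: sample-app
        image: nginx:1.17          # hypothetical image
        resources:
          requests:                # what the scheduler uses to place the pod
            cpu: 250m
            memory: 64Mi
          limits:                  # the most the container is allowed to consume
            cpu: 500m
            memory: 256Mi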

It’s important to understand that a pod is effectively a logical resource; the containers are where the application workload actually runs. Pods are typically ephemeral, meaning they are short-lived, disposable resources, and individually scheduled pods miss out on some of the high availability and redundancy features that Kubernetes provides. So instead, pods are usually deployed and managed by Kubernetes controllers, such as the deployment controller. A deployment represents one or more identical pods and is managed by the Kubernetes deployment controller. A deployment defines the number of replicas, or copies, of a particular pod to create, and the Kubernetes scheduler ensures that if pods or nodes encounter problems, additional pods are scheduled on healthy nodes. You can update deployments to change the configuration of pods, the container image used, or the attached storage.

The deployment controller then drains and terminates a given number of replicas, creates replicas from the new deployment definition, and continues the process until all replicas in the deployment are updated. Most stateless applications should use the deployment model rather than scheduling individual pods; Kubernetes can then monitor the health and status of deployments to ensure that the required number of replicas are running within the cluster. When you only schedule individual pods, the pods are not restarted if they encounter a problem, and are not rescheduled on healthy nodes if their current node encounters an issue. So if an application requires a quorum of instances to always be available for management decisions to be made, you don’t want an update process to disrupt that availability. Pod disruption budgets can be used to define how many replicas in your deployment can be taken down during an update or node upgrade, as sketched below.
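As a rough sketch of what that looks like, here’s a minimal pod disruption budget for the five-replica example discussed next; the label selector is hypothetical and would need to match your deployment’s pod labels:

    apiVersion: policy/v1beta1
    kind: PodDisruptionBudget
    metadata:
      name: sample-pdb
    spec:
      minAvailable: 4              # with five replicas, only one can be disrupted at a time
      selector:
        matchLabels:
          app: sample-app          # hypothetical label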

So, for example, if you have five replicas in your deployment, you can define a pod disruption budget of four to only permit one replica to be deleted or rescheduled at any particular time. As with pod resource limits, it’s a best practice to define pod disruption budgets on applications that require a minimum number of replicas to always be present. All this is configured using what’s known as a YAML file, which is basically a text file within which you can describe all the various components of your cluster and how it needs to look. Once we have our nodes, we need to be able to manage the packages, or applications, on them. A common approach to managing applications on Kubernetes is something called Helm. You can build and use existing Helm charts. Charts contain a packaged version of the application code and the Kubernetes YAML manifests that are used to deploy the resources.

These Helm charts can be stored locally, or often in a remote repository such as an Azure Container Registry Helm chart repo. To use Helm, a server component called Tiller is installed in your Kubernetes cluster, and Tiller manages the installation of charts within the cluster. The Helm client itself is installed locally on your computer, or can be used from within the Azure Cloud Shell. You can search for or create Helm charts with the client and then install them on your Kubernetes cluster. Finally, we’ll look at namespaces. Kubernetes resources, such as pods and deployments, are logically grouped into a namespace. These groupings provide a logical divide within an AKS cluster and restrict access to create, view, or manage resources. You can create namespaces to separate business groups, for example, so that users can only interact with resources within their assigned namespaces. When you create an AKS cluster, the following namespaces are automatically available.

First, we have the default namespace. This namespace is where pods and deployments are created by default when no other namespace is provided. In smaller environments, you can deploy applications directly into the default namespace without creating any additional logical separations. When you interact with the Kubernetes API, such as with kubectl commands, the default namespace is always used when none is specified. Next, we have the kube-system namespace.

This namespace is where core resources exist, such as network features like DNS and the proxy, or the Kubernetes Dashboard for managing the cluster. You typically don’t deploy your own applications into this namespace. And then finally you get kube-public. This namespace is typically not used, but it can be used for resources that need to be visible across the whole cluster, and can therefore be viewed by any user.
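Working with namespaces is done through kubectl; a quick sketch, using a hypothetical namespace name:

    kubectl get namespaces                  # lists default, kube-system and kube-public
    kubectl create namespace finance        # a hypothetical namespace for a business group
    kubectl get pods --namespace finance    # interact only with resources in that namespace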

  2. Kubernetes Walkthrough

In this lecture, we’re going to go ahead and create a Kubernetes cluster and deploy an application to it. Creating the cluster is quite straightforward. The first thing you want to do is just go ahead and click Create a Resource. At the moment, the Kubernetes service is actually in one of the quick starts; if it doesn’t appear there, simply search for Kubernetes, select Kubernetes Service and then click Create. We’ll be given the usual things that we need to fill in. So first of all, we want to create a resource group; I’ll just call this RSG-Kubernetes. We need to give it a cluster name, so I’m going to call this Cloud Guru AKS. What region do we want it in? I’m going to go ahead and put this in UK South. Then choose a version. By default you’re going to get the most stable AKS version, but you can choose a specific or older version if you want. For this demonstration, I’m just going to go for the default. We can then set a DNS prefix name, which will help us when we want to start to browse to our cluster. And then we can select the node size.

So the node size is obviously the size of the underlying VMs that we want, and there’s also the node count. By default it’s going to spin up three nodes of this size, so for the purposes of this demonstration I’m going to wind that right down to one, and then go ahead, next, to Scale. I’m just going to accept the defaults there, and let’s go to Authentication. What will happen when we spin up our cluster is that it’s going to create a service principal, i.e. an app registration. By default it will actually go and create that for us. In some scenarios you might not have as much control, or you might not want it to create that, in which case you can go to Configure Service Principal, click Use Existing, and then enter an app registration’s client ID and client secret there. But for this demonstration, we’re just going to tell it to create one. We’re also going to tell it to enable RBAC, so that means we can control access to our cluster using our directory.

Next we’ll go to Networking, where we’ve got a few different options. The first is that we can have a private cluster. By default, when we spin up our cluster, we’ll give it a public IP address. So in the background it actually creates an internal VNet with some internal addresses, but then we get a load balancer with a public IP on it so we can access it from the outside world. This might not be what you want; you might only want to access this from an internal network, for example if you’re running in a hybrid scenario, in which case you would enable the private cluster. But again, we’re just going to keep the default, disabled. Next, HTTP routing. You’ll see these little info icons next to everything, and we can see here that the HTTP routing option is deployed to your cluster when you want publicly accessible DNS names. Because that’s what we want, we’re going to go and actually click Yes. A cluster always includes a load balancer.

By default, that’s Standard. For the networking configuration, again, we can leave it as Basic. If we do go to Advanced, it basically gives us a lot more control over how the networking is done. By default, the wizard will set up all the networking for us; however, sometimes that basic networking setup is not what you want, in which case this is how you can go and override everything. However, we’re just going to stick to Basic. Next we’ll go to Monitoring. It’s going to ask to send your monitoring to a Log Analytics workspace. You would normally want to do this so you can monitor the cluster, but for now I’m just going to hit No because I’m not too bothered. Next, Tags, then Review and Create. Once it’s passed the validation, just go ahead and click Create. Once your deployment is complete, we can go to the resource and see the details of the AKS cluster that has been deployed.
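As an aside, everything we’ve just done in the portal wizard could also be scripted with the Azure CLI; a rough sketch, assuming the resource group and cluster names used above and letting the CLI create the service principal for us:

    az group create --name RSG-Kubernetes --location uksouth
    az aks create \
        --resource-group RSG-Kubernetes \
        --name CloudGuruAKS \
        --node-count 1 \
        --enable-addons http_application_routing \
        --generate-ssh-keys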

If we go to Node Pools, we can see our provisioned node pool. If we wanted to add extra node pools, we can do so within here. We can also scale the existing node pool, so for example we can add additional nodes. We can also delete a node pool and, of course, upgrade the Kubernetes version. What we want to do now is go to a command prompt. For this bit, we need to make sure the Azure CLI is installed; if you’ve not done that, you’ll need to revisit one of the earlier lectures where we walk through that. But assuming you have, what we need to do is get the credentials for our Kubernetes cluster via the az CLI. So let’s do that.

We type az aks get-credentials, then we tell it the resource group, giving it the name of the resource group that we created our cluster in. If you can’t remember some of the details, you can go to the overview page. We also need to tell it the cluster name, Cloud Guru AKS, and press Enter. Not like that, because I’ve spelled Guru wrong. So now we’ve merged those credentials, we can start to use the kubectl commands for controlling our cluster. So we can type kubectl, and the first thing we want to do is get nodes. We can see there that our agent pool is running. As we saw earlier, we’re going to be pulling an image from our Azure Container Registry. However, in order to do that, we need to give the AKS service itself access to pull the images from the container registry.
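Before we sort that access out, here’s roughly what we’ve just typed, assuming the resource group and cluster names from earlier:

    az aks get-credentials --resource-group RSG-Kubernetes --name CloudGuruAKS
    kubectl get nodes    # should show the single agent node in a Ready state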

So how do we give it that access? Well, when we created the cluster, you’ll remember one of the options was to create or use an existing app registration, or service principal, and we told it to create one. So the first thing we need to do is go and get the details of what it’s created. If you go to the usual Active Directory blade and then App Registrations, it will sometimes display there, and we can see the Cloud Guru AKS SP. If it doesn’t appear, you can switch between owned and all applications until you find the one you want. So we need to copy that.

Now we’re going to go to our container registry that we created earlier, which is that one there, and we’re going to go to Access Control. Then we’re going to say Add Role Assignment. The role we want is AcrPull, so that it can pull containers. Then I’m going to paste in the service principal that was created and click Save. Once that’s added, we can go back to our Kubernetes cluster and start figuring out the deployment. What we need to do now is create a YAML file that will define our deployment, and this will define things such as where we need to get the image from, which will be our container registry, what ports to expose, and basically how to run the service. We saw an example of a YAML file earlier on, but what I’m going to do is get an example one from my own GitHub repository. So in the command prompt, I am going to perform a git clone from https://github.com, and we want to go to the Complete Cloud Guru AZ repository.
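For reference, the role assignment we just did in the portal could also be scripted; a sketch, assuming hypothetical registry and service principal display names:

    # Get the registry's resource ID and the service principal's application ID
    ACR_ID=$(az acr show --name cloudguru --query id --output tsv)
    SP_APP_ID=$(az ad sp list --display-name "CloudGuruAKS" --query "[0].appId" --output tsv)

    # Grant the cluster's service principal permission to pull images
    az role assignment create --assignee $SP_APP_ID --role AcrPull --scope $ACR_ID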

Okay, what I’m going to do now is go into Visual Studio Code, or really just any text editor, to open one of the files that’s in there. So here we have an example YAML file which I’ve called azure-bb. What we’re going to do is pull the bulletin image that we were running earlier. So let’s have a look through the YAML file. First of all, we always set an API version, and the first thing we need to do is define a deployment, so we have this kind, Deployment. We have some metadata here, and we’re just giving the deployment a name. We’re then telling it we’re only going to have one replica of this, and we’re going to match a label of bulletin, which is used when we’re trying to select it. And then we define a template for what our deployment is going to look like.

So again, we’re going to give it a label: the app label is going to be bulletin. The container that we want is bulletin and, perhaps more importantly, there’s the container image path that we want. What we need to do here is put in our container registry name, which in our case is cloudguru, so we will replace that with cloudguru, and we need to make sure we have the azurecr.io on the end. Then it’s the name of the image, with the version tag that we had, on the end. We now need to define what port we’re exposing. Now, this is important: when we created this image earlier, we defined it with a port of 8080 rather than 80. And in fact, if we go to our original app here, you’ll see that when we defined our Dockerfile, we told it that we would expose 8080. Because the image that we uploaded is based on that, it’s quite important to know that it’s a non-standard port.

We then define some limits and requests for our resources, saying we want 250m of CPU with a limit of 500m. We don’t actually need this environment variable section, so we can get rid of that. Then we define our service. Again, in the metadata the name is bulletin, which as you can see we’re using throughout for consistency, and we’re setting the spec type to LoadBalancer. That’s going to give it a load-balanced IP when this gets built, and it means that if we build out additional nodes, those nodes will go behind the load balancer. And then finally we have this selector here. So just make sure that’s saved. All you should really need to do is update that with your image name, assuming of course that when you uploaded your image to the container registry, you called it bulletin. Let’s just minimize that. So there we’ve got our YAML file.
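Pulling all of that together, here’s a minimal sketch of what such a manifest looks like; the registry, image name, and tag shown are the ones described above and would need to match whatever you actually pushed to your own registry:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: bulletin-deployment
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: bulletin
      template:
        metadata:
          labels:
            app: bulletin
        spec:
          containers:
          - name: bulletin
            image: cloudguru.azurecr.io/bulletin:v1   # your registry / image : tag
            ports:
            - containerPort: 8080                     # the non-standard port the image exposes
            resources:
              requests:
                cpu: 250m
              limits:
                cpu: 500m
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: bulletin
    spec:
      type: LoadBalancer        # gives the service a public, load-balanced IP
      ports:
      - port: 8080
      selector:
        app: bulletin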

So what we want to do now is apply this YAML file to our cluster. We’re going to use the kubectl apply command, with -f to specify the file, and then we give it the name of our YAML file, which should be the azure-bb YAML file, and hit Enter. Okay, so it’s telling us it’s been created. What we can do now is check the status. The first thing I’m going to do is run kubectl get pods, because I want to see what pods are now running. We can see here that our bulletin pod is there, but the status is pending, so we really need to wait for that to actually get to a running state.
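In other words, the commands look roughly like this, assuming the file is saved as azure-bb.yaml:

    kubectl apply -f azure-bb.yaml   # -f points kubectl at the manifest file
    kubectl get pods                 # watch the pod go from Pending to Running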

Eventually you may get a status of ContainerCreating as it pulls the container from the registry and spins it up, and then finally we want the status Running. The next thing we need to do is understand what it’s running on. So again, we can run another command here: kubectl get service, and we need to pass in the name of the service, which in this case is bulletin. Then we’re going to put a --watch on the end, because we want to see what that service is doing. So we can see here our running service, bulletin, and we can see it’s of type LoadBalancer.

So we’ve got here the internal IP address that it uses on the non-addressable network, and then we’ve got the external IP, and I’m going to go and copy that. Then again, as you can see, we’re exposing port 8080, and here it’s telling us what the internal port is. The one we want is that information there. So let’s open up a new browser, paste in that external address, and we need to add the fact that it’s port 8080. And then, as you can see, that is now running our bulletin board app that we deployed earlier. If you want, you can shut down your cluster, or we can delete it completely. If we have a look at our resource groups, you’ll see our Kubernetes resource group, and in here we can see our Kubernetes service. However, behind that there are a number of virtual machines. So where are they?

Well, if we look, we’ve got this other resource group here. Now, we didn’t create this; it was created automatically for us. And if we have a look into this resource group, we’ll see a number of resources. We’ve got a public IP address, a network security group, a route table, the virtual network, which is the internal one, and our virtual machine scale set. Let’s go into that, and then we can see various details about it; it is effectively a virtual machine. However, it doesn’t appear in the virtual machines view, so it’s important to realize that you get to it by looking through here. It’s actually a virtual machine scale set.

So if you want to keep your Kubernetes cluster set up but don’t want to be paying for it on an ongoing basis, because you’ll be paying charges of around about $80 to $100 a month for that, what we can do is say Deallocate. And just like when we shut down our virtual machines, that will stop us from paying compute charges on the service.
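If you prefer, the same deallocation can be done from the CLI; a sketch with placeholder names for the automatically created resource group and scale set:

    az vmss deallocate \
        --resource-group MC_RSG-Kubernetes_CloudGuruAKS_uksouth \
        --name aks-agentpool-12345678-vmss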

However, we are still paying for our public IP addresses and we are still paying for the storage, but they are the cheapest part of this. Kubernetes is a very big subject, and far more than we can go into on this course. However, for the exam, it’s just important to understand how Kubernetes clusters generally work and the basic concepts of how you spin one up.

  3. Azure Functions Introduction

Azure Function Apps are used for developing small pieces of code, or functions, that each serve a specific purpose. Functions offer you the ability to write your code using your favorite development language, including C#, F#, Node.js, Java, PHP, or Python. A common example is a function that takes an uploaded image and resizes it. Functions can be triggered in a variety of ways, and when you create a function app, you can choose from a list of pre-built templates. These include an HTTP trigger, which triggers the execution of your code using an HTTP request, i.e. it may be exposed as an API endpoint that you would call from an application.

Timer triggers execute batch tasks on a predefined schedule that you set. GitHub webhook triggers respond to events that occur in your GitHub repositories, while generic webhook triggers process webhook HTTP requests from any service that supports webhooks. Cosmos DB triggers process your Cosmos DB documents when they are added or updated in a collection in the Cosmos DB NoSQL database. Similarly, blob triggers process your storage blobs when they are added to containers; for example, this might be used in the image-resizing function mentioned above. Queue triggers respond to messages as they arrive in an Azure Storage queue, and Event Hub triggers, Service Bus queue triggers, and Service Bus topic triggers also hook into the relevant service in Azure and are fired when other applications interact with them. As well as triggers, functions use something called bindings to ease integration with those other services. A binding is a way to connect a function to another component that can be used as either input or output. Commonly available bindings include Cosmos DB, Event Hubs, Event Grid, Notification Hubs, Service Bus, Azure Storage, on-premises systems (using the Azure Service Bus), or even Twilio for SMS messages. By combining triggers and bindings, we can quickly and easily create functions that respond to many different kinds of events, perform some sort of processing, and then create an output.
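To make that concrete, here’s a rough sketch of a bindings definition (a function.json, as used by the scripting languages) for the image-resize scenario described next; the container paths and connection name are hypothetical:

    {
      "bindings": [
        {
          "name": "inputImage",
          "type": "blobTrigger",
          "direction": "in",
          "path": "images/{name}",
          "connection": "AzureWebJobsStorage"
        },
        {
          "name": "resizedImage",
          "type": "blob",
          "direction": "out",
          "path": "output/{name}",
          "connection": "AzureWebJobsStorage"
        }
      ]
    }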

So if we take the image resize example, what we would do is bind an Azure Storage blob to a function using the blob trigger. Then, when a user uploads an image to that blob container, the function would use that input and run, resizing the image that was uploaded and saving the resized image to another container, in this case the output container. One of the big advantages of functions is that you don’t need to worry about all the underlying infrastructure, and you can easily scale them. There are two ways this is achieved, and it depends on the kind of pricing plan you choose: a consumption plan or an app service plan. We’ll take the app service plan first. The app service plan is how most of the services you’ve used so far are run and paid for. That means they effectively run inside a virtual machine, and therefore when defining the app service plan you have to say how much CPU and RAM you want to use.

You don’t, however, have access to the underlying VM, in pretty much the same way that web apps are based on CPU and RAM and run on a virtual machine that you don’t actually have access to. The important point is that you pay for the specified amount of CPU and RAM that you define, and although this can be set to auto-scale, i.e. to add additional resources based on predefined settings, you are always paying a minimum amount for that underlying VM. The app service plan, however, does provide some very important features over the alternative consumption plan, and therefore it should be considered when: you have existing, under-utilized VMs that are already running other app service instances; your function apps run continuously, or nearly continuously, in which case an app service plan might be more cost effective; you need more CPU or memory options than are provided by the default consumption plan; your code needs to run longer than the maximum execution time, which for a consumption plan is ten minutes; or you require features that are only available on an app service plan.

That’s often things like when you need an App Service Environment for isolation, or if you need VPN or VNet connectivity to either on-premises networks or VNets. It also, obviously, allows you to define larger VM sizes. Finally, if you want to run your app on Linux, or you want to provide a custom image on which to run your functions, again you would use an app service plan. If you do run on an app service plan, you should enable the Always On setting so that your function apps run correctly. On an app service plan, the function runtime can go idle after a few minutes of inactivity, so only HTTP triggers would, in that case, wake up your functions. Always On is available only on an app service plan, and it means that that idle timeout won’t happen. On a consumption plan, the platform activates functions automatically, so you don’t need to worry about it. So, conversely, consumption plans are considered serverless.

This means that you never need to worry about the underlying resources; the Azure platform will always ensure there are sufficient resources to run your apps. Because of this, you don’t pay for or define CPU and RAM. Instead, you pay for executions and the time that those executions take to run. This is ideal for scenarios where you have idle periods and very busy periods, as you’re only paying for when your code is being executed; if your functions aren’t being executed, then you’re not being billed at all. However, it’s important to know that function execution times out after a configurable timeout period. The default timeout period for functions on a consumption plan is five minutes, but this value can be increased for the function app up to a maximum of ten minutes by changing the appropriate property in a special file called the host.json file.
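For example, a minimal sketch of a host.json that bumps a consumption plan function app up to the ten-minute maximum:

    {
      "version": "2.0",
      "functionTimeout": "00:10:00"
    }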

On either a consumption plan or an app service plan, a function app requires a general Azure storage account, which supports Azure blob, queue, file, and table storage. This is because functions rely on Azure Storage for operations such as managing triggers and logging function executions. But some storage accounts don’t support queues and tables; these accounts, namely blob-only storage accounts and premium storage accounts (which only support VHDs), are filtered out from your existing storage account selections when you’re creating a function app. We’ll see that in practice when we go through an example. There are a few best practices that you should follow when building and architecting serverless solutions using Azure Functions. First, we need to avoid long-running functions.

As we said earlier, large, long-running functions can cause timeout issues. A function can become large due to too many dependencies, and importing dependencies can also cause increased load times that result in an unexpected timeout. Dependencies can be loaded both explicitly and implicitly; in other words, a single module loaded by your code may have its own referenced modules. So in your code you may define certain explicit dependencies with your using statements, but those modules might themselves have dependencies on other modules, and those are the implicit ones. Wherever possible, refactor large functions into smaller function sets that work together and return faster responses. We also need to consider cross-function communication. Durable Functions, which we’ll cover later, and Azure Logic Apps are built to manage state transitions and communication between multiple functions.

If you’re not using either of these to integrate multiple functions, use storage queues instead for that cross-function communication; the main reason is that storage queues are cheaper and much easier to provision. You could of course also use Service Bus topics, which are useful if you require message filtering before processing, and Event Hubs, which are useful to support high-volume communications. Functions should be stateless where possible; associate any required state information with your data instead. So, for example, an order being processed would likely have an associated state member such as processing, packaging, shipping, et cetera. A function could process an order based on that state, while the function itself remains stateless.

You also need to write your functions defensively. Assume your function could encounter an unexpected problem, and design your functions with the ability to continue from a previous stage point during the next execution. Try to share and manage connections, reusing connections to external resources where possible. Don’t ever mix test and production code within the same function app. Functions within a function app share resources; for example, memory is shared, so if you’re using a function app in production, don’t add test-related functions and resources to it, as it can cause unexpected overhead during production code execution. And be careful what you load into your production function apps, because memory is averaged across each function in the app. Asynchronous programming is a recommended best practice.

However, always avoid referencing the Result property or calling Wait methods on a Task instance, as this approach can lead to thread exhaustion. Some triggers, like Event Hubs, enable receiving a batch of messages in a single invocation; batching messages gives much better performance, and you can configure the maximum batch size in the host.json file. Finally, configure host behaviors to better handle concurrency. Again, the host.json file that comes with every function app allows you to configure the host runtime and trigger behaviors. In addition to batching behaviors, you can manage concurrency for a number of triggers. Often, adjusting the values in these options can help each instance scale appropriately for the demands of the invoked functions.
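As a rough sketch, assuming the version 2.0 host.json schema, batching and concurrency settings sit under the extensions section; the exact values here are illustrative only:

    {
      "version": "2.0",
      "extensions": {
        "http": {
          "maxConcurrentRequests": 100
        },
        "queues": {
          "batchSize": 16
        }
      }
    }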