Practice Exams:

AZ-303 Microsoft Azure Architect Technologies – Implement and Manage Data Platforms (10-15%) Part 2

  1. Azure SQL Db Introduction

Azure SQL Database is a general-purpose relational database service managed by Microsoft Azure. It supports structures such as relational data, JSON, spatial and XML. SQL Database delivers dynamically scalable performance within two different purchasing models: the vCore-based purchasing model and the DTU-based purchasing model. SQL Database also provides options such as columnstore indexes for extreme analytic analysis and reporting, and in-memory OLTP for extreme transactional processing.

As a PaaS service, Microsoft handles all patching and updating of the SQL code base seamlessly and abstracts away management of all the underlying infrastructure. Azure SQL Database provides different deployment options when creating databases.

The first is a single database, with its own set of resources managed by a logical server; the second is a pooled database in an elastic pool, with a shared set of resources which, again, are managed by a logical server; and the third is a database within a collection known as a managed instance, which contains system and user databases and shares a set of resources. SQL Database shares its code base with the Microsoft SQL Server database engine.

With Microsoft's cloud-first strategy, the newest capabilities of SQL Server release first to Azure SQL Database and then to SQL Server itself. This approach means that you always get the latest and newest SQL Server capabilities in the Azure PaaS service. As just said, SQL Database offers a couple of purchasing models, and within those, a couple of service tiers. The vCore-based purchasing model lets you choose the number of vCores, the amount of memory and the amount of storage that you want to use.

The vCore-based purchasing model also allows you to use Azure Hybrid Benefit for SQL Server to gain additional savings. This basically means that if you have existing SQL licences or an enterprise agreement with Microsoft, you can leverage those to get cheaper costs. The DTU-based purchasing model offers a blend of compute, memory and I/O resources in three service tiers to support light to heavy database workloads. Compute sizes within each tier provide a different mix of these resources, to which you can add additional storage resources.

Finally, they've recently introduced a serverless model, which automatically scales compute based on workload demand and bills for the amount of compute used per second. The serverless compute tier also automatically pauses databases during inactive periods, when only storage is billed, and automatically resumes the databases when activity returns. Azure SQL Database also offers three service tiers within these models, and you would pick the service tier depending on the type of application that you're building.
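The serverless billing idea described above can be sketched in a few lines: compute is billed per vCore-second while the database is active, and only storage is billed while it is paused. The rates and the `serverless_bill` helper below are purely hypothetical illustrations, not real Azure pricing.

```python
# Conceptual sketch of serverless billing: compute is billed per
# vCore-second while active; only storage is billed while paused.
# The rate figures here are made up for illustration.

def serverless_bill(active_seconds, avg_vcores, paused_seconds,
                    storage_gb, vcore_second_rate=0.000145,
                    storage_gb_second_rate=0.0000000398):
    compute = active_seconds * avg_vcores * vcore_second_rate
    # Storage is billed for the whole period, active or paused.
    storage = (active_seconds + paused_seconds) * storage_gb * storage_gb_second_rate
    return round(compute + storage, 2)

# A database active 2 hours a day at ~1 vCore, paused the other 22 hours:
daily = serverless_bill(active_seconds=2 * 3600, avg_vcores=1,
                        paused_seconds=22 * 3600, storage_gb=10)
print(daily)
```

The key point is in the first term: when `active_seconds` is zero (the database is paused), the compute charge drops to zero and only the storage term remains.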

First we have the General Purpose (or Standard) tier. This service tier is designed for common workloads and offers budget-oriented, balanced compute and storage options. Next up, we get the Business Critical (or Premium) tier. This is designed for OLTP applications with high transaction rates that require low-latency I/O. It offers the highest resilience to failures by using several isolated replicas. At the top end, we then get the Hyperscale tier. This is designed for very large OLTP databases, with the ability to autoscale storage and scale compute fluidly. For many businesses and applications, being able to create a single database and dial up performance on demand is enough, especially if usage patterns are relatively predictable.

However, unpredictable usage patterns can make it hard to manage your costs and your business model, so elastic pools are designed to solve this problem. You allocate performance resources to a pool rather than to an individual database, and you pay for the collective performance resources of the pool rather than for single-database performance. With elastic pools, you don't need to focus on dialling database performance up and down.

As demand for resources fluctuates, the pooled databases consume the performance resources of the elastic pool as needed. Pooled databases consume but don't exceed the limits of the pool, so your cost remains predictable even if individual database usage isn't. You can add and remove databases in the pool, scaling your app from a handful of databases to thousands, all within a budget that you control.

You can also control the minimum and maximum resources available to individual databases in the pool, to ensure that no database in the pool uses all the resources, and that every pooled database has a guaranteed minimum amount of resources available. As a fully managed platform-as-a-service database engine, Azure SQL provides a number of benefits over a traditional SQL Server running on a VM.
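The min/max behaviour described above can be modelled with a toy allocator: each database's demand is clamped between its guaranteed minimum and per-database maximum, and the pool as a whole never exceeds its total allocation. The numbers and the proportional-scaling scheme are illustrative only; they are not how the real service schedules resources internally.

```python
# Toy model of elastic-pool resource sharing. Each database's demand is
# clamped to [db_min, db_max]; if total demand exceeds the pool limit,
# the portion above each guaranteed minimum is scaled down pro rata.
# Figures are illustrative, not real service limits.

def allocate_pool(demands, pool_limit, db_min, db_max):
    # Clamp each database between its per-database min and max.
    wants = [max(db_min, min(d, db_max)) for d in demands]
    total = sum(wants)
    if total <= pool_limit:
        return wants
    # Over-subscribed: share out only the capacity above the minimums.
    spare = pool_limit - db_min * len(demands)
    extra = total - db_min * len(demands)
    return [db_min + (w - db_min) * spare / extra for w in wants]

# Three databases demanding 10, 80 and 200 DTUs from a 100-eDTU pool,
# each guaranteed at least 10 and capped at 100:
print(allocate_pool([10, 80, 200], pool_limit=100, db_min=10, db_max=100))
```

Note how the output always sums to the pool limit when over-subscribed, and every database keeps at least its guaranteed minimum, which is the cost-predictability point made in the text.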

Azure SQL handles most of the database management functions, such as upgrading, patching, backups and monitoring, without any user involvement. Azure SQL Database always runs on the latest stable version of the SQL Server database engine on a patched OS. On top of this, Azure SQL implements high availability. This ensures 99.99% availability, and backups are also automated and stored in geo-redundant storage with a number of different retention options.

The databases themselves can also be set to replicate across regions and be configured with auto-failover groups, so that disruption to any one region won't affect your solutions. You can, of course, have your SQL Server workloads running in a hosted infrastructure environment or running as a hosted service. The key questions you need to ask when deciding between the PaaS and IaaS options are: do you want to manage your databases, apply patches, take backups and so on, or do you want to delegate all these operations to Azure? You also need to ask what compatibility level you need.

Legacy solutions may not support Azure SQL Database, especially if they rely on traditional SQL Server capabilities such as Service Broker or non-standard queries. In these cases, you may need to take an IaaS approach and perform a lift and shift until you've had the time and resources to re-architect your solution to take advantage of the stability and management features of the platform as a service.

As mentioned, Azure SQL supports geo-replication and failover groups. Active geo-replication is an Azure SQL Database feature that allows you to create a readable secondary database in another region, or within the same region. However, active geo-replication is not supported by managed instances. For geographic failover of managed instances, you instead use auto-failover groups.

Active geo-replication is designed as a business continuity solution that allows the application to perform quick disaster recovery of individual databases in the case of a regional disaster or large-scale outage. If geo-replication is enabled, the application can initiate failover to a secondary database in a different region. Up to four secondaries are supported, in the same or different regions, and the secondaries can also be used for read-only queries. However, the failover must be initiated manually by the application or a user.

After a failover, the new primary has a different connection endpoint, which means your applications need to be aware of this possibility. Alternatively, auto-failover groups can be layered on top of geo-replication to not only make the failover happen without user intervention, but also supply a single connection endpoint that directs traffic to the active instance, making a failover transparent to your apps. In the next lecture, we'll go through the process of setting up some of these options.

  1. Azure SQL Db Walkthrough

In this lecture, we're going to step through the process of creating and managing a SQL PaaS database. Let's go to Create a resource; again, it's in the popular items, but if it doesn't appear there, search for SQL, choose Azure SQL, and then go ahead and click Create. We have a number of different ways we can do this: we can create single databases, SQL managed instances, or even SQL virtual machines. We're going to go for a single database. As usual, we need to configure some basic information. I'm going to create a resource group, give the database a name, and we also need to select a server.

Now, because we've not created a server before, we also need to create this. So click Create new server and give it a name. We then need to set a SQL admin login and finally a location. Next we tell it whether we want to use elastic pools or not; we're going to say no. And finally, configure the compute and storage. It defaults to a Gen5 vCore pricing model, which is actually quite expensive, so make sure you go in and change that to the sizing you want. What I want to do is use the Basic tier with the minimum DTUs, which is only about $5 a month and is fine for the tests we're going to be doing. Once we've set that, click Apply, then Next, and go to Networking.

As with other similar services, we can set various network connectivity options. I'm going to go for the default, which basically blocks all access. We then set the database collation, and we can also set the data source: we can create a blank database, create a database from an existing backup file, or create a database using a sample data set. I'm going to go for None to create a blank database. Once we're happy, click Create.

Okay, so once that's been created, I want to go to the actual SQL server first, not the database. By default in our quick menu we only get SQL databases, so let's go to All services, do a search for SQL server and choose SQL servers. This shows the SQL server we've just created. Don't forget, this is not a real physical server; it's more a logical representation of a server. But through here we can set some of the more advanced features. The first thing we can do is manage the backups. By default, all backups have a seven-day retention. That's enough for some people, but if you want to increase it, we can select the database and configure the retention.

And we can change it to various options, up to 35 days. That means we get a point-in-time backup for any point between now and 35 days in the past. But do be aware that the higher you set the retention, the more it will cost because of storage. You can also configure long-term retention if you need to keep backups for longer than that. Another thing to note is that, by default, when we create our SQL server, we can only log into it using SQL authentication.
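The point-in-time restore window described above is simple date arithmetic: with an N-day retention, you can restore to any moment between (now minus N days) and the latest backup. A minimal sketch:

```python
# Point-in-time restore window: with an N-day retention you can restore
# to any moment between (now - N days) and now.
from datetime import datetime, timedelta

def restore_window(now, retention_days):
    earliest = now - timedelta(days=retention_days)
    return earliest, now

now = datetime(2021, 6, 30, 12, 0)
earliest, latest = restore_window(now, retention_days=35)
print(earliest)  # 2021-05-26 12:00:00
```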

So that was the SQL admin login and password we set. If you do want to hook it into your Active Directory, you can use Set admin here and pick an Active Directory account. You can only set one Active Directory admin at a time, though. Next, let's actually have a look at our databases. We can see our database here, and if we click on it, it takes us through to the configuration of the database itself. First, let's look at some of these options: we've got Copy, Restore, Export and Set firewall.

Copy is a very useful tool if you've got a database and you want to perform some tests on it. What we can do is take a copy of the database, choose the destination server, set the pricing tier appropriately, then click OK, and that will create a full copy of the database, including any data inside it. Another useful feature is Export. Export effectively takes a point-in-time backup of your database, which can be very useful if you know you're about to make some breaking changes and want a point-in-time backup before you make them. It backs up to one of your storage accounts.

So simply go through and select a storage account, create a container to put it in, enter the username and password details that you set, and then click OK. That will perform a database export to our storage account, and we'll come back to that later. An important thing that we need to do is set the firewall. By default, we don't allow any access from anything other than Azure services, and if we wanted to connect, say, from our local laptop to create and manage some tables, we need to open the firewall to allow our laptop through. So go ahead and click Set server firewall. We have these options here.

An important one is Deny public network access. If you want to completely block all public network access, we set that, which means only clients within internal network ranges can access the server, and even then you have to define which subnets you want to give access to. You can also tell it to allow Azure services and resources to access the server. So if you are using a web app and the web app is going to connect to the SQL server, we need to set that to Yes to allow the access through. What we want to do now is just create some tables, so I'm going to click Add client IP, which adds my client IP address, and then click Save.
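Conceptually, the server-level firewall is just an allow-list of IP ranges: a connection succeeds if the caller's public IP falls inside any configured rule, and the "allow Azure services" switch is represented as a special 0.0.0.0 rule. The sketch below models that check with Python's `ipaddress` module; the rule values are made-up example addresses.

```python
# Conceptual model of the server-level firewall allow-list. A client is
# allowed if its IP falls inside any rule range; Azure-service traffic
# is modelled by the special 0.0.0.0 address, as in real firewall rules.
from ipaddress import ip_address

def is_allowed(client_ip, rules, allow_azure_services=False):
    if allow_azure_services and client_ip == "0.0.0.0":
        return True
    ip = ip_address(client_ip)
    return any(ip_address(lo) <= ip <= ip_address(hi) for lo, hi in rules)

# "Add client IP" creates a one-address rule (example address below):
rules = [("203.0.113.42", "203.0.113.42")]
print(is_allowed("203.0.113.42", rules))  # True
print(is_allowed("198.51.100.7", rules))  # False
```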

That will now allow me to connect. What I want to do next is go into SQL Server Management Studio, for example, and connect to that database. If we go to Show connection strings, we can see how we need to connect. This is a full connection string that you might want to put in an application, but what I want first is just the server name. I'm going to copy that, go into SQL Server Management Studio, connect to a database engine, paste those details in, set the authentication to SQL Server authentication and enter the details that I created. We can tell it to remember the password for future use, and then go ahead and click Connect.

Now we can see all our databases; we've got our to-do list database, and at the moment there's no actual data in it. So I'm going to create a new table, and I'm going to do this with a T-SQL query. I've made sure I've got my database selected here, then I enter a query that's just going to create a simple table and insert some data into it. If we refresh the tables, we can now see our Items table and the rows in it. There's also a query editor within the portal itself, which is a way to access the tables without having to go through SQL Server Management Studio. Again, you need to sign in using the credentials you created for the SQL server. You can see our tables here, and then we can issue T-SQL commands to query our database.
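The create-table-and-insert step looks roughly like the sketch below, here shown with Python's built-in sqlite3 module standing in for Azure SQL so it runs anywhere; the T-SQL you'd issue against the real database has the same shape. The table and column names are made up for illustration.

```python
# Sqlite3 stands in for Azure SQL here; the SQL itself mirrors the
# create-and-insert step from the walkthrough. Names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Items (Id INTEGER PRIMARY KEY, Title TEXT)")
conn.execute("INSERT INTO Items (Title) VALUES (?)", ("Buy milk",))
conn.execute("INSERT INTO Items (Title) VALUES (?)", ("Walk dog",))
conn.commit()

rows = conn.execute("SELECT Id, Title FROM Items ORDER BY Id").fetchall()
print(rows)  # [(1, 'Buy milk'), (2, 'Walk dog')]
```

Against the real service you would run the equivalent statements from SQL Server Management Studio or the portal's query editor, over the connection opened in the previous step.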

When we created this, we set it on a small database size. If you do need to increase the configuration, we can go to the Configure option and scale our database up to a higher tier accordingly. In fact, you can scale up and down as many times as you want, whenever you want. Do be aware that during the scaling process, what's actually happening in the background is that the database is being copied to a new database, which means there can sometimes be a bit of a lag in some functionality while that's happening. The next thing we're going to do is look at replication. At the moment we have one database on this one server in UK South, which means if there were connectivity issues to that server, we'd lose access to the database.

So what we can do is create a geo-replica of the database. If we go into the database itself and then to Geo-Replication, this allows us to add replicas and failover groups. By default, we've just got one primary copy. What I want to do is create a copy of it in UK West. That brings up the configuration screen asking where to put the secondary copy, and effectively what we need to do is create a second SQL server to hold the replica database. So we go and create a new server, and because it's a completely separate server, we need to enter some login credentials. We again set the sizing and then click OK. That will submit a deployment which, in the background, basically creates a new SQL server and then copies the database across to it. Once that's complete, it appears as a secondary replica. Note that the status is actually "Readable"; that's quite important, so let's see what that effectively means. First, go to the UK West server and note its server name. Then, in SQL Server Management Studio, I'm going to create a new connection, this time to the UK West server, and again enter my login details. Because it's a new server, we also need to go and add our client details to that server.

So let's just go into its Firewalls and virtual networks, again add my client IP, and connect again. Now I can once more see this database, and from it I can query the data. As you can see, it's an exact copy. However, what we're going to try now is to insert a record into this database. So, against this replica database, I'm going to update these details and hit Execute. It now fails to update the database, because it's a read-only replica. Obviously, if we did that against the other database, it would work fine, as we saw earlier. So what we might want to do sometimes is fail over the SQL server. By default, failover doesn't happen automatically; it's a manual process. All we've essentially done so far is create a read replica.

So what we need to do, on the primary SQL server, is go to Failover groups and add a failover group. We need to tell it which is the secondary server; in this case, it's the one in UK West. Then set the read/write failover policy: we're going to use Automatic, although you can set it to Manual. Finally, tell it which databases to include; if you have multiple databases, you might not want to replicate them all or have them all in the failover group.

So I'll go ahead and click Select and then click Create. Once that's finished, our failover group appears here, and it shows us which is the primary and which is the secondary server. In fact, if we go into it, we get to see the details: the primary server is there and the secondary is there. What it also gives us is a different endpoint, the listener. If we've got automatic failover and our applications were pointing to one specific server, then when a failover occurred, all our connections would break. Therefore, instead of connecting to individual servers, we connect to this failover endpoint. Let's show that in action.

Again, let's go back up here and create a new database engine connection, and this time change the server name to the failover listener. It's important that your logins are the same across the servers; otherwise, if you have a failover event and you've got a different username and password on each one, you're not going to be able to connect anymore.
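The value of the listener can be sketched as follows: the listener name stays constant across failovers, so the application keeps a single connection string instead of tracking which server is currently primary. The server, group and login names below are made up for illustration, and the string is a simplified sketch of an ADO.NET-style connection string rather than the exact one the portal generates.

```python
# Sketch of why the failover-group listener matters: the listener name
# stays constant across failovers, so apps keep one connection string.
# Server/group/login names here are hypothetical.

def connection_string(server, database, user):
    return (f"Server=tcp:{server},1433;Database={database};"
            f"User ID={user};Encrypt=True;")

# Pointing at a specific server breaks when that server stops being primary:
per_server = connection_string("myserver-uksouth.database.windows.net",
                               "tododb", "sqladmin")

# Pointing at the failover group's listener survives a failover, because
# its DNS record is repointed to whichever server is currently primary:
listener = connection_string("mygroup.database.windows.net",
                             "tododb", "sqladmin")
print(listener)
```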

We'll go ahead and click Connect. Now we can see our databases, but by connecting this way, we don't actually know which region we're in. In fact, it doesn't matter to us, because it means that if one region went down, it would automatically redirect to the other. Just to show that in action: on this connection here, we tried to create an item in the UK West database and it wouldn't let us, because it was read-only.

If we go over and initiate a failover, once the failover is complete we'll see on the status screen that the UK West server is now the primary. And if we go back and reissue the statement, we're now inserting a record against the UK West database, and it now allows us to add the record. If we go back to our shared listener and check the database, we can see the new record has been added. It's important to understand with replication and failover that you effectively have two separate databases on two separate servers.

Therefore, your costs for those databases will be doubled. Before we move off SQL Server, I'm going to show you one more piece of functionality. On my local server, I'm going to create a new database, and in that database, a single table. So we've got our table there. This database is running locally on my computer, but what I can do is go to Tasks and choose Deploy Database to Microsoft Azure. We need to tell it which connection to use; I'm going to go directly against the UK West server. We can set what edition we want to use, so again the pricing tier here, then click Next, confirm the details and Finish.

This will then automatically perform a schema extraction and full export of our database, copy it up into Azure, and restore all that information into a new database within Azure. Once that's finished, we can see the new database in our server, and in fact, if we go over to the portal, we can see the new database that's been copied.

Note that the new database is not going to automatically copy across to the other server, because you have to specifically replicate each database in turn. Before we finish up, I'm going to clear down the resources I've created by first deleting the failover group and then, once that's done, deleting each of the servers. You can leave them there if you want to, for your own practice; however, don't forget that you will be incurring charges.