Google Professional Cloud Network Engineer – Configuring Network Services: Load Balancer, CDN, DNS Part 2
- 3.1 Configuring load balancing: SSL, TCP, internal LB
TCP and SSL load balancers are external load balancers, also called SSL proxy or TCP proxy load balancers. So when would you typically use them? When you have SSL or TCP traffic that does not carry HTTP or HTTPS on top of it, and only then. The SSL proxy is intended for non-HTTP traffic; for HTTP traffic you just use the HTTP(S) load balancer. The SSL proxy sits at the global load-balancing layer, so this is again a global load balancer: SSL connections are terminated at the global layer and then proxied to the instances in the back end, and you choose whether that back-end leg travels over SSL or over unencrypted TCP. This lets the instances offload SSL. Typically the SSL configuration lives in the load balancer, the internal communication stays plain and simple, and you don't have to worry about configuring certificates inside each instance. The components are health checks, back-end services, the SSL certificate and key, and global forwarding rules; you can think of it as the HTTP load balancer with the SSL certificate added. Its benefits: intelligent routing, better utilization of virtual machine instances, certificate management, security patching, and support for a specific set of ports.
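If you prefer the command line to the console, here is a minimal gcloud sketch of those components wired together. Every name in it (ssl-hc, ssl-backend, my-ig, ssl-cert, ssl-proxy, ssl-fr) and the zone are hypothetical placeholders, and the certificate and key files are assumed to already exist locally.

```bash
# Hypothetical names throughout; adjust zone, ports, and files to your setup.
# 1. Health check used by the backend service.
gcloud compute health-checks create tcp ssl-hc --port 443

# 2. Global backend service that speaks SSL to the backends;
#    use --protocol TCP here instead to send unencrypted TCP after termination.
gcloud compute backend-services create ssl-backend \
    --protocol SSL --health-checks ssl-hc --global

# 3. Attach an existing instance group (placeholder my-ig) as the backend.
gcloud compute backend-services add-backend ssl-backend \
    --instance-group my-ig --instance-group-zone us-east1-b --global

# 4. The certificate and private key terminated at the load balancer.
gcloud compute ssl-certificates create ssl-cert \
    --certificate cert.pem --private-key key.pem

# 5. The target SSL proxy ties the certificate to the backend service.
gcloud compute target-ssl-proxies create ssl-proxy \
    --backend-service ssl-backend --ssl-certificates ssl-cert

# 6. Global forwarding rule: the front-end IP and port.
gcloud compute forwarding-rules create ssl-fr \
    --global --target-ssl-proxy ssl-proxy --ports 443
```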
The SSL proxy also rejects invalid HTTP requests and responses and negotiates HTTP/2 and SPDY/3.1; you can look those up if you need them, though I am not sure they are required right now. Note how the load is spread among the instances: the HTTP load balancer balances each request separately, whereas the SSL/TCP proxy sends all bytes from the same SSL or TCP connection to the same back-end instance, because that is connection-oriented. With HTTP, each request is separate, and you need session affinity if related requests should land on the same instance. I'm not going into a detailed demo for the SSL load balancer; I'll just walk you through the terminology we have. To create an SSL or TCP load balancer, go to Network Services, Load balancing, Create load balancer, and choose TCP load balancing, which covers the TCP proxy and SSL proxy; these proxies are external, and the backends can be single zone or multi zone. You are then asked what you want to do: from the Internet to my VMs, or only between my VMs.
If I say only between my VMs, that is an internal load balancer; if it is external facing, Internet to VMs, then you choose the first option. You can select multiple regions or a single region. Again, you need to provide a name and the back-end service configuration; this is TCP, so you have to select the region. I'll go with us-west1, and an existing instance group or existing instances. You can give instances directly, so I'll just create one as an example for you guys to understand; it looked like it was part of an instance group, but I think not, it was deleted. Okay, I'm creating one instance and not bothering about anything else but the traffic. The only thing you need to make sure of is that the instance carries network tags matched by a firewall rule that lets the load balancer's TCP traffic in; a sketch of such a rule follows below. I'm not doing that now because I'm not going to finish the setup; I just wanted to show you that when you create an instance, it will appear in the load balancer.
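As a rough sketch, such a rule looks like the following; the rule name, network, tag tcp-lb, and port are hypothetical, while 130.211.0.0/22 and 35.191.0.0/16 are Google's published source ranges for health checks and proxied load balancer traffic.

```bash
# Let the load balancer and its health checks reach instances tagged tcp-lb.
gcloud compute firewall-rules create allow-tcp-lb \
    --network default \
    --allow tcp:443 \
    --source-ranges 130.211.0.0/22,35.191.0.0/16 \
    --target-tags tcp-lb
```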
So: Create load balancer, Start, and Continue. I would choose TCP (or SSL, as appropriate), pick the load balancer region, and select the existing instances. Let me see where I created the instance; I think it's the us-east1 region, so us-east1, because that is how it will find the instance for the pool. Configure failover and the health check, decide whether you want session affinity, and then the front-end configuration with the same name, tcp-lb. Here you can choose either Premium or Standard network tier. What do we mean by Premium? I should have explained it earlier: Premium carries your traffic on Google's own fiber network all the way to a point near your user, whereas Standard can carry the traffic over the public internet as well. Then you just give the port, click Done, and click Create, and it will be created. I do not have anything installed and I am not intending to do a full demo for SSL right now, but that's it in a nutshell.
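For what it's worth, the same tier choice appears when you reserve the front-end IP address from the command line. A small sketch with hypothetical names, reflecting that addresses for Premium global load balancers are global while Standard addresses are regional:

```bash
# Premium tier: a global anycast IP; traffic rides Google's backbone
# from the edge location nearest the user.
gcloud compute addresses create tcp-lb-ip-premium --global

# Standard tier: a regional IP; traffic reaches Google over the
# public internet, like ordinary ISP routing.
gcloud compute addresses create tcp-lb-ip-standard \
    --region us-east1 --network-tier STANDARD
```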
The internal load balancer is a regional load balancer: it balances traffic between instances inside your VPC, so internal requests stay internal to your VPC network and region, likely resulting in lower latency. It works with auto mode VPC networks, custom mode VPC networks, and legacy networks, and it can also be implemented with regional managed instance groups, which allows autoscaling. So, in a nutshell, what we are saying here is that this is load balancing internal to your VPC: the traffic stays within GCP, the Google Cloud Platform, riding Google's fiber-optic network wherever cross-location connections are needed, and it is restricted to being internal to GCP.
There are restrictions. You cannot use the load balancer as the next hop of a manually configured route, and you cannot send traffic through VPN tunnels to the load balancer, so it stays within the GCP ecosystem. As for limits, a maximum of 50 internal load balancer forwarding rules is allowed per network, and a maximum of 250 back ends is allowed per internal load balancer forwarding rule, so make sure you understand those. The way it typically works in a large corporation, or in your own setup, is that a client request hops onto the load balancer, a physical device, and is then routed to the back end. Google's internal load balancer is different: it is delivered in software, so once the connection is established, the client talks to the back-end service seamlessly, and there is no extra hop in the middle.
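Here is a minimal gcloud sketch of that internal setup, assuming a us-central1 region and hypothetical names (ilb-hc, ilb-backend, ilb-fr, my-ig); note the internal load-balancing scheme and that the forwarding rule is a private address inside your VPC.

```bash
# Health check for the internal backends.
gcloud compute health-checks create tcp ilb-hc --port 80

# Regional backend service using the internal scheme.
gcloud compute backend-services create ilb-backend \
    --load-balancing-scheme internal --protocol TCP \
    --health-checks ilb-hc --region us-central1

# Attach an instance group (placeholder my-ig) in the same region.
gcloud compute backend-services add-backend ilb-backend \
    --instance-group my-ig --instance-group-zone us-central1-a \
    --region us-central1

# The forwarding rule is the private front-end IP inside the VPC.
gcloud compute forwarding-rules create ilb-fr \
    --load-balancing-scheme internal --region us-central1 \
    --network default --subnet default \
    --ip-protocol TCP --ports 80 \
    --backend-service ilb-backend
```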
Looking at the typical flow with regional back-end services, a client instance wants to talk to a back-end instance in us-central1-a or us-central1-b and reaches it through the load balancer. The selection algorithm: by default the internal LB uses a 5-tuple hash of client IP, client port, destination IP, destination port, and protocol, whether TCP or UDP. If you want to control how traffic is directed to the back ends, you can use the following affinities instead: a 3-tuple hash of client IP, destination IP, and protocol, or a 2-tuple hash of client IP and destination IP. For health checks, you can still configure TCP health checks, SSL health checks, and HTTP health checks, and this is what we saw already; multiple health checks are available, and which one you choose depends on the load balancer you use.
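Moving from the default 5-tuple hash to one of those affinities is a single-flag update on the backend service; a sketch against the hypothetical ilb-backend from the previous example:

```bash
# CLIENT_IP gives the 2-tuple affinity (client IP + destination IP);
# CLIENT_IP_PROTO gives the 3-tuple variant that adds the protocol.
gcloud compute backend-services update ilb-backend \
    --session-affinity CLIENT_IP --region us-central1
```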
If you look at the overall architecture of an internal load balancer, you can already see that it sits within your GCP environment and does not talk to any external services. The external network load balancer, in other words, balances load between systems based on incoming IP protocol data such as address, port, and protocol type, and network load balancing uses forwarding rules. This part is common to all your load balancer configurations; it is repeated here for your understanding, because many of you will refer to these slides when you study these topics for the exam, and you should have it ready. That is why this theory is repeated everywhere.
The network load balancer is a pass-through load balancer, so it does not add any hop to the connection. The terminology again: load distribution algorithm, target pools, session affinity, health checking, firewall rules, and, on top of network load balancing, the concept of connection draining. You can usually enable connection draining, and while this is not critical for your exam, guys, it is worth understanding. You enable connection draining on the back-end service to ensure minimal disruption to your users when an instance is removed from the instance group, either manually or by the autoscaler. To enable it, you set a timeout duration during which the back-end service persists existing sessions being handled by the instance about to be removed; the back-end service preserves those sessions until the timeout duration has elapsed.
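That timeout is a single setting on the back-end service; a sketch against the hypothetical ilb-backend, with a 300-second drain window:

```bash
# Give in-flight connections up to 300 seconds to finish before an
# instance is fully taken out of rotation.
gcloud compute backend-services update ilb-backend \
    --connection-draining-timeout 300 --region us-central1
```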
And we have seen and configured these numbers as well: typically the duration can be between 1 and 3600 seconds. Load balancer pricing is very straightforward and plain: roughly $0.025 per hour, that is 2.5 cents, covering the first five forwarding rules, about $0.010 per hour for each additional rule, and about $0.008, 0.8 cents, per GB of data processed by the load balancer (check the current price list, as these change). Autoscaling: we saw the concepts of autoscaling already; the instance group is where we configure it. It allows applications to gracefully handle increases in traffic and reduces cost when the load requires fewer resources. You define the autoscaling policy, and the autoscaler scales based on the measured load; there are different parameters, such as requests per second or CPU utilization. The terminology we use in autoscaling is the instance template, the instance group, and the autoscaling policy attached to the instance group.
The policies we can use for autoscaling are CPU utilization, load-balancing serving capacity, and Stackdriver Monitoring metrics; you can also trigger autoscaling based on a Cloud Pub/Sub queueing workload. Some notes: autoscaling only works with managed instance groups; unmanaged instance groups are not supported, and we saw this already in the compute section. Do not use Compute Engine autoscaling with managed instance groups that are owned by Google Container Engine, because Container Engine manages its own autoscaling; a sketch of a plain Compute Engine policy follows below.
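The sketch: attaching a CPU-based policy to a hypothetical managed instance group named web-mig; the serving-capacity and Stackdriver-metric policies mentioned above hang off the same command through other target flags.

```bash
# Scale web-mig between 2 and 10 instances, targeting 60% average CPU;
# wait 90 seconds after boot before trusting a new instance's metrics.
gcloud compute instance-groups managed set-autoscaling web-mig \
    --zone us-central1-a \
    --min-num-replicas 2 --max-num-replicas 10 \
    --target-cpu-utilization 0.6 \
    --cool-down-period 90
# For the other policies, swap --target-cpu-utilization for
# --target-load-balancing-utilization or --custom-metric-utilization.
```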
For those GKE-owned groups, we just need to make sure we are not tweaking anything. Usually, if you want to recognize the instances created by Container Engine, they carry "gke" as a prefix; you will find it on those instance groups and on the instances as well. We do not have much demo left for the load balancer, guys; whatever we saw were the demos for the load balancer. If you have any questions related to any of the demo or theory parts, let me know. Otherwise, you can move on to the next service. Thank you.
- 3.2 Configuring Cloud CDN.
Google Cloud Platform network services: Cloud CDN. CDN means content delivery network, and it is where you use Google's edge points, the CDN POP locations, to cache your content. Let's go ahead and get into it. Cloud CDN is part of the networking services; we have already seen the load balancer, and we are going to use a load balancer in our example to enable Cloud CDN on it and look at the logs. So what is Cloud CDN? Cloud CDN is low-latency, low-cost content delivery using Google Cloud Platform's network. You can enable logging, you can use VMs and Cloud Storage as the back ends from which you cache content, you can invalidate cached links, and it supports anycast IP addresses. We saw this in our introduction section: these are the fiber-optic cables that Google has laid, and my map might have changed already; there may be additional cables by now, but this is the fiber-optic network owned by Google.
All these blue lines are owned by Google, and the green lines are cables they have invested in, right? And you see the dots, which are locations; there are other dots as well, the POP locations, but these black dots are the ones that support CDN. So in India there are three such locations, and in South Africa there is one in Johannesburg that supports CDN caching in that particular location. Overall, throughout the world, there are 80-plus POP locations that support CDN content caching. Okay, so what is the CDN? Google Cloud CDN leverages globally distributed edge caches to accelerate content delivery for websites and applications served from Google Compute Engine. A CDN lowers network latency, offloads the origins, and reduces serving costs.
Once you have set up an HTTP or HTTPS load balancer, you simply enable the CDN with a single checkbox, and we are going to see it; the gcloud equivalent appears after the feature list below. The way the CDN typically works is that end users access your instances, whether VM instances or a Cloud Storage bucket, and the static content gets cached at the CDN location nearest the end user, so it is served without, say, reaching back to the back-end services. That is the CDN: caching at the POP location. The high-level features you can think of: caching at POP locations (Google has around 80-plus POP locations around the world where your content can be cached), global reach, SSL at no extra cost, and seamless integration with the load balancers. You have anycast IP support, you can do invalidation, you can do logging, and you can define the origins in the CDN.
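The single checkbox has a one-line gcloud equivalent on the backend service; web-backend here is a hypothetical, already existing global backend service behind the HTTP(S) load balancer.

```bash
# Turn on Cloud CDN for the backend service fronting your content.
gcloud compute backend-services update web-backend --enable-cdn --global
```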
So let us look at how it works on Google Cloud Platform. There are customers from across the globe accessing your services, the back-end services, via the HTTP load balancer. Whenever a customer tries to access some static content, the request is checked against the nearest CDN POP location to see whether that content is available there. If it is available and not yet invalidated, the customer is served that content from what is essentially a caching layer, giving very fast access to static content. Let us see this in action, step by step. Step one: say this particular consumer requests snowman.jpg, and snowman.jpg exists somewhere, whether on a VM or in Cloud Storage. The request goes to the POP location, which sees that no snowman.jpg is currently available there. It then checks the nearest POP location, if that is nearer than the back end, to see if the snowman is available there; it is not. So the request is forwarded via the HTTP load balancer to the back end, the back end returns the data, the POP stores the snowman in its local cache, which is what the term "cache fill" means, and the content is served back to the consumer.
What happens next is that if a customer from the same region, that's the assumption, tries to retrieve snowman.jpg, or a web page containing it, the POP serves the already cached snowman without going back to the back end and delivers it to the new consumer. That is what is called a cache hit: the next time someone tries to access snowman.jpg, it is available and is served from the cache.
So all of this, you can think of, is customers accessing static data, which you can cache. Consider another case, where a request for the same snowman.jpg goes to a different POP location that supports CDN. It will check in a similar fashion; there is nothing in its own cache, but the object is found in the nearest neighboring cache, so this cache is filled from the nearest POP location without going back to the back end, and the content is served to the consumer. This is what is called a cache-to-cache fill. Next, expiration times and validation requests for caches. Each entry in the Cloud CDN cache has an expiration time defined by HTTP caching headers: Cache-Control: max-age, Cache-Control: s-maxage, and Expires. If more than one is present, Cache-Control: s-maxage takes precedence over Cache-Control: max-age, and Cache-Control: max-age takes precedence over Expires.
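For origins in Cloud Storage, these headers are set as object metadata rather than web server configuration; a small sketch with a hypothetical bucket and object:

```bash
# Cache this object at the edge for one hour from the time of the fill.
gsutil setmeta -h "Cache-Control:public, max-age=3600" \
    gs://my-bucket/images/snowman.jpg
```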
So if your data sits in your VM instances or a Cloud Storage bucket and you want it cached at the CDN locations, you need to make sure at the same time that when there are changes or updates to that data, the front-end users get those changes. In a nutshell, when an expiration happens, that is, max-age elapses, and another request arrives, the CDN checks the Last-Modified value and the ETag: a request is sent to the back end with If-Modified-Since or If-None-Match. If the object has been modified since it was cached, the back end gives you the data again, the snowman again; if it has not been modified since that time, it sends a 304 Not Modified response back to the CDN. That is how you avoid another cache fill for the same content. So how does the caching get keyed? You need to understand this, because there has to be some mechanism: you don't want to cache each and every thing on the web server, and certainly not dynamic content, right? That's where the cache key comes into play. In this particular example, https://example.com/images/cat.jpg is the key under which the image is cached, so any new request that looks like this URL will map to cat.jpg and get the data out of the cache.
The file path and file name are always part of that exact match, but the cache key also determines whether the protocol, the host, and the query string are included or omitted; these include/omit choices are exposed as flags, sketched after this paragraph. When the CDN is enabled, caching happens automatically for cacheable data, with HTTP headers indicating which responses should be cached and which should not, so it is possible to predict whether a particular request will be served out of the cache. Note the data location caveat relative to the settings of other Cloud Platform services: cached data may be stored and served in locations outside the region or zone of your origin service, and this is a very important consideration if you are deciding whether to use the CDN. Also, if HTTP caching happens in the browser, outside Google Cloud Platform, Google Cloud Platform cannot control or invalidate that data.
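Sketched here against the hypothetical web-backend service: dropping the protocol from the key and keying on a single whitelisted query parameter instead of the full query string.

```bash
# Cache http:// and https:// responses under one key, and consider only
# the "version" query parameter when building the cache key.
gcloud compute backend-services update web-backend --global \
    --no-cache-key-include-protocol \
    --cache-key-query-string-whitelist version
```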
Next, cache invalidation. Once an object is cached, it normally remains in the cache until it expires or is evicted to make room for new content; you control the expiration time through the normal web server configuration. Sometimes you might want to remove an object from the cache prior to its normal expiration time, and that's where cache invalidation comes in. You can invalidate a single object such as cat.jpg, a whole path like /pictures using a pattern, or everything from a single host when everything has changed for that particular domain.
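Invalidation is issued against the URL map of the load balancer in front of the CDN; a sketch with a hypothetical map named web-map:

```bash
# Invalidate one object, or a whole path prefix, in the CDN cache.
gcloud compute url-maps invalidate-cdn-cache web-map --path "/images/cat.jpg"
gcloud compute url-maps invalidate-cdn-cache web-map --path "/images/*"
```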
But you want to look at the limitations that invalidation has, because you cannot fire continuous cache invalidation requests for the same content. The limitations concern things like the rate at which you invalidate an object and the scope of the invalidation, whether you invalidate the whole of example.com or only specific content. You need to take all of these into consideration when deciding on invalidating a particular cache. The majority of this falls within the jurisdiction of development or planning work, not really day-in, day-out operations on the cloud platform. There is also some pricing I have put forward for the CDN.
You can go through it, but it changes continuously, so you just need to keep watching; if you are really interested in understanding the price, do not refer to this, just look at the Google Cloud pricing page, because it might already have changed. We could get into a detailed demo, but let me share where you actually go to enable Cloud CDN. If you go to Network and Network Services, that is where you find Cloud CDN, the content delivery network, and you can add origins. I do not have anything right now, but you can also go to Load balancing, Create load balancer, back-end configuration, and Create back-end service; that is where you can enable Cloud CDN. If you want to use an existing load balancer, you can either change that load balancer or add an origin and choose the load balancer you have for the CDN. We will get into the details of Cloud CDN if required, but if you have any questions on the theory for Cloud CDN, let me know; otherwise you can move to the next lecture. Thank you.