Google Professional Cloud Network Engineer – Implementing a GCP Virtual Private Cloud (VPC) Part 2
- 2.2 Configuring routing
Let's configure routing on Google Cloud Platform. First, what are routes? A route defines the path that network traffic takes to get from a source to a destination — essentially a direction telling the source how to reach the destination. Cloud Router, in turn, is a router you can think of as an SDN (software-defined networking) router on Google Cloud Platform. It is a control-plane service, not the traditional hardware device that sits in your data center and carries the traffic; the data plane is handled by the VPC network itself. We are going to look at Cloud Router as well. We have already seen it in the context of hybrid networking, but there are some bits and pieces we need to add as part of the syllabus.
If you look at the syllabus, we have to cover: first, internal static and dynamic routing; second, routing policies using tags and priorities; and third, configuring NAT instances. Let's go one by one, starting with internal static and dynamic routing. There are multiple types of routes, some of which we have already seen. The first is the default route — the route to the internet gateway. This is created automatically when you create a VPC network. If I go to the network section and then to Routes, you can see the route to the internet gateway is already created for you, so you don't have to create it again. Its destination is 0.0.0.0/0 — open to the world — and its next hop is the internet gateway. The second type is the subnet route.
If you look at all the other routes, these are the routes linked to each and every subnet in the VPC. Take this example with destination 10.152.0.0/20: if I go to the VPC network, you can find the australia-southeast1 subnet with that range, and a route is defined to reach it. Within each subnet, the gateway takes the first usable IP address, and a few more addresses are reserved by the network; all the remaining addresses are available for you to assign. In this way, routes are created automatically for each and every subnet in the network. Then there are static routes, which you can create yourself: if I go to Create Route here, I can define a custom route — that is a static route. Finally there are dynamic routes; there are multiple aspects to dynamic routes, and we will talk about them in more detail under Cloud Router. But first, let me explain what a dynamic route is.
If I go to, say, the default network, there is a dynamic routing mode setting. If I choose Regional, Cloud Routers will learn routes only in the region in which they are created — that is region-specific dynamic routing. Global dynamic routing, on the other hand, applies to all regions. With dynamic routes, you don't define them statically the way subnet routes are defined; they are dynamic in nature — whenever you add new subnetworks, they get advertised automatically and you don't have to do anything else. To contrast with static routing: any route configured manually, in the default or a custom network, is a static route. All the routes created here by default by the virtual network are static routes. A static route can use any of the supported next hops, and you manage it manually.
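For reference, the dynamic routing mode can also be set from the CLI rather than the console; a minimal sketch, where `default` is just the example network from this lesson:

```shell
# Switch an existing VPC network between regional and global dynamic routing.
# Regional: Cloud Routers learn/advertise routes only in their own region.
# Global:   Cloud Routers learn/advertise routes across all regions.
gcloud compute networks update default --bgp-routing-mode=global

# Verify the current mode.
gcloud compute networks describe default \
  --format="value(routingConfig.routingMode)"
```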
Dynamic routing. We are going to revisit the example we already saw as part of Cloud Router, and we will cover it here as well since it is part of the syllabus. There is not much to add beyond what we already saw: dynamic routes are managed by one or more Cloud Routers, so a Cloud Router must exist to manage them. Their destination always represents an IP range outside your VPC network, and their next hop is always a BGP peer address. If you look at the diagram, there is a BGP link, and this BGP link is used to advertise the routes whenever there are any changes.
As an example, if you have added, say, Legal and Customer as departments and created subnetworks for them inside your data center, those subnetworks get dynamically advertised via Cloud Router — that is dynamic routing; we do not configure them manually as with static routing. Conversely, if your VPC network gets an additional subnetwork and you want to advertise it to your own data center network, it is advertised over the BGP link and you do not need to manage it statically. This applies to VPN and Cloud Interconnect, and that's dynamic routing.
Let's look at routing policies, routing priorities, and routing applicability — that is, how routing priorities affect which route is used. Typical routing applicability is as follows: system-generated routes apply to all instances in the VPC network; custom static routes can apply to all instances or to specific instances, depending on the tag attribute of the route; dynamic routes apply to instances based on the dynamic routing mode of the VPC network. As for routing order: subnet routes are considered first and take precedence. If the packet does not match the destination of a subnet route, GCP looks for another route with the most specific destination. If more than one route has the same most specific destination, GCP considers the priority of the routes. If no route matches, GCP drops the packet. For us, the most important part is what happens when more than one route has the same most specific destination.
When routes tie on the most specific destination, GCP considers the priority of the routes. This is based on the BGP MED (Multi-Exit Discriminator): the lower the number, the higher the priority. For example, a route with a priority value of 100 has higher priority than one with a priority of 200 — the larger the number, the lower the priority. Routes and network tags: you can apply tags to a route, and the route will then apply only to instances that have at least one of the listed tags. If you don't specify a tag, GCP applies the route to all instances in the network. Before we get to next hops, let's go ahead and create a route. I'll call it my-route, and for the network let me select the auto-mode network created in an earlier exercise. I'll give the route priority 1000 and the tag web-traffic — I'm just putting in example values here.
Okay? So the route is getting created. It has this destination IP range, this priority, the instance tag web-traffic, and the next hop is the internet gateway. If I create an instance with that tag, this route gets applied to it — you can think of tags as selecting which instances a route applies to. Under Applicable instances there are none right now, but any tagged instance you create in the auto-mode network will pick up this route. Going back to the theory: for static routes, what is a next hop? The next hop is where the route sends traffic when that path is chosen; a static route forces traffic to its next hop. In our case, as I mentioned, it is the internet gateway. And again, if you don't specify a tag, GCP applies the route to all instances in the network.
Okay? Cloud Routers. We saw that Cloud Router is the router which makes the BGP announcements when there are network changes. What happens if the Cloud Router fails? Remember, it is an SDN (software-defined networking) component and is not in the data plane. So if it fails while data is flowing from a source to a destination, that traffic is not significantly impacted. It also has a graceful restart property: if it fails, it restarts automatically, so you don't have to worry much about Cloud Router failures. You can still build high availability using multiple routers if you want, so that if the link from your on-premises router to one Cloud Router goes down, you have other paths to reach your Google Cloud Platform resources.
Here is the list of parameters you pass when you create a route: name and description, network, destination IP range (which we saw), priority, next hop, and a network tag if you want one. There are different kinds of next hop: a next-hop address, a next-hop gateway, a next-hop instance, or a next-hop VPN tunnel. If we go back to Create Route in the console, you can find this next-hop field offering a gateway, an instance, an IP address, or a VPN tunnel, plus the network tags field. As we said, if you apply a tag to an instance, the route applies only to instances carrying that tag.
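Putting those parameters together, the same route can be created from the CLI. This is a sketch — the names (`my-route`, the `default` network, the alternative next-hop targets) are illustrative, and exactly one `--next-hop-*` flag may be given per route:

```shell
# Static route to the internet gateway, applied only to instances
# carrying the tag "web-traffic" (lower priority number = preferred).
gcloud compute routes create my-route \
  --network=default \
  --destination-range=0.0.0.0/0 \
  --priority=1000 \
  --tags=web-traffic \
  --next-hop-gateway=default-internet-gateway

# Alternative next hops (choose exactly one per route):
#   --next-hop-address=10.128.0.5          # forward via an IP in the network
#   --next-hop-instance=my-nat-instance    # forward via a VM (e.g. a NAT instance)
#   --next-hop-vpn-tunnel=my-tunnel        # forward via a Cloud VPN tunnel
```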
- 2.3 Configuring and maintaining Google Kubernetes Engine clusters – Part 1
Configuring and maintaining Google Kubernetes Engine clusters. Let's get into the details of how Google Kubernetes Engine uses the Google network. If you look at the syllabus, the points we have for this exam are: VPC-native clusters using alias IPs, clusters with shared VPC, private clusters, cluster network policies, and adding authorized networks for cluster master access. Before we get into the individual items, let's understand the types of cluster we can use. There are essentially two types of clusters available on Google Cloud Platform.
The first is route-based and the second is alias-IP-based. There is also a third kind, the private cluster, which we are going to talk about, but these two are the main networking types in use. Before we get into Kubernetes in detail, let's understand at a high level what Kubernetes Engine is. We already saw this; I'm just recapping to show how networking comes into play. A Kubernetes cluster has a master and multiple nodes. Pods get deployed onto the nodes, the nodes run those Pods, and the Pods contain your containers, which run your business processes.
A Pod uses storage as well as an IP address, and all the containers inside a Pod share a single IP address and a single network namespace — that is very important for understanding how container networking works. So, back to business: how do networking and IP addressing work for Kubernetes Engine? In a Kubernetes Engine cluster, clients connect from the outside world to a particular node to reach a particular Pod, so you need an IP address for the node to locate the service or the Pod. Then you have IP addresses for the Pods themselves — a separate address space that you can configure and use.
The Pod address space is used by the container workloads, and those containers expose different port numbers if a service is exposed externally. Typically — and this is a very common setup, considering a Pod as a client as well — the Pod connects to the Service, whose configuration lives in the master. The Service has its own IP address space, and the Service forwards traffic to the different Pods. You can think of a Service as an abstraction layer in front of the Pods: the only thing you need to know as a client is the Service; you don't need to know where the Pod is or how it is located.
If there are multiple Pods in the Kubernetes cluster, the Service abstracts them away and gives the client a single endpoint, so you need a service endpoint for your backend Pods. Now let's look at the first type: route-based clusters. This is the typical — you can think of it as the default — cluster available on Google Cloud Platform, and for Kubernetes in general. As we saw, the network is needed by three entities: nodes, Services, and Pods.
And that is the key thing to understand in the context of Kubernetes: how these three address spaces differ between route-based and alias-IP-based clusters — we are going to see that. The clusters we will look at are route-based, alias-IP-based, with shared VPC, and private clusters. Let's start with route-based. Even though it is not explicitly in the syllabus, it is the default networking available for Kubernetes Engine. How does this cluster type use networking on Google Cloud Platform? It is based on GCP routes — a typical routing method used by the cluster. Node addresses come from the primary subnet: wherever you deploy your cluster, the node addresses come from the primary subnet of that region.
The Service range gets a /20 address space, and the Pods get the remainder of the cluster IP address space. As an example, if you look at the details of a container cluster, the cluster IPv4 CIDR might be 10.56.0.0/14; out of that, the Service range would be 10.59.240.0/20, used purely by Services, and the remainder is used by Pods. The node addresses come from your primary subnet. Let's go ahead and create one — let me create my default cluster.
I forgot to type "create" there — and let me also shorten the name, otherwise it will be too much to type. So: gcloud container clusters create my-default-cluster. It is creating a default cluster for me; I'll pause the video while it provisions. It will also show up in the console, as part of the Compute services, under Kubernetes Engine. They renamed it to GKE at one point, but now I think they use the name Kubernetes Engine directly, because Kubernetes is so popular nowadays that Microsoft Azure and Amazon have also started offering Kubernetes clusters.
The cluster is created now. Let's describe it: gcloud container clusters describe my-default-cluster. The networking section is what matters for us. It says the cluster IPv4 CIDR range is 10.4.0.0/14 — that is your cluster IP range. Let me filter the output for the ranges we care about: there is the cluster IPv4 CIDR (the Pod range), the services IPv4 CIDR address space, and the node IPv4 CIDR size, which is /24. Now let's look at the node details with kubectl get nodes.
There are three nodes available for us to use, and for each of these nodes there is a route defined: this is the first node's route, then the second and the third. All of those per-node routes carry the Pod traffic for that node, and the node's own IP address comes from the subnet of the zone — my default zone is in us-central1, so these node addresses come from the us-central1 subnet. So we saw how addressing is done: node addresses come from your local subnet, and there are separate IP address ranges for Services and for Pods. Let's move forward. That's route-based: we saw the cluster IP range and the service IP range, right?
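As a quick sanity check on these sizes, the number of addresses a CIDR block provides is 2^(32 − prefix length); the /14 Pod range, /20 Service range, and /24 per-node range from the describe output work out as follows:

```shell
# Addresses available per CIDR prefix length: 2^(32 - prefix).
for prefix in 14 20 24; do
  echo "/$prefix -> $(( 1 << (32 - prefix) )) addresses"
done
# /14 -> 262144 addresses
# /20 -> 4096 addresses
# /24 -> 256 addresses
```

The /24 per node (256 addresses) is why GKE can comfortably schedule its default maximum of 110 Pods on a node while leaving headroom for Pod churn.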
And we saw that node IP addresses come from the local subnet. Now let's look at VPC-native clusters using alias IPs. The key difference is that you provide your own IP ranges — any valid range within the Google Cloud Platform address space — while creating the cluster, and the cluster uses those ranges. We are going to see an example, but first let's look at some of the benefits: Pod IPs are natively routable to any GCP network, including over peering; and Pod IPs are reserved in advance, so there are no conflicts at runtime.
Anti-spoofing checks are performed by the network, preventing communication from outside the cluster's assigned ranges. Pod IP addresses can be advertised by Cloud Router itself, because they form a valid CIDR range in Google Cloud Platform and do not have to come from your subnet's primary range. Alias IP addresses can be reached by services directly, without a NAT gateway. There are some limitations: you cannot currently migrate a VPC-native cluster to one that uses advanced routes; you cannot currently migrate an existing cluster that uses routes for Pod routing to one that uses alias IP addresses; and you cannot use legacy networks with VPC-native clusters. Let's get into the details with a demo. I have a sample request already prepared; I'll copy it over, use it to create the next cluster, and then we'll compare how the addressing works.
Okay, so I run gcloud container clusters create with the alias-IP options, giving my own cluster and services ranges. It is getting created — I'll pause the video. We hit some errors along the way. First, it says the address range I chose is not available because we are already using it, so let me change that parameter. Then it complains the name already exists, so let me change the cluster name. Then: "budget exhausted, five attempts, requested size not available" — a quota problem; you cannot create this many clusters, so I reduced the cluster size and deleted the clusters that were not being used. Now the cluster is ready. Let's describe it: I take my create command, change create to describe, and grep for Ipv4. You can see the cluster IPv4 CIDR range is exactly the range we gave in the command — the cluster CIDR block matches. The service IP range also matches what we gave, and you can choose to give it a /20, /19, /16, or whatever you want.
But you need to keep a reasonable amount of service IP addresses, depending on your requirements. The node IPv4 CIDR size is /24. So this is the IP addressing, and it is something you control directly, without relying on routes to carve up your address space. If I go back and look at the routes, there are in fact no per-node routes created for this address range — alias IP ranges are handled natively by the network. If you look at the earlier cluster, my-default-cluster, there are routes created for each node. (I reduced the number of nodes to just one, which is why you see only one route there now; the other two routes were deleted.) Going back here.
So we saw that we provide two secondary address ranges: one for Pod addressing and one for Service addressing. The range management is either done by GKE or managed by you on your own. The maximum size of the cluster depends on the size of the secondary ranges — and this is very important: how many nodes you can create and how many Pods you can run inside the cluster all depends on the ranges you configure. You provide the address ranges (CIDR ranges) while creating the cluster. As for the nodes, their IPs definitely come from the subnet's primary range — let's look at the nodes.
If I go to the nodes again, these node IP addresses all come from the subnet range; if you go to Compute Engine, you can see the VMs that were created, and all their addresses come from your default subnet. The cluster Pods get addresses from the cluster IPv4 CIDR (or a named cluster secondary range), and Services get addresses from the services IPv4 CIDR (or a named services secondary range). In our example, we passed the cluster IPv4 CIDR and the services IPv4 CIDR explicitly.
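The whole flow above can be condensed into one create command. This is a sketch — the cluster name, zone, and CIDR ranges are illustrative; pick ranges that don't collide with anything already in your project:

```shell
# VPC-native (alias IP) cluster: you choose the Pod and Service ranges yourself.
gcloud container clusters create my-cluster-2 \
  --zone=us-central1-a \
  --num-nodes=1 \
  --enable-ip-alias \
  --cluster-ipv4-cidr=10.100.0.0/14 \
  --services-ipv4-cidr=10.104.0.0/20

# Confirm the ranges that were actually assigned.
gcloud container clusters describe my-cluster-2 --zone=us-central1-a \
  --format="value(clusterIpv4Cidr,servicesIpv4Cidr)"
```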
- 2.3 Configuring and maintaining Google Kubernetes Engine clusters – Part 2
Clusters with shared VPC. We are not going to do a full demo here, because we have already seen two clusters created; we'll just put forward some considerations around clusters and shared VPC. Clusters that use a shared VPC cannot use legacy networks: they have to be alias-IP-based, like the later cluster we created. You cannot convert an existing cluster to shared VPC. Cluster quotas do apply — numbers such as the number of nodes and the number of networks you are using may be affected. And the service account used to create the cluster must have the necessary permissions. As we understand from shared VPC, there is a host project and there are service projects.
All of that permissioning is driven through the service account: the service account with which you create the cluster must have enough permission to create or use network resources from the host project — that is very important. That is the main consideration when you create a cluster in a shared VPC. The next point is private clusters. What is a private cluster? Private clusters ensure that traffic stays within your internal RFC 1918 address space: no external IP addresses are assigned to the Pods or the nodes. That is what we call a private cluster — a cluster used internally within your GCP network. You can still choose to expose the master to the outside world while keeping everything else — all the nodes and Pods — inside your cluster, and you can use a load balancer to route traffic to the internal services. The whole idea of a private cluster is that you don't want the cluster accessible from the outside world; you can use an HTTP(S) load balancer or an internal load balancer to accept traffic. There is also VPC peering involved.
The nodes are in one VPC and the master is in another, and in that case VPC peering connects them together. Private Google access: if the cluster has no access to the internet, it needs private connectivity to Google services, which is where you can use Private Google Access in private clusters as well. There are some limitations for private clusters: each private cluster you create uses a unique VPC network peering; the RFC 1918 block for the cluster master must be a /28; and while GKE can detect overlap with the cluster master address block, it cannot detect overlap within a shared VPC network. That last point is a very important consideration when you create a private cluster in a shared VPC. And that's it for private clusters.
Let's move on and understand the next concept: cluster network policies. A network policy is a fine-grained access control that you can enforce between your Pods and Services. Looking at the architecture: your client accesses a Service, and the Service is an abstraction layer on top of your Pods. Depending on scalability and autoscaling, there can be one or many Pods, but the Service takes care of routing your traffic to the backends. (In a GKE or Kubernetes environment, the workload behind a Service is commonly managed as a Deployment.) With a network policy, you can define which services can talk to which Pods and ports in the backend. It enables defense in depth: for example, in a multi-application cluster, you can restrict communication between your applications' Services and Pods. You can define a tenant per namespace, which means defining a different namespace for each application.
With a network policy, you can then restrict communication between those namespaces. As an example, consider an app called demo with a namespace created per application; you can restrict communication between those namespaces using network policies. The simple way to enable enforcement is the --enable-network-policy flag while creating your cluster. You can also enable it on an existing cluster by updating the add-on with --update-addons=NetworkPolicy=ENABLED and then updating the cluster with --enable-network-policy — the same parameter you would use at creation time. But note that updating an existing cluster with --enable-network-policy recreates the node pools.
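The commands just described look roughly like this (the cluster name `my-cluster` is illustrative); note the two-step update on an existing cluster:

```shell
# At creation time: enable network policy enforcement from the start.
gcloud container clusters create my-cluster --enable-network-policy

# On an existing cluster: first enable the add-on on the master...
gcloud container clusters update my-cluster \
  --update-addons=NetworkPolicy=ENABLED
# ...then enable enforcement on the nodes (this recreates the node pools).
gcloud container clusters update my-cluster --enable-network-policy
```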
That is the one consideration you need to keep in mind, and that is all about network policies. The last point we have is adding authorized networks for cluster master access. What do we mean by that? If you have a cluster that should only be used from a specific subnet or CIDR range, you can add that range as an authorized network on the cluster. That way, access to that cluster is restricted to clients from those particular CIDR ranges — that is your authorized network. You configure authorized networks to whitelist specific CIDR ranges and allow only IPs from those ranges to access the cluster master. Authorized networks are compatible with all types of cluster, whether private, alias-IP-based, or route-based. A cluster can have no more than 20 authorized network CIDR ranges.
You can think of that as a limitation. You can enable authorized networks on a private cluster as well. How do you enable it? You pass the --enable-master-authorized-networks flag together with --master-authorized-networks and the CIDR range; if you have multiple CIDR ranges, you can provide several. On an existing cluster you use the update command, gcloud container clusters update, with the same parameters: --enable-master-authorized-networks and --master-authorized-networks with the CIDR ranges. You can then verify that the CIDR ranges were applied with the describe command, which shows the master authorized networks config with the CIDR blocks and enabled: true. You can disable it again as well, with --no-enable-master-authorized-networks. And that's it for adding authorized networks.
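A sketch of those commands, with a hypothetical cluster name and documentation-only CIDR ranges standing in for your real client ranges:

```shell
# Restrict master access to two CIDR blocks.
gcloud container clusters update my-cluster \
  --enable-master-authorized-networks \
  --master-authorized-networks=203.0.113.0/24,198.51.100.0/24

# Verify which blocks are applied.
gcloud container clusters describe my-cluster \
  --format="value(masterAuthorizedNetworksConfig)"

# Disable the restriction again.
gcloud container clusters update my-cluster \
  --no-enable-master-authorized-networks
```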
So an authorized network is, you can think of it, a way of granting access to your cluster master only from specific CIDR blocks; you can add as many CIDR blocks as you need (up to the limit), verify them, and disable the master authorized networks setting based on your needs. That's it for networking on Kubernetes Engine, guys. If you have any questions on Kubernetes networking, let me know. Otherwise, you can move on to the next lecture. Thank you.
- 2.4 Configuring and managing firewall rules
Configure and manage firewall rules, as per the syllabus. What is a firewall rule? We already saw, at a high level, what a firewall rule in a VPC is: a kind of lock on your traffic in both directions, whether egress or ingress — traffic coming into your virtual machine or traffic going out of it. Looking at the syllabus, there are five aspects that are or will be asked in your exam: target network tags and service accounts, firewall rule priorities, network protocols, egress and ingress rules, and firewall rule logs. Let's take them one by one, but before we jump in, let's understand what a firewall rule is. Firewall rules let you allow or deny traffic to and from your virtual machine instances based on a configuration you specify. While firewall rules are defined at the network level — you choose the network while creating a rule — connections are allowed or denied on a per-instance basis.
So even though a rule is defined at the network level, the traffic is allowed or denied per instance. GCP firewall rules apply not only between your instances and other networks: you can also define them between individual instances within the same network, so you can apply firewall rules within your own network as well. Firewall rules support IPv4 traffic only. Each rule's action is either allow or deny. Each rule applies to either incoming (ingress) or outgoing (egress) traffic, but not both — you cannot create a single rule that defines both ingress and egress traffic. When you create a firewall rule, you must select a VPC network, so the rule is attached to that network. GCP firewall rules are stateful: once a connection is allowed in one direction, return traffic for that connection is automatically allowed, and a tracked connection stays established as long as traffic flows (idle connections time out after about ten minutes). Because of this, a firewall rule cannot allow traffic in one direction while denying the associated return traffic — you cannot allow, say, SSH from A to B while denying the responses from B to A; that is not how a stateful firewall works. GCP firewall rules do not reassemble fragmented TCP packets. There are also implied rules. The implied allow egress rule allows traffic to all destinations via the internet gateway and has the lowest priority, 65535 — so all traffic leaving your virtual machine is allowed by default, unless you add a higher-priority firewall rule that restricts it. The implied deny ingress rule denies traffic from all sources.
So all ingress traffic coming to your virtual machine is denied by default, again at the lowest priority. You cannot remove these implied rules, but you can effectively override them by creating your own rules with higher priority, based on your network design. There are also pre-populated rules in the default network: default-allow-internal, so all internal communication is allowed — VM1 can talk to VM2 or VM3 within the same network; default-allow-ssh, again created for you by default, so SSH just works to your instances; default-allow-rdp on port 3389; and default-allow-icmp. Beyond that, there is traffic that is always allowed and traffic that is always blocked.
Always-allowed traffic includes DHCP, DNS resolution traffic, traffic to the instance metadata server, and NTP — the Network Time Protocol, which keeps the VM's clock synchronized. Always-blocked traffic includes GRE traffic, protocols other than TCP, UDP, ICMP, and IPIP, and egress traffic on TCP port 25 (SMTP) — this helps against abuse, such as someone continuously hitting your services. All of that traffic is blocked regardless of your rules. Components of a firewall rule: a priority; a direction, egress or ingress; an action, allow or deny; a target; a source (for ingress) or destination (for egress); and a protocol and port. There is one additional attribute, the enforcement status: you can enable the firewall rule right away, or disable it for some time. Let's go ahead and look at all these parameters.
Let's go ahead and look at the firewall rules in the console. As we said, some default firewall rules are created for you: default-allow-http, default-allow-https, default-allow-icmp, default-allow-internal, default-allow-rdp and default-allow-ssh, plus some custom rules I have created, such as custom-http, http-web and firewall-http. Let me give you an example and create a rule, say my-firewall-web-traffic. I can turn on the logs, which let you see who is connecting to your instances. The note in the console is very important here: this produces a large number of logs, because each and every connection gets a log entry, and it logs continuously into Stackdriver.
So you may incur a major cost if you enable it; for sensitive traffic go ahead and use it, and all other times just switch it off. Next you choose the network you want the rule to apply to; I'm leaving it as default. I can give a priority here, which goes from 0 to 65535. You provide the direction, ingress coming into your VM or egress going out from your VM, and the action, allow or deny. For the target you can specify all instances in the network, specified target tags, or a specified service account, and we are going to talk about service accounts shortly. If you use tags, you can give, say, traffic-web as a tag; whenever you attach that tag to an instance, the rule becomes applicable to it. For the source IP ranges I can give the whole world, 0.0.0.0/0, and allow TCP ports 80 and 8080. With that, the firewall rule gets created.
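The same rule can be created from the command line with `gcloud compute firewall-rules create`. As a sketch, here is a small helper that just assembles that command string from the parameters we filled in above; the flag names follow the gcloud reference, but verify them against the current documentation before relying on the exact invocation.

```python
def firewall_create_cmd(name, network="default", priority=1000,
                        direction="INGRESS", action="ALLOW",
                        rules=("tcp:80",), source_ranges=("0.0.0.0/0",),
                        target_tags=(), enable_logging=False):
    """Assemble an equivalent `gcloud compute firewall-rules create`
    command (sketch only; the string is built, not executed)."""
    parts = [
        "gcloud compute firewall-rules create", name,
        f"--network={network}",
        f"--priority={priority}",
        f"--direction={direction}",
        f"--action={action}",
        "--rules=" + ",".join(rules),
    ]
    if direction == "INGRESS" and source_ranges:
        parts.append("--source-ranges=" + ",".join(source_ranges))
    if target_tags:
        parts.append("--target-tags=" + ",".join(target_tags))
    if enable_logging:
        parts.append("--enable-logging")
    return " ".join(parts)


# The rule built in the console walkthrough: traffic-web tag, tcp:80 and
# tcp:8080, open to the world, with logging turned on.
print(firewall_create_cmd("my-firewall-web-traffic",
                          rules=("tcp:80", "tcp:8080"),
                          target_tags=("traffic-web",),
                          enable_logging=True))
```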
So the firewall rule is created. You can go ahead and edit it afterwards: you can change the logging, the priority, the ports, the IP ranges, or choose a different target altogether, but you cannot change the direction or the action, because those define what the firewall rule is. Let me just save it. So that's how you create a firewall rule, and you can create both ingress and egress rules. Now, when we say an ingress firewall rule, what exactly is that?
Both ingress and egress rules have either allow or deny as the action. What changes is the rest. An ingress rule defines a target, which is whether you want to apply the rule to all instances in the network, to instances by service account, or to instances by network tag, and a source, which is where the traffic originates. That's where you say: allow the traffic coming in from the whole world, or allow traffic only from this particular IP range or subnet, or only from instances using a particular service account or network tag. Then you specify the protocol and port. An egress rule, instead of a source, gets a destination.
With egress, your traffic is leaving the virtual machine, so you specify the destination IP addresses or ranges the traffic is headed to. The target again defines which instances the rule applies to: all instances in the network, instances by service account, or instances by network tag. If I go back to the console and look at default-allow-ssh, it says "all instances in the network", so it is applied to every instance in that network. You can change that to a network tag instead.
And then, unless an instance carries that particular network tag, the traffic will not be allowed, even though the rule is attached to, say, the default network. Okay? So those are the firewall ingress and egress rules. As for priority: as we said, a lower integer indicates a higher priority, and the default priority is 1000, which you can change. Among the rules that match a given protocol and port for the same target, the rule with the highest priority applies, and when rules for the same target have the same priority, a rule with a deny action overrides a rule with an allow action.
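The priority behaviour described above can be sketched in a few lines. This is illustrative code, not how GCP implements it: among the rules that match a packet, the lowest number wins, and a tie between allow and deny goes to deny.

```python
# Sketch of firewall priority resolution: lower number = higher priority;
# at equal priority, deny overrides allow; with no matching rule, the
# implied rule applies (deny for ingress, allow for egress).

def evaluate(matching_rules, implied="deny"):
    """matching_rules: list of (priority, action) tuples, all of which
    match the packet in question. Returns the winning action."""
    if not matching_rules:
        return implied                                # implied rule wins
    best = min(p for p, _ in matching_rules)          # lowest number wins
    actions = {a for p, a in matching_rules if p == best}
    return "deny" if "deny" in actions else "allow"


print(evaluate([(1000, "allow"), (900, "deny")]))   # deny  (900 wins)
print(evaluate([(1000, "allow"), (1000, "deny")]))  # deny  (tie -> deny)
print(evaluate([(500, "allow"), (1000, "deny")]))   # allow (500 wins)
print(evaluate([]))                                 # deny  (implied ingress)
```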
As an example, if traffic is allowed from network one to network two (I am saying "network" here just to keep the thought at a high level, because rules are actually applied to instances), you cannot have the return traffic denied from network two to network one, and you can think of the same thing for VM instances. Rules with the same priority and the same action have the same result. Now let us go ahead and understand service accounts and firewall rules, because that is another topic we have to cover. If you remember, an instance can use a service account: you grant permissions to the service account, and an application running inside the VM gets the permissions associated with it. For example, you can allow Cloud Storage bucket creation, object creation, object modification, or even object deletion to a service account, and a client inside that particular virtual machine can then do all of those things. But if the service account can only read a Cloud Storage bucket and not modify any bucket or object, then that virtual machine instance will not be able to modify anything in Cloud Storage. You can apply the same logic to firewall rules: if service account A is used by virtual machine VM1, then firewall rules that reference service account A effectively become associated with that instance. In essence, if VM1 wants to talk to VM2 and you define that communication in a rule attached to a service account, the rule applies to VM1 only when VM1 is using that particular service account.
Only then will VM1 be able to talk to VM2, and that is based on the service account. You can use service accounts in both ingress and egress rules to specify the target, and for ingress rules you can specify the source of incoming packets as the primary internal IP address of any VM in the network that uses a particular service account. The service account must be created before the firewall rule; you cannot reference a future service account in a firewall rule. A firewall rule that uses a service account applies both to existing VM instances using that service account and to any new VM the service account is later attached to: once the rule is created for a particular service account, it is applicable immediately, and if a virtual machine is created afterwards using that service account, the rule applies to it as well.
So which one to choose, a service account or a network tag? A network tag is very high level: you can attach it to one VM or move it to a different VM freely. A service account gives you stricter, more granular control, and that is an important aspect here. Management using network tags is very easy, while managing service accounts is somewhat more complex, because a service account can be used for different purposes, such as giving permissions or access to different cloud services, with firewall targeting being one additional facility on top of that. So if you want strict control, use service accounts.
If you do not need strict control, you can just go straight ahead and use network tags. A network tag is an arbitrary attribute, while a service account represents an identity associated with an instance, and that is very important. Here are some rules you need to remember while using service accounts. You cannot use target service accounts and target tags together in a firewall rule. Changing the service account of an instance requires stopping and restarting it, whereas adding or removing tags can be done while the instance is running, and this is an important distinction. Only one target service account can be specified per firewall rule, while more than one target tag can be specified in a single rule, which is another distinction. Likewise, only one source service account can be specified per ingress firewall rule,
while more than one source tag can be used in a single firewall rule, and that is the final distinction. Also note that when you identify instances by network tag, the firewall rule applies to the primary internal IP address of the instance. Now, logging. Logging can be used for different purposes, and we saw that there is a flag for it: you can enable logging for a rule's traffic or leave it off, which is the default, choosing to turn it on for sensitive traffic. You can use firewall rule logging for audit purposes, for verification, or to analyze the logs and understand how your traffic flows from one instance to another. It is an option on every firewall rule, but only for TCP and UDP traffic, not ICMP or other traffic, and you cannot enable firewall logging for the implied deny ingress and implied allow egress rules.
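The service-account and network-tag constraints above can be summed up in a small, hypothetical validator; the function and parameter names are made up for illustration, not taken from any GCP API.

```python
# Illustrative validator for the constraints above: target service
# accounts and target tags cannot be mixed in one rule, at most one
# target service account per rule, at most one source service account
# per ingress rule; multiple tags are always fine.

def validate_rule(target_service_accounts=(), target_tags=(),
                  source_service_accounts=(), source_tags=()):
    """Return a list of constraint violations (empty if the rule is ok)."""
    errors = []
    if target_service_accounts and target_tags:
        errors.append("cannot combine target service accounts and target tags")
    if len(target_service_accounts) > 1:
        errors.append("only one target service account per rule")
    if len(source_service_accounts) > 1:
        errors.append("only one source service account per ingress rule")
    return errors


print(validate_rule(target_tags=("web", "api")))  # [] -- many tags are fine
print(validate_rule(target_service_accounts=("sa-1",), target_tags=("web",)))
print(validate_rule(target_service_accounts=("sa-1", "sa-2")))
```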
Log entries are written from the perspective of the virtual machine instance, which is very important, and a connection record is created for the traffic. There are some considerations around logging limits for the type of virtual machine you are using: an f1-micro can log a maximum of 100 connections per 5-second interval; a g1-small can go up to 250; machine types with one to eight vCPUs can have 500 connections per vCPU; and machine types with more than eight vCPUs can go up to 4,000 connections, which you can think of as 500 per vCPU multiplied by eight. Even if you increase the vCPUs beyond eight, you will not get more than 4,000 logged connections. That's it for firewall rules. If you have any questions on firewall rules, whether theory or practical, let me know; otherwise you can move on to the next lecture. Thank you very much.