
Pass VMware 2V0-21.20 Exam in First Attempt Guaranteed!

2V0-21.20 Exam - Verified By Experts
2V0-21.20 Premium File

$59.99
$65.99
  • Premium File 109 Questions & Answers. Last Update: Nov 13, 2024

What's Included:

  • Latest Questions
  • 100% Accurate Answers
  • Fast Exam Updates
 
10 downloads in the last 7 days
Screenshots: 2V0-21.20 Exam, PrepAway 2V0-21.20 Training Course, PrepAway 2V0-21.20 Study Guide

Last Week Results!

83% of students found the test questions almost the same
10 Customers Passed VMware 2V0-21.20 Exam
Average Score In Actual Exam At Testing Centre
Questions came word for word from this dump
Free VCE Files
Exam Info
Download Free VMware 2V0-21.20 Exam Dumps, Practice Test
VMware 2V0-21.20 Practice Test Questions, VMware 2V0-21.20 Exam dumps

All VMware 2V0-21.20 certification exam dumps, study guides, and training courses are prepared by industry experts. PrepAway's ETE files provide the 2V0-21.20 Professional VMware vSphere 7.x practice test questions and answers, exam dumps, study guide, and training courses to help you study and pass hassle-free!

Managing Storage in vSphere 7

11. Independent Hardware iSCSI and ESXi 7

In this video, I'll explain the independent hardware iSCSI initiator. This is a solution that we can utilise within our ESXi hosts to provide connectivity to an iSCSI storage resource over an Ethernet network. So let's take a look at how this works. We have a Windows virtual machine on the left with a virtual SCSI controller. And as that virtual machine generates storage commands, they flow out of the virtual storage controller and hit a physical storage adapter.

Now, in this case, the physical storage adapter is an independent hardware iSCSI initiator that has built-in Ethernet ports. So we don't require any sort of VMkernel port in this situation; the independent hardware iSCSI initiator is a complete hardware solution. We don't need to configure a software iSCSI initiator, and we don't need VMkernel ports. It's got its own built-in Ethernet ports that we can use to connect directly to the storage network. And the big benefit of the independent hardware iSCSI initiator is that it relieves a lot of the storage workload from the ESXi host. It makes many of these storage operation tasks independent of the current performance of the CPUs of the ESXi host. Now, my storage adapter has its own independent hardware that handles that workload.
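
If you want to confirm what kind of initiator a host is using, the vSphere API exposes the host's storage adapters. Below is a minimal sketch using pyVmomi (VMware's Python SDK); the vCenter address and credentials are placeholders, and this is only one possible way to enumerate the adapters.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab-only connection details (placeholders) and a context that skips cert checks.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Walk every host and print its iSCSI host bus adapters. An independent hardware
# initiator shows up as an iSCSI HBA with its own model/driver, while the software
# initiator typically appears with the "iscsi_vmk" driver.
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    for hba in host.config.storageDevice.hostBusAdapter:
        if isinstance(hba, vim.host.InternetScsiHba):
            print(host.name, hba.device, hba.model, hba.driver)
view.Destroy()
Disconnect(si)
```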

12. Review: Introduction to Storage DRS (SDRS)

In this video, we'll learn about Storage Distributed Resource Scheduler, or Storage DRS. Storage DRS is based on datastore clustering. So we've already learned a little bit about ESXi host clusters and how they can be used for things like high availability and DRS. A datastore cluster is very similar. It's just a logical grouping of data stores. And these data stores may contain files for many virtual machines. And as we add virtual machines, the data stores may potentially become unbalanced. So, for example, maybe some really storage-intensive workloads have been placed on Data Store One. Or maybe Data Store Two is running low on space. Storage DRS can automate load balancing across these data stores, and it uses Storage vMotion to do it.
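
To make the idea concrete, here is a small, purely illustrative Python sketch of the kind of check Storage DRS performs. It is not VMware's actual algorithm; the threshold values simply mirror the defaults mentioned later in this course (80% space utilisation, 15 ms I/O latency).

```python
# A simplified, illustrative model of the checks Storage DRS performs (not
# VMware's actual algorithm): flag datastores whose space utilisation or I/O
# latency crosses a threshold, making them candidates for Storage vMotion moves.
SPACE_THRESHOLD_PCT = 80     # default space-utilisation trigger
LATENCY_THRESHOLD_MS = 15    # default I/O latency trigger

datastores = [
    {"name": "Datastore1", "capacity_gb": 500, "used_gb": 430, "latency_ms": 22},
    {"name": "Datastore2", "capacity_gb": 500, "used_gb": 200, "latency_ms": 4},
]

for ds in datastores:
    used_pct = 100 * ds["used_gb"] / ds["capacity_gb"]
    if used_pct > SPACE_THRESHOLD_PCT or ds["latency_ms"] > LATENCY_THRESHOLD_MS:
        print(f"{ds['name']} is a candidate source for a Storage vMotion "
              f"({used_pct:.0f}% full, {ds['latency_ms']} ms latency)")
```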

So just as a quick refresher here, we see a virtual machine running on ESXi host 1. If I want to carry out a Storage vMotion, what I'm doing is taking all of the files that belong to that virtual machine and migrating them from one data store to another with no downtime. So the virtual machine continues to run on the same host, but all of the underlying files are being moved from one storage location to another. That's what Storage vMotion is, and Storage vMotion is the underlying technology behind Storage DRS. So what Storage DRS is going to do is automatically move virtual machines around for the purposes of balancing capacity usage and latency. So, for example, if Data Store One is getting really full, or if Data Store One has a lot of storage-intensive virtual machines that are generating a lot of traffic, it may start moving virtual machines to other data stores to even out that workload. Now, there may be certain circumstances in which this can create some problems, right? So, for example, here we see two virtual machines: domain controller one and domain controller two.
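
For reference, a Storage vMotion can also be driven through the API. The fragment below is a minimal sketch with pyVmomi, where `vm` and `target_ds` stand for a vim.VirtualMachine and a vim.Datastore that were looked up beforehand (the lookups are omitted here).

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

# vm and target_ds are placeholders: objects looked up elsewhere in the script.
# Relocate only the VM's files; because no host is specified, the VM keeps
# running on its current host while its disks move to the target datastore.
spec = vim.vm.RelocateSpec()
spec.datastore = target_ds
WaitForTask(vm.RelocateVM_Task(spec=spec))
```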

And these are redundant. We don't want both of our domain controllers to be down at any time, no matter what. But if we turn over control to Storage DRS, it could potentially move those domain controllers to the same data store. And if that data store fails, we've now lost both of our domain controllers. So we can use something called an anti-affinity rule to ensure that those domain controllers are kept separate. We can even set up affinity or anti-affinity rules for individual virtual machine disks to keep them on separate data stores or to keep them together. But you probably don't have to do a whole lot of that. By default, if you create a virtual machine with multiple disks, all disks will be stored on the same data store.
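
Here's a toy Python check, just to illustrate what an anti-affinity rule guarantees; it is only a model of the constraint, not the vSphere API.

```python
# Illustrative only: verify that a proposed placement (VM name -> datastore name)
# still honours an anti-affinity rule. The VM and datastore names are made up.
anti_affinity_groups = [{"dc01", "dc02"}]   # keep these VMs on different datastores

placement = {"dc01": "Datastore1", "dc02": "Datastore1", "app01": "Datastore2"}

for group in anti_affinity_groups:
    used = {placement[vm] for vm in group}
    if len(used) < len(group):
        print(f"Rule violated: {sorted(group)} share datastore(s) {sorted(used)}")
```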

So one of the handy things about creating a datastore cluster is that you can put a data store in maintenance mode. So let's say the data store on the far left is on one storage device, and the other two data stores are on other storage devices. And maybe I need to take down a storage device, like maybe a Fibre Channel storage array, to do some sort of patching. Well, I can put that data store in maintenance mode. And what Storage DRS will do is automatically evacuate that data store for me with no service interruption. So now I can go ahead and take that data store on the left down. I can reboot my Fibre Channel storage array, do whatever maintenance I need to do, and when it comes back, that storage load will be rebalanced according to Storage DRS recommendations.
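
The same operation is exposed programmatically. The one-liner below is a sketch assuming pyVmomi, where `datastore` is a vim.Datastore in an SDRS-enabled datastore cluster that was looked up elsewhere; the shape of the returned object is not shown because it depends on the SDK version.

```python
# datastore is a placeholder: a vim.Datastore looked up elsewhere.
# Ask Storage DRS to evacuate the datastore; the call returns an object that
# carries the maintenance-mode task started on our behalf.
result = datastore.DatastoreEnterMaintenanceMode()
print(result)
```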

So it definitely helps with things like maintenance. And the first step is to create a datastore cluster. Now, you have to bear in mind that virtual machines are going to be moving around from one data store to another. So you want data stores that are going to act similarly, and that means data stores with similar performance characteristics and data stores with similar policies. For example, if one data store is replicated, we don't want a virtual machine automatically moving off of it to a data store that is not replicated. We can't mix NFS and VMFS data stores in a cluster, and we want to make sure that any shared storage within that cluster is available to as many hosts as possible. Now, when we're setting up a Storage DRS cluster, it's very similar to a DRS cluster for our hosts in that we can choose the level of automation. In manual mode, when I create a Storage DRS cluster, it's not going to do anything automatically. What it's going to do is simply provide me with recommendations.
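
Below is a rough pyVmomi-based sketch of those two sanity checks (same filesystem type, visible to a common set of hosts); `candidates` is assumed to be a list of vim.Datastore objects looked up elsewhere.

```python
# candidates: a list of vim.Datastore objects (looked up elsewhere, placeholder).
# Don't mix VMFS and NFS datastores in the same datastore cluster.
types = {ds.summary.type for ds in candidates}   # e.g. {"VMFS"} or {"NFS"}
assert len(types) == 1, "candidate datastores use more than one filesystem type"

# Prefer datastores that every host can reach, so SDRS can move VMs freely.
host_sets = [{mount.key for mount in ds.host} for ds in candidates]
common_hosts = set.intersection(*host_sets)
print(f"{len(common_hosts)} host(s) can see every candidate datastore")
```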

Say this virtual machine should move from this data store to that data store. It's going to give me the option of deciding whether or not I want to carry out those recommendations. In partially automated mode, when we create a new virtual machine, we'll create it on the cluster. When we create a virtual disk, for example, we'll just pick the cluster, and in partially automated mode, Storage DRS will pick the optimal data store for that virtual disk. So in partially automated mode, we're kind of halfway there. Storage DRS isn't going to automatically move anything anywhere. But when we're creating a new virtual machine or a new virtual disk, Storage DRS will decide where to put it. And then finally, we have a fully automated mode where our Storage DRS cluster will automatically use Storage vMotion to migrate virtual machines from one data store to another.

Again, that's for the purposes of load balancing capacity and latency. So in this lesson, we gave you a very concise review of Storage DRS. We can associate multiple data stores to create a datastore cluster, and then we can monitor space usage and I/O latency across that datastore cluster and carry out recommendations, very similar to the DRS that we use with our hosts. We can carry out these recommendations and use Storage vMotion to migrate virtual machines around. We can always add more data stores to a datastore cluster as we need to. And again, the underlying mechanism for these moves is Storage vMotion. And we can also configure affinity or anti-affinity rules to keep virtual machines either together on the same data store or, using an anti-affinity rule, apart on separate data stores.

13. Demo: Create a Storage DRS (SDRS) Cluster in vSphere 7

So let's go to the storage area in the vSphere client. And you can see here that I have three data stores. Now, when I enable Storage DRS, what I have to bear in mind is that virtual machines can potentially be migrated from any of these data stores to any of the other data stores in the cluster. So I want to choose three data stores that are essentially interchangeable. So if one of these data stores is replicated and the other is not, they're not a good fit for a Storage DRS cluster. If one of them is a really high-performance SSD and the other is built on much slower 7,200 RPM SATA disks, then again, it is not a good fit for Storage DRS. I want data stores that perform the same, that have the same underlying features, that are essentially interchangeable, and that are as close to identical as they possibly can be. So I've got three data stores here, and they are very similar from a performance perspective. None of them are replicated. And so these are great candidates for a Storage DRS cluster.

So I'm just going to right-click my training datacenter, and under Storage, I'm going to create a new datastore cluster. I'm just going to call it SDRS Demo, and I'm going to turn on Storage DRS. And at the moment, I'm going to choose manual mode. So with Storage DRS in manual mode, what it's essentially going to do is analyse your data stores. It will determine if there is an imbalance between the data stores from a space consumption perspective. So it will analyse how much space is available on these data stores, and if one of them is really getting full, can we benefit from migrating virtual machines to a different data store? The other thing that Storage DRS will look at is I/O imbalance. So if I have a bunch of really storage-intensive virtual machines on data store one, and data store two contains a bunch of VMs that don't really use a lot of storage throughput, we may be able to get a performance benefit by moving some VMs around and balancing out the amount of I/O on those three data stores. So that's what Storage DRS is going to look for, right?
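
The same steps can be scripted. Below is a rough pyVmomi sketch of creating the datastore cluster and enabling Storage DRS on it; `si` is an existing connection and `dc` is a vim.Datacenter looked up earlier, and the Storage DRS spec property names are my best reading of the SDK, so treat them as assumptions to verify against the API reference.

```python
from pyVmomi import vim

# si and dc are placeholders: an existing connection and a vim.Datacenter.
# Create the datastore cluster (a StoragePod) inside the datacenter's datastore folder.
pod = dc.datastoreFolder.CreateStoragePod(name="SDRS-Demo")

# Enable Storage DRS in manual mode; "automated" would be fully automated mode.
pod_cfg = vim.storageDrs.PodConfigSpec()
pod_cfg.enabled = True
pod_cfg.defaultVmBehavior = "manual"

spec = vim.storageDrs.ConfigSpec()
spec.podConfigSpec = pod_cfg

srm = si.RetrieveContent().storageResourceManager
srm.ConfigureStorageDrsForPod_Task(pod=pod, spec=spec, modify=True)
```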

It's going to look for data stores that are getting filled up and move VMs to prevent that, and it's going to look for performance imbalances. If one data store is experiencing a lot of storage input and output activity, and we can make a performance improvement by migrating VMs to a different data store, it will analyse that as well. And in manual mode, what it's going to do is simply give me a set of recommendations. It's going to tell me what I should do to improve the balance of storage capacity and to improve the overall performance. But it's not going to automatically do anything. If I enable fully automated mode, then I'm basically turning over control to Storage DRS. I'm telling Storage DRS, "Hey, if you can make a performance improvement by migrating a VM, go ahead and do it. Use Storage vMotion to move virtual machines around however you see fit."

And so, from a space perspective, I can choose whether I want to do it in manual mode or fully automated. And space is really the most critical part of this, because you definitely do not want to have a full data store. If my data store gets filled up and I have, let's say, 50 virtual machines stored on it, those virtual machines are going down. They're not going to continue to function. So that's one of the really great things about Storage DRS: if a data store gets to 80% full, it'll start moving VMs around to prevent that data store from filling up. And then, for the I/O balance automation level, I'm going to choose fully automated there as well. And so if a performance improvement can be made by migrating VMs from one data store to another and evening out the storage I/O across those data stores, it can now automatically move virtual machines to accomplish that. The next setting is rule enforcement.

If I have created anti-affinity or affinity rules and specified that certain VMs need to run on different data stores, Storage DRS can enforce them. For example, let's say I have two domain controllers. I may want to make sure that both of those domain controllers are not stored on the same data store, like, for example, iSCSI Datastore 1. If both domain controllers are on that data store and that data store fails, I've now lost both of my domain controllers. So anti-affinity and affinity rules are used to keep virtual machines apart or keep them together. And so I'm going to allow Storage DRS to move virtual machines around to make sure that those rules stay enforced. If I have defined storage policies, for example, and I want certain VMs stored on data stores with certain performance characteristics, that's something we can do with a storage policy.

You can say, "Oh, this VM needs to be stored on a data store that has replication enabled." Well, if the VM is stored on a data store that does not comply with that storage policy, Storage DRS might be able to fix that. Storage DRS may be able to migrate that virtual machine to a data store that actually meets the requirements of that storage policy. And then finally, let's say we want to put a data store into maintenance mode. What is the VM evacuation automation level? Is Storage DRS going to automatically take all the VMs on that data store and spread them out across the other data stores within that datastore cluster? Or do I want to do it manually? So you can see, I've set all of these to be fully automated. What I've basically done is override the cluster-level settings.

So I really didn't need to do this. I could just make these all use cluster settings and have it fully automated. But maybe I want to say, "Hey, you know what? I don't want to automatically move VMs around to enforce my storage policies. Let's not do that automatically." That particular policy enforcement is going to be in manual mode. Alright? So let's click Next here. And this first setting here is basically: do we want to monitor storage I/O? Do we want to monitor to determine if storage latency is occurring and can potentially be improved? So if I uncheck this, we're no longer watching storage I/O metrics for Storage DRS recommendations. So I'm going to leave this checked, and I'm going to specify a threshold below which load-balancing moves are not considered. So, for example, it's currently set to 15 milliseconds. I'm going to set it to 20. If I have a virtual machine that is operating with less than 20 milliseconds of latency to storage, it's not going to be considered for Storage DRS moves.

This is basically a way for me to say, "You know what? I expect to get 20 milliseconds or less of latency on my data stores. If my virtual machine is experiencing something above 20 milliseconds of latency, now it's time for Storage DRS to start thinking about making moves." And then there's the last setting here: when a data store reaches 80% full, that's when Storage DRS starts making recommendations or actually performing migrations. So I could adjust this down if I'd like and say, "You know what, if a data store gets 75% full, we're going to start moving VMs around." So I can kind of tweak my settings here and get them configured exactly the way I feel is appropriate in my environment. I can also say, instead of using a percentage of space utilization, if there's less than 50 gigs free on a data store, start migrating virtual machines. So you can choose which one of those settings is ideal for you. And now I just need to pick the data stores that are going to be available in this SDRS cluster. So first I'm going to choose the hosts that are going to participate here, and then I'll click Next, and then I can see all of the data stores that are connected to those hosts.
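
Just to restate those knobs in one place, here is an illustrative Python function (again, a model, not VMware's implementation) that combines the demo's settings: a 20 ms latency floor, and either an 80% utilisation trigger or a minimum-free-space trigger of 50 GB.

```python
# Illustrative only: rebalancing is considered when latency is at or above 20 ms,
# or when the space trigger fires, which can be either "80% utilised" or
# "less than 50 GB free" depending on which rule you choose.
def needs_rebalance(ds, use_percent_rule=True,
                    space_pct=80, min_free_gb=50, latency_ms=20):
    used_pct = 100 * ds["used_gb"] / ds["capacity_gb"]
    free_gb = ds["capacity_gb"] - ds["used_gb"]
    space_trigger = used_pct >= space_pct if use_percent_rule else free_gb <= min_free_gb
    return space_trigger or ds["latency_ms"] >= latency_ms

print(needs_rebalance({"capacity_gb": 400, "used_gb": 330, "latency_ms": 8}))  # True: 82% full
```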

Now, by default, it's only going to display data stores that are connected to all of these hosts. But if I want to, I could show all data stores, and then it's even going to show local data stores. I don't want to pick those local data stores for my SDRS cluster because they're going to really limit the ability of SDRS to migrate VMs around. So I'm just going to stick with my three shared data stores here that are available to all three of those hosts. I'll click Next, and I will click Finish, and I've successfully created a Storage DRS cluster. Now, let's take a look at how things function a little bit differently. I've got this virtual machine here. I'm going to right-click this virtual machine, and I'm going to choose Migrate.

And I want to move this virtual machine to a different storage device. I want to move it to a different data store. I'm going to click Next, and this is going to show me all of the data stores that are compatible with this virtual machine, the ones this virtual machine can actually reach. And notice what it's showing me here. It's not showing me iSCSI Datastore 1, iSCSI Datastore 2, or iSCSI Datastore 3. It's not showing me any of those. It's simply showing me the SDRS Demo Storage DRS cluster. So now I don't pick an individual data store anymore. What's going to happen is that I'm just going to pick the cluster. And when this storage migration completes, this virtual machine is actually going to be stored on one of the three data stores in that cluster. I don't pick which one; Storage DRS is going to figure that out for me. So if I click here on virtual machines, I can see, for each of these data stores, which virtual machines are located on it.
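
Programmatically, the equivalent is asking Storage DRS for a placement recommendation against the datastore cluster and then applying it. The sketch below assumes pyVmomi, with `si`, `vm`, and `pod` already looked up; the spec field names are my best guess from the SDK and may need checking against the API reference.

```python
from pyVmomi import vim

# si, vm and pod are placeholders: a connection, the VM to move, and the StoragePod.
# Build a placement request: relocate this VM somewhere inside the datastore cluster.
placement = vim.storageDrs.StoragePlacementSpec()
placement.type = "relocate"
placement.vm = vm
placement.podSelectionSpec = vim.storageDrs.PodSelectionSpec(storagePod=pod)

srm = si.RetrieveContent().storageResourceManager
result = srm.RecommendDatastores(storageSpec=placement)

# Apply the first recommendation; Storage DRS chooses the actual target datastore.
srm.ApplyStorageDrsRecommendation_Task(key=[result.recommendations[0].key])
```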

And so when this migration completes, my VM will be present on one of these three data stores. But I don't choose which one; Storage DRS is going to perform the placement of that virtual machine onto one of the data stores in this cluster. And here we are at the summary screen. So I can see things like the total amount of used and free capacity on this datastore cluster. I can see the basics of my Storage DRS settings here. I can see if there are any faults with my Storage DRS cluster that I need to address. And I'm having a problem here: I can't do any load balancing because there's not enough information yet. That's because I don't have anything running on that Storage DRS cluster. So that's how we can take multiple data stores, place them in a Storage DRS cluster, and use Storage DRS to load balance based on available capacity and I/O latency.

14. vSAN vs. Traditional Storage Arrays

And so let's start with the very basics of this. Here we see a diagram where, on the left and in purple, we've got our virtual machine. And just like memory, just like a CPU, the virtual machine doesn't actually have physical storage hardware. It doesn't have physical CPUs, and it doesn't have physical memory. It's accessing a shared resource. And for storage, traditionally, we've had something called a VMFS data store or maybe an NFS data store.

So from the perspective of my Windows virtual machine here on the left, this VM needs to see storage hardware. We'll give it a virtual SCSI controller, which will basically trick Windows into thinking that it actually has a physical storage adapter by giving it a driver for a virtual SCSI controller. So here we see the virtual SCSI controller that's being presented to our Windows operating system. Now we've successfully fooled Windows into thinking it actually has physical storage hardware. And so, as the operating system needs to read data from disk and write data to disk, Windows is going to issue SCSI storage commands as it attempts to read and write data to and from disk.

So here comes the SCSI command. And we can see here that it's flowing out of the virtual machine. And as it flows out of the virtual machine, it's going to follow this path until it hits the hypervisor. The hypervisor is the first spot it hits, and the hypervisor is ESXi. This is the actual hypervisor software running on a physical host. The host has some kind of physical storage adapter. Maybe it's a Fibre Channel HBA, maybe it's an Ethernet port where iSCSI traffic can flow; who knows? But basically, the storage command leaves the VM, it gets prepared for transmission over the network, and then physical transmission over some sort of network happens. It could be Ethernet; it could be Fibre Channel. It doesn't really matter either way. It's being transmitted across some physical network to a dedicated physical storage array with LUNs. And on one of those LUNs, we've created a data store. And this is where the virtual machine's files reside.

The VMDK, the VMX, all the files that make up my virtual machine exist here on this data store. So essentially, what we end up with is a physical storage array with a physical LUN. We create a VMFS data store on it. And then, as virtual machines need to read from or write to disk, the SCSI commands are sent across the storage network so that they can be processed against the storage on that actual physical storage system. And one of the key capabilities this gives us is shared storage. Shared storage is required for things like high availability, vMotion, and DRS. So, for example, in this slide we see a virtual machine, and my virtual machine is running on host one. There's my VM. So the VM is running on host one. And maybe I need to take down host one for some reason. Maybe I need to do a memory upgrade on it. So I want to take this virtual machine, which is currently running, and migrate it to another host while it is still running.

I don't want any downtime for my virtual machine, so I'm going to use vMotion. But the virtual machine still needs to be able to access its set of files. This virtual machine is leveraging a VMX file, a VMDK, a swap file, all these important files that it needs over here on this data store. Those files still need to be available once the virtual machine is vMotioned to a different ESXi host. But fortunately, in this case, we can see that both ESXi hosts have a physical connection to the storage system. They can both access this data store. So regardless of where the VM is running, its files are accessible from ESXi host 1, and its files are also accessible from ESXi host 2. This enables us to take the virtual machine and migrate it to host 2. And at that point, the VM can still access important files like its virtual disk. So shared storage is essential, and it's required for features like high availability, fault tolerance, and DRS. We've got to have shared storage for all of those features. And this is how we've traditionally done it: we take a bunch of hosts, hook them up to a storage network, and we can reach our storage array from that storage network.

Okay. So how about vSAN? How is vSAN different from a traditional VMFS data store? Well, in this case, a virtual machine is broken down into a series of objects, and those objects are stored on the local physical storage of ESXi hosts. So, for example, in our diagram here, we see VM One, and VM One is currently running on host ESXi 1. VM One is being broken down into a series of objects. One of those objects is the VMDK. There are other objects like the VMX file and things like that. Let's just focus on the VMDK, though. Here's exactly what's going to happen: rather than creating a VMFS data store on a physical storage device, we are instead going to locate this VM One VMDK object on the local physical storage of another ESXi host. So when VM One goes to read and write data from its virtual disk, those reads and writes are captured, and a VMkernel port is used to push them over a physical network, where eventually those storage operations hit a VMkernel port on the other host and reach their destination object.

That's how vSAN works. We're actually leveraging the local physical storage of these ESXi hosts to create the illusion of shared storage. And by the way, all of those changes, everything that happens on that virtual disk, aren't just happening on one host. They're being mirrored to a second host over here. That way, just in case ESXi 2 fails, we don't lose all of the data for our virtual machine. So the reality is that, underneath the surface, the solutions really aren't that different. After all, what I have in both cases is a virtual machine sending storage commands. Those storage commands are interpreted by the hypervisor and sent over a physical network until they reach the actual files that make up that particular virtual machine.
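
The placement rule being described here, mirroring an object's copies across different hosts so that one host failure can't take out both copies, can be illustrated with a toy Python sketch. This is only a model of the constraint, not vSAN's actual placement logic, and the host names are made up.

```python
# A toy model of the constraint (not vSAN's placement algorithm): keep the mirrored
# copies of an object on different hosts so one host failure never loses the data.
hosts = ["esxi1", "esxi2", "esxi3"]

def place_mirrors(obj_name, failures_to_tolerate=1):
    copies = failures_to_tolerate + 1
    if copies > len(hosts):
        raise ValueError("not enough hosts to satisfy the requested tolerance")
    placement = hosts[:copies]   # naive spread; real vSAN also weighs capacity, etc.
    return {obj_name: placement}

print(place_mirrors("vm1.vmdk"))  # {'vm1.vmdk': ['esxi1', 'esxi2']}
```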

So the end result is: does Virtual Machine One even know whether it's on vSAN? Does it know whether it's on VMFS? Nope. It has no clue. Now, the big difference here is that, rather than purchasing a physical, dedicated storage array, I'm just putting physical storage inside of my ESXi hosts in the form of what we call disk groups. But because I've got this network interconnecting all of my hosts, I have all the features of shared storage. So, for example, if I want to take virtual machine one here, do a vMotion, and migrate it to host ESXi 3, well, guess what? It can still reach this VMDK file that's on ESXi 2. So vSAN is great in that regard because it supports all of these features that require shared storage, like high availability, fault tolerance, and DRS.

Those are all supported by vSAN. So now I don't have to actually purchase a dedicated physical storage array. I can just bake storage directly into my ESXi hosts, which are in the form of a cluster. And so the VMs don't know the difference. The VMs are accessing their storage through a virtual storage controller. They can't tell whether they're running on a physical storage array, whether they're running on vSAN, or whether they just have regular old local hard disks. The operating system has no clue. With a storage array, there is typically some sort of storage network between the ESXi host and the storage hardware.

Now, that might not be the case if you're using local storage on your ESXi host, but most of the time, people don't really do that because they require the features of shared storage. So, typically, shared storage is leveraged so that we can access all these great vSphere features. So what vSAN does is leverage the underlying physical capacity of your ESXi hosts and create one big shared data store that all of those virtual machines can access.

VMware 2V0-21.20 practice test questions and answers, training course, and study guide are uploaded in ETE file format by real users. These 2V0-21.20 Professional VMware vSphere 7.x certification exam dumps and practice test questions and answers are here to help students study and pass.

Get Unlimited Access to All Premium Files Details
Why customers love us?
93% Career Advancement Reports
92% experienced career promotions, with an average salary increase of 53%
93% mentioned that the mock exams were as beneficial as the real tests
97% would recommend PrepAway to their colleagues
What do our customers say?

The resources provided for the VMware certification exam were exceptional. The exam dumps and video courses offered clear and concise explanations of each topic. I felt thoroughly prepared for the 2V0-21.20 test and passed with ease.

Studying for the VMware certification exam was a breeze with the comprehensive materials from this site. The detailed study guides and accurate exam dumps helped me understand every concept. I aced the 2V0-21.20 exam on my first try!

I was impressed with the quality of the 2V0-21.20 preparation materials for the VMware certification exam. The video courses were engaging, and the study guides covered all the essential topics. These resources made a significant difference in my study routine and overall performance. I went into the exam feeling confident and well-prepared.

The 2V0-21.20 materials for the VMware certification exam were invaluable. They provided detailed, concise explanations for each topic, helping me grasp the entire syllabus. After studying with these resources, I was able to tackle the final test questions confidently and successfully.

Thanks to the comprehensive study guides and video courses, I aced the 2V0-21.20 exam. The exam dumps were spot on and helped me understand the types of questions to expect. The certification exam was much less intimidating thanks to their excellent prep materials. So, I highly recommend their services for anyone preparing for this certification exam.

Achieving my VMware certification was a seamless experience. The detailed study guide and practice questions ensured I was fully prepared for 2V0-21.20. The customer support was responsive and helpful throughout my journey. Highly recommend their services for anyone preparing for their certification test.

I couldn't be happier with my certification results! The study materials were comprehensive and easy to understand, making my preparation for the 2V0-21.20 stress-free. Using these resources, I was able to pass my exam on the first attempt. They are a must-have for anyone serious about advancing their career.

The practice exams were incredibly helpful in familiarizing me with the actual test format. I felt confident and well-prepared going into my 2V0-21.20 certification exam. The support and guidance provided were top-notch. I couldn't have obtained my VMware certification without these amazing tools!

The materials provided for the 2V0-21.20 were comprehensive and very well-structured. The practice tests were particularly useful in building my confidence and understanding the exam format. After using these materials, I felt well-prepared and was able to solve all the questions on the final test with ease. Passing the certification exam was a huge relief! I feel much more competent in my role. Thank you!

The certification prep was excellent. The content was up-to-date and aligned perfectly with the exam requirements. I appreciated the clear explanations and real-world examples that made complex topics easier to grasp. I passed 2V0-21.20 successfully. It was a game-changer for my career in IT!