After three weeks of planning to post this, I finally get to put the finishing touches on it and get it posted.
It’s midnight here and I’m talking with my wife on the phone after wrapping up my install for the day and my professional services for one of our clients this evening… Strangely enough, most of today revolved around networking. This evening it was network switching and routing on physical switches; earlier today it was working with distributed virtual switches.
Those distributed virtual switches (vDS) are the topic of today’s post. Today I was at a site preparing for an SRM deployment next week. It’s a moderate-sized site (for what I normally work with, anyway) with about 10 ESXi hosts at each site and a high-speed link between production and DR. The way the site was originally set up, the DR and production ESXi hosts were each in their own datacenter within the same vCenter. They use a vDS for all of the networks in the environment.
My task today was to split the two sites into separate vCenters. Yesterday we updated all of the hosts to ESXi 5.0, spent a lot of time watching paint dry, and talked about the new features of vSphere 5. For the most part, splitting a site and standing up a vCenter at both production and DR sounds like a pretty easy task. There are some caveats, though, that I want to share with the rest of you.
We started by building a new vCenter in the DR cluster. That part is relatively straightforward: get all the pieces installed and operating, create a datacenter and a cluster, and you’re ready to add hosts. We then put one of the ESXi hosts from the DR cluster into maintenance mode, dropped it out of the pre-existing production vCenter, and added it to our new DR vCenter. Then we recreated the dvSwitch in our new datacenter. All of this is still fairly mundane. One of the engineers I’m working with hasn’t had a lot of exposure to VMware, so we spent some time doing things manually instead of with scripts. (It’s the important knowledge transfer customers are after.)
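If you ever do want to script that host move instead of clicking through it, here is a rough pyVmomi sketch of the same idea. This is not what we ran on site (we did it by hand, as I said), and the vCenter names, host name, cluster name, and credentials are all placeholders for whatever your environment uses.

```python
# Rough pyVmomi sketch of the host move: maintenance mode in the old vCenter,
# remove the host, then add it to the cluster in the new DR vCenter.
# All names and credentials below are placeholders.
import ssl
import time
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_obj(content, vimtype, name):
    """Return the first inventory object of the given type with this name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, vimtype, True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.DestroyView()

def wait(task):
    """Poll a task until it leaves the queued/running states."""
    while task.info.state in (vim.TaskInfo.State.queued, vim.TaskInfo.State.running):
        time.sleep(2)
    return task.info.state

ctx = ssl._create_unverified_context()

# 1. Old (production) vCenter: maintenance mode, disconnect, remove the host.
prod_si = SmartConnect(host="vcenter-prod.example.com", user="administrator",
                       pwd="secret", sslContext=ctx)
prod = prod_si.RetrieveContent()
esx = find_obj(prod, [vim.HostSystem], "esxi-dr-01.example.com")
wait(esx.EnterMaintenanceMode_Task(timeout=0))
wait(esx.DisconnectHost_Task())
wait(esx.Destroy_Task())          # removes the host from the old inventory

# 2. New DR vCenter: add the host to the freshly built cluster.
dr_si = SmartConnect(host="vcenter-dr.example.com", user="administrator",
                     pwd="secret", sslContext=ctx)
dr = dr_si.RetrieveContent()
cluster = find_obj(dr, [vim.ClusterComputeResource], "DR-Cluster")
spec = vim.host.ConnectSpec(hostName="esxi-dr-01.example.com",
                            userName="root", password="secret", force=True)
wait(cluster.AddHost_Task(spec=spec, asConnected=True))

Disconnect(prod_si)
Disconnect(dr_si)
```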
All of this goes well, and we have some hosts in our cluster. It’s time to bring over the vCenter we created earlier in the day. If you were doing this on standard vSwitches only, you would just shut down the VM, remove it from inventory, browse the datastore from the new host, register the VMX back into inventory, and bring it back up. No real issues there. Those who have spent time in datastores know this routine well, I’m sure (if not, let me know and I’ll blog about it).
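For the curious, the datastore-browser “Add to Inventory” dance can also be scripted. Here is a minimal pyVmomi sketch of registering a .vmx back into inventory; the vCenter name, datastore path, and credentials are made-up placeholders.

```python
# Minimal pyVmomi sketch of the "browse the datastore and re-register" move.
# The datastore path, names, and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter-dr.example.com", user="administrator",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

dc = content.rootFolder.childEntity[0]        # first datacenter
cluster = dc.hostFolder.childEntity[0]        # first cluster in it
host = cluster.host[0]                        # any connected host will do

# RegisterVM_Task is the API behind the datastore browser's "Add to Inventory".
dc.vmFolder.RegisterVM_Task(
    path="[DR-Datastore01] vcenter-dr/vcenter-dr.vmx",
    asTemplate=False,
    pool=cluster.resourcePool,
    host=host)

Disconnect(si)
```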
That won’t quite work with vDSs and a vCenter… How do you move it across when the vDS lives in a different vCenter? Let’s start with what happens when you shut down your DR vCenter: the vDS port groups disappear from the list of networks you can plug VMs into. Why aren’t the vDS port groups listed for your network adapters, you may ask? Because the vDS is a construct of the vCenter, backed by hidden proxy switches on the ESXi hosts. This is why the vDS keeps passing traffic but you can’t change networking while the vCenter is offline.
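If you want to see that hidden piece for yourself, you can connect straight to an ESXi host (with vCenter down) and look at its proxy switch. A quick pyVmomi sketch, with the host name and credentials as placeholders:

```python
# Connect directly to an ESXi host and list the hidden proxy switches that
# back each vDS. Host name and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi-dr-01.example.com", user="root",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Connected directly to a host, the inventory is a single "ha-datacenter".
host = content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

for proxy in host.config.network.proxySwitch:
    print("vDS proxy switch:", proxy.dvsName, "uplink keys:", proxy.pnic)

Disconnect(si)
```

The proxy switch is still there and still forwarding, which is exactly why your VMs stay on the network even though you can no longer edit their port group assignments.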
If you try to move the vCenter VM from one cluster to the other, you will probably see a message about being unable to find any valid network adapters when you click the drop-down for the VM’s network adapter. What’s an IT director supposed to do? How do you get your vCenter into the cluster you just built?
To start, work from the vCenter where everything is still functioning (in my case the production side, which still had both clusters). Pick one of the ESXi hosts that’s already been added to the DR cluster. On that host, drop one of the network adapters (you built for redundancy, right?) out of the vDS carrying your management network, and use it to create a standard vSwitch. Call it temp and give it the vmnic you just pulled off the management vDS. Now shut down the DR vCenter VM, which is still running on the production side, and move it over to the DR ESXi host you just prepared. Edit the VM’s settings and hook it to the temp vSwitch, then power the DR vCenter back on. Once it’s up, edit the settings one last time and flip the network adapter back over to the management port group on the vDS. Poof, you’re done, it’s moved. Now remove the temp vSwitch and add the vmnic back into the management vDS.
For those who want the steps formatted a bit differently:
- Build your vCenter for the DR site on a DR ESXi host.
- Add a different ESXi host (we’ll call it ESXi2) to the DR vCenter you just built.
- Create your vDS for the DR environment (I’m assuming you’ve already done this; otherwise you probably wouldn’t be reading this article).
- Remove one vmnic (physical uplink) from your DR management vDS (because you built it with at least two).
- Create a temp standard vSwitch using that vmnic (if management sits on a VLAN, make sure to set the VLAN ID on the temp port group).
- Shut down the DR vCenter VM.
- Add it to the inventory on ESXi2 by browsing the datastore on that host and registering the .vmx file.
- Edit settings for the VM and change the network adapter to point at the temp vSwitch.
- Power on the DR vCenter VM.
- It will come up, and you will see all of your vDS port groups become available again.
- Edit settings for the DR vCenter VM and move the network connection back to the original management vDS port group. (You may get an error message at this point; you should be able to click OK and continue on.)
- Now remove the temp vSwitch.
- Add the vmnic from the earlier step back into the management vDS (so you are redundant again).
- Then you’re done.
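If you’d rather script the temp vSwitch part of this, here is a rough pyVmomi sketch run directly against ESXi2. It only covers creating the temp switch and port group and flipping the vCenter VM’s NIC onto it; pulling the vmnic out of the vDS beforehand (and putting it back afterward) is still done from the vCenter that owns the vDS. The switch name, port group, vmnic, VLAN ID, VM name, and credentials are all placeholders.

```python
# Rough pyVmomi sketch of the temp vSwitch trick, run straight against ESXi2.
# The vmnic, VLAN ID, names, and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi2.example.com", user="root", pwd="secret",
                  sslContext=ctx)
content = si.RetrieveContent()
host = content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
net_sys = host.configManager.networkSystem

# Create the temp standard vSwitch on the vmnic you pulled out of the vDS.
vss_spec = vim.host.VirtualSwitch.Specification(
    numPorts=128,
    bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic1"]))
net_sys.AddVirtualSwitch(vswitchName="temp", spec=vss_spec)

# Add a VM port group on it; set the management VLAN ID here if you use one.
pg_spec = vim.host.PortGroup.Specification(
    name="temp-mgmt", vlanId=20, vswitchName="temp",
    policy=vim.host.NetworkPolicy())
net_sys.AddPortGroup(portgrp=pg_spec)

# Once the DR vCenter VM is registered on this host, point its NIC at temp-mgmt.
vm = next(v for v in host.vm if v.name == "vcenter-dr")
nic = next(d for d in vm.config.hardware.device
           if isinstance(d, vim.vm.device.VirtualEthernetCard))
nic.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
    deviceName="temp-mgmt")
change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=nic)
vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))

Disconnect(si)
```

Once the DR vCenter is back up and the VM’s NIC has been flipped back to the vDS port group, the same connection can clean up the temporary switch with net_sys.RemoveVirtualSwitch(vswitchName="temp").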
Now I know some of my friends, maybe even you, will say this is a hard way to move a VM around: just create a standard vSwitch and migrate the networking into the vDS afterward. That’s actually not the point of this post. This is just an alternative way to skin the same cat. It’s not the easiest way to do it, and it may not even be the proper way, but it’s a way to do it.
Some of you will also ask why you would put VMware management on a vDS in the first place. As of this writing, VMware recommends it in their vSphere 5 partner training. Those of us who have been around the block know the dangers of doing so, and if you don’t, you probably shouldn’t be considering a vDS for your management network.
Anyway, the above is one more way to move a vCenter around when you’re working with a vDS.
Tony