Thunderbird Email – Cannot Connect to SMTP Server


I ran into an interesting problem this afternoon. I get email in a host of different ways, one of which is in Thunderbird on one of my desktops. I went to send a friend an email and got an error saying “The mail server sent an incorrect greeting: cannot connect to SMTP server [long string of numbers and colons], connect error 10060.” Here is what the error looks like.

Thunderbird IPv6 error message

I googled all over the place and couldn't find anything of use on this error. I went in and checked all the settings for both my POP and SMTP servers. They all looked good. I checked my firewall logs to see if AVG was blocking Thunderbird's attempts to send or receive emails. It wasn't; everything looked good.

That's when I got to studying the numbers a bit more. They looked like they formed an IPv6 address, and that's exactly what they are. Here is the simple fix if you run into this problem. (Do this at your own risk, as it may cause other problems, especially if you use IPv6.)
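
Before changing anything, you can confirm the mail host is actually resolving to an IPv6 address. A quick check from a command prompt (smtp.example.com is a placeholder; use your provider's SMTP host):

    # Look up the mail server's addresses
    nslookup smtp.example.com
    # An answer full of colons (an AAAA record, e.g. 2001:db8::1) means the
    # host resolves to IPv6, which is what Thunderbird was trying to use here.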

On Windows 10:

  1. Type "ethernet" in the Start menu ("Change Ethernet settings" should appear)
  2. Click "Change adapter options"
  3. That brings up your network adapters; right-click the network adapter and choose Properties
  4. Uncheck Internet Protocol Version 6 (TCP/IPv6)
  5. Click OK
  6. Now try your email and see if it is working

Network Settings, uncheck IPv6

Note: Do this at your own risk; if you rely on IPv6, disabling it this way will break it.

Permanent link to this article: https://www.wondernerd.net/thunderbird-email-cannot-connect-to-smtp-server/

Empowering CUDA Developers with Virtual Desktops (Part 3)


Woot!!! You've made it this far, or maybe you started here. In Part 1 of this blog we looked at the problem of installing the NVIDIA CUDA Toolkit on a virtual machine (VM). In Part 2 we looked at how to install the CUDA Toolkit on a VM. This post covers why installing the NVIDIA CUDA Toolkit on a VM is a big deal for users and organizations. We're going to let the genie out of the bottle to grant our wishes! This blog post answers the “why would you want to do something like that?” question.

The first part of that question, why I would do something like this, is really simple. I like trying out new technology, but I don't want to build everything on physical hardware; that gets really expensive. Even if I have a single box where I multi-boot, I wind up with several different hard drives and I can only run one thing at a time.

That’s where virtualization makes trying things like this out so very awesome. With virtualization, I get my wishes granted.

  1. I get the ability to have multiple types of projects running in my environment. For example, I can run the same project in both Linux and Windows to see which I like better.
  2. I get mistake protection. I'm often fiddling with the kernel, where one slipped keystroke can destroy my whole OS. VM snapshots give me that protection: I can snapshot and roll back when I fat-finger something.
  3. Lastly, I can transport and share what I create. Since a VM is just a set of files, I can export it and move it to a new system, or even share it with someone else who wants to look at what I've created (it's great for troubleshooting).

All three of these things are really easy to do with VMs, not so much with hardware. So I get my three wishes!

Genie from Aladdin holding up 3 fingers saying you get 3 wishes

Now for the second part: why would you want to virtualize the NVIDIA CUDA Toolkit, or for that matter most GPU-based computational workloads? I'm going to address this for both individuals and organizations.

We should first address why individual users ought to want virtualized GPU environments.

Developer Happiness

First off, by going virtual it becomes super simple to keep up with new technology as it's released. For example, NVIDIA releases a new GPU. Wouldn't it be awesome to have that new GPU in your system without having to rebuild the entire host? You can, and it's pretty straightforward. If NVIDIA supports it in virtualized environments (the Volta GPUs are the only class of enterprise GPUs not currently supported) and VMware has it on its HCL, then move the VM to a new host with the new GPU, change its vGPU profile, and if necessary update the driver in the VM. That's it; no reason to start from scratch and reset everything.

Speaking of new VMs: any developers out there wish they could have multiple development environments so they could work on one project per desktop, but can't get approval for multiple systems (oh, and they don't want all those systems stacked up under their desk)? Virtualization can address this in a few different ways. First, if there is an unlimited budget (I've yet to find anyone with that), you can spin up as many identical development systems as you want and just move between them like tabs on your desktop. The second and more realistic way is to support one or two running development environments. When you are done with one, shut it down, and that releases the resources back to the pool where they can be used to run other environments. Think of it as multi-booting a single box on steroids.

Four Computers

It's great having access to all these extra VMs, but what happens when I'm done with a project? Does the VM just vanish? Well, it can if you want. Or wouldn't it be great if you could archive the desktop and save it like you do project files? You can do that! Since VMs are files, they can be archived, which means you can save a development environment for a given project. It's no longer necessary to lose time rebuilding an environment when you revisit an old project.

Wouldn't it be cool to have versioning of your development environment, not just versions of your code but of the actual environment? Virtualizing a workstation allows this with snapshots (note: you can't snapshot a running VM with a vGPU in it at this time). Having a snapshot gives you the ability to move around to different points in time in your development environment. A great example of this is the work I did for this set of blogs: I built a base VM, then snapshotted it as I went along. If something didn't work I just reverted to my branch point and tried again. This saved me hours of work rebuilding my test VM.

This probably sounds great, but it also sounds like a lot of extra work… Every time a new project starts I'd have to do the exact same things to set up a development environment all over again. You know: set global variables, install this package, recompile the kernel headers, etc. No one wants that! Which is another reason virtualizing rocks!!! You can set up a VM exactly how you want and use that as the template for any or all of your development VMs. Then whenever a new project is started, a workspace is already good to go and there is no need to repeat all the standard installation tasks.

Organizational Happiness

I could keep going on with user scenarios, but I know IT admins are chomping at the bit to find out why this is good for their organization. I’d like to switch now to some of the reasons this is a big deal for organizations. Many of these build on the points from above.

Being able to deliver the CUDA Toolkit enables a lot of cool options for organizations. The one that springs to mind first is the enhanced security an organization gains from virtualizing these developer systems. No longer is there an expensive system sitting under someone's desk; the system has been moved into the data center. There is less chance of something wandering off.

The typical response to this is that it can already be done with physical systems by letting developers VNC/RDP/RDSH into the machine. The host is in a secure area, same result. That is true to an extent; however, with virtualization it's possible to secure the data at a very granular level. With VMware it's possible to control what devices are allowed to connect to the VM. That means you can disable removable storage on the VM (and any other USB devices you want) and prevent users from copying data and walking out the door with valuable IP.

That may be all well and good, but what keeps users from copying and pasting code or anything else from their development VM to their local device? That's another cool feature of VMware Horizon: you can disable copy and paste capabilities for VMs. This helps keep digital assets where they are supposed to be, inside the organization.

That is one form of data protection; it would also be great to protect developers' desktops from damage. Above I talked about versioning and archiving the developer's environment. This is another awesome advantage of virtualization (not specific to GPU-based systems): data protection becomes so much easier. You can back up all the developers' systems. That way, when something gets removed or they corrupt their image, you can just recover the last backup and move on.

Wouldn't it be great if it were possible to automate delivery of all these systems for developers and not have to order a new system, configure it to organizational policy, and then take it to the developer? Not to mention the fact that developers all have their own “special” systems which are completely different from the rest of the organization and are never purchased in a standard way, so there are no discounts to be applied… By virtualizing a developer's system, all of a sudden things become standardized. The developer can have a system that matches everyone else's; there's no need for “special” orders. That means you can standardize on systems in the datacenter too!

Why, you might ask? Because chances are IT already orders something like a Dell R740 or Cisco C240 M4 and has special pricing for them. Thus the only significant variation is the GPUs being installed, which happen to be the same ones used for HPC, Machine Learning (ML), Deep Learning (DL), and VDI. That makes it a standard, repeatable order for IT to provide, saving the organization time and money.

This also provides a great life cycle plan. The newest servers can start as HPC, ML, DL, and high end developer systems, then once they’ve aged a bit and people need the next big thing they can be migrated into less demanding roles such as less demanding developers or typical user VDI hosts. This allows the organization to realize additional financial advantages while keeping their developers outfitted with the latest hardware.

You may also hear the claim that performance won’t be on par with a traditional physical system. It actually should be pretty darn close. Because of the architecture used, the hypervisor consumes very few resources and has minimal impact on calls to hardware such as processors and GPUs. Because of this results will be similar between physical and virtual hosts.

I have two things left to cover. One is the perception that there will be tons of unused resources on the ESXi hosts that house these VMs (which is strangely contradictory to the point above). The typical rationale goes: you have a developer's system that consumes 80% of the resources of a physical host, and you can't put another VM or two on the same host as it might impact the performance of the developer's system. Here's a simple way around that: use shares. That's right, use shares for resources inside of VMware. The way a share works is that when there is resource contention, whichever machine has the most shares gets priority on the resources! So if the developer's VM has 10,000 shares and the secondary system has, say, 10 shares, guess which one will win contention arguments for resources? The developer's system! (With 10,000 shares against 10, the developer's VM is entitled to roughly 10,000/10,010, or 99.9%, of a contended resource.) And when there's no contention, both systems just keep right on trucking. (This is probably my favorite benefit that I've talked about in this blog.)

Benefits of Virtualizing the CUDA Toolkit

The last item I'll cover for organizations is the sharing of resources… When a developer's system sits at his or her desk, it's hard to share those resources with other developers while the system is not in use. By virtualizing developer environments, resources can be redistributed to other areas of the organization. Now instead of resources sitting idle under a desk, they can be part of a pool of shared resources. For example, developers in the United States start shutting down their systems (or letting them go idle) around 5 PM, and developers in India start logging in at about the same time… wouldn't it be great if both could share those resources? Or developers go idle around 5 and the HPC, ML, or DL systems kick in and start using the idle resources to speed computing operations.

Hopefully these reasons resonate with both developers and organizations. By enabling developers with VMs configured to leverage GPUs, significant benefits can be gained at both the individual and organizational level (a small sample is in the graphic). The genie is now out of the lamp and it can fulfill the wishes of many.

Be on the lookout for additional blogs on things I've learned from virtualizing the NVIDIA CUDA Toolkit. Hopefully you've enjoyed the blogs on this topic thus far and they have been helpful. If you have questions or comments please be sure to post them below or contact me directly.

 

Permanent link to this article: https://www.wondernerd.net/empowering-cuda-developers-with-virtual-desktops-part3/

Empowering CUDA Developers with Virtual Desktops (Part 2)


In this blog post we are looking at how to install the NVIDIA CUDA Toolkit and its basic setup. In the previous blog post we looked at the typical problem encountered when trying to install the CUDA Toolkit in a virtualized environment with a vGPU. In the follow-on to this post (Part 3) I will detail why this is substantial for both developers and organizations.

A quick review of part 1 before we get into the guts of how to deploy the NVIDIA CUDA Toolkit on a Linux VM. Below is the diagram from the previous post. We can’t use the CUDA Toolkit package manager (RPM/Deb) to deploy the toolkit as it installs a prepackaged driver. That means we have to install it using the run file for the CUDA Toolkit as it has some options around the GPU driver.

NVIDIA CUDA Toolkit Virtual Deployment Model

That said, how do we get the NVIDIA CUDA Toolkit installed on a virtual machine? It's a multi-step process rooted in a correctly configured virtual machine (VM). That means we need properly built virtual infrastructure, specifically ESXi hosts with NVIDIA GPUs. So let's start there, but before we do, a couple of quick notes.

  • At the time of this writing (October 2017) VMware does not support Pascal vGPUs in Linux Desktops running on VMware Horizon. If you need to do this in a supported manner, you will want to use NVIDIA M6, M10, or M60 GPUs.
  • I built this in my lab environment, which you can read about here; note that my configuration is not supported by either VMware or NVIDIA. It works for my purposes, but it's not supported.

Now let's get you those three wishes and get started with setting up our hardware.

Physical Hardware

I'm going to assume we will be using currently (October 2017) supported GPUs in our ESXi hosts. That means NVIDIA M6, M10, M60, P4, P40, P6, or P100. If you are still using Kepler GPUs (K1 or K2), these steps should work but haven't been tested and would be substantially unsupported, as the Kepler GPUs have reached EOL. Other GPUs are not currently supported by VMware. You can check whether your GPU is supported in the VMware HCL for Shared Pass-Through Graphics (aka vGPU).

You will need to follow your hardware vendor's instructions for installing the GPUs in physical servers (ESXi hosts), as hardware vendors are all a bit different; or you may be lucky and it came pre-installed.

That gets us to the installation of the virtual environment. I won't explain how to install ESXi or add the host to a vCenter instance (or even set up a vCenter environment if you are starting completely from scratch). There are plenty of posts that explain how to do this.

The Base Virtual Environment

For the next three sections I will be summarizing material I presented at the GPU Tech Conference in 2017 (Maxwell based cards) and at VMworld 2017 (Pascal based cards) on setting up and configuring vGPUs for a Linux environment. Also, as of this writing (October 13, 2017) it should be noted that VMware does not support the use of Pascal GPUs for Linux virtual desktops; my build, using an NVIDIA P4 GPU, is unsupported. (If you need this support for Pascal GPUs with Linux VMs in your environment, be sure to contact your VMware sales representative.)

Now on to configuring the host.

  • If we are using the Maxwell GPUs (M6, M10, and M60), we first need to use GPUmodeSwitch to set the card to graphics mode (GTC slides 10 and 11). This is not necessary for Pascal series GPUs.
  • At this point we are able to install the Virtual GPU Manager, also known as the VIB. To do this we upload the VIB to a datastore on the ESXi host and install it like any other VIB (GTC slide 12 \ VMworld slide 10); see the sketch after this list.
  • If we are using Pascal based GPUs we will need to change the ECC mode to 0 on the GPU. (VMworld slide 11)
  • Next we want to set the graphics on the ESXi host to Shared Direct. (VMworld slide 12)
  • We should then check and make sure the GPUs are not enabled for Passthrough. (GTC slide 13)
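
For reference, here is a sketch of the VIB install and the Pascal ECC change from an ESXi shell. The datastore path and VIB filename are placeholders; use the ones from your NVIDIA download, and put the host in maintenance mode first:

    # Install the NVIDIA vGPU manager VIB, then reboot the host
    esxcli software vib install -v /vmfs/volumes/datastore1/NVIDIA-vGPU-VMware_ESXi_Host_Driver.vib
    # Pascal only: disable ECC on the GPU (takes effect after the next reboot)
    nvidia-smi -e 0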

That gets the basics of the virtual environment setup.

VMware Horizon

Now we need to have a quick side chat about virtual desktops as compared to other VMs. To use these virtual desktops to their full capability we need an alternative display adapter to gain access to the VMs. We can't just use the VMware Virtual Console to access the virtual machine. In a few cases it might be acceptable just to let users connect to a development system by SSH. However, it will probably be more desirable to access a full GUI.

No virtual display console with vGPU. To see the display, an alternative connection such as Horizon or RDSH must be used.

The simple reason the VMware Virtual Console won't work when a vGPU is used is that when the virtual machine is configured to use the vGPU, the vGPU is not mapped back to the default console. Think of it like installing a new GPU in a physical desktop: you either plug into the onboard display adapter or you plug into the video card. The VM's cable is plugged into the onboard adapter, not the vGPU. (See the diagram on the right.)

So how does one display a desktop to developers? The best way, in my opinion (I work for Dell, majority shareholder of VMware), is to use VMware Horizon, which provides virtual desktop infrastructure (VDI) for displaying desktops. Setup and configuration of VMware Horizon is beyond the scope of this blog. Needless to say, Horizon provides a lot of flexibility and power in the datacenter, and we will be leveraging it for the purposes of this post.

vGPU Licensing

Because of the way we are using the GPU for virtual desktops we need to license the VMs. This is done through the NVIDIA GRID License Server. It can be setup on either a Linux OS or a Windows OS. The license server can be downloaded from NVIDIA in the same place you downloaded the VIB and guest OS driver.

Installation is straightforward and outlined in the GRID License Server Release Notes. I also detail the setup and licensing of VMs in GTC slides 15 to 26 and VMworld slides 14 to 19.

You have to set up and use NVIDIA licensing for NVIDIA vGPUs. If you try to run a vGPU without proper licensing, the VM will not function correctly and you will get errors when you attempt to run CUDA applications. The one I get most frequently is the one below, CUDA Error code=46 (cudaErrorDevicesUnavailable). It's caused by the VM not having a license.

CUDA Error code=46 (cudaErrorDevicesUnavailable)
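
For reference, vGPU licensing on a Linux VM is configured in /etc/nvidia/gridd.conf. A minimal sketch, using a hypothetical license server address; check the GRID licensing guide for the FeatureType value that matches your license edition:

    # /etc/nvidia/gridd.conf (restart the nvidia-gridd service after editing)
    ServerAddress=grid-license.example.com
    ServerPort=7070
    FeatureType=1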

Virtual Machine Virtual Hardware Configuration

At this point we can build a base virtual machine. After all, we want these desktops to be repeatable, quickly deployable, easily protectable, and, when we are ready to reclaim resources, disposable.

The first task is to build a basic virtual machine. For my initial testing I built a CentOS 7 Linux VM. I patched it, installed base development tools such as gcc, and completed other standard OS deployment operations. Once we have prepared a base VM image, we shut down the guest VM.

  • With the guest VM shut down, we edit the virtual hardware settings for our VM. (GTC slides 28 and 29 \ VMworld slides 22 and 23)
    • We use the New Device drop-down at the bottom of the Edit Settings screen to select a Shared PCI Device
    • Then we select the desired vGPU profile (how much of a vGPU a user can use)
    • Lastly we want to click the “Reserve all Memory” button (this is important; otherwise the VM may not power on)
    • Then we can power on the VM
  • Inside the VM we are going to set up some networking and disable Nouveau (see the Nouveau sketch after this list). (GTC slides 32 and 33 \ VMworld slides 24 and 25)
  • At this point we can install the NVIDIA GPU drivers. (GTC slides 34 and 35 \ VMworld slides 26 and 27)
    • It’s important to note that the driver installed during this step must match the VIB installed in previous steps!
  • Once the NVIDIA driver is installed we will install the VMware Horizon Agent on the VM. (GTC slide 35 \ VMworld slide 27)
  • At this point we can reboot the VM
  • When the VM reboots the VMware Virtual Console of the VM will no longer function and you will need to access the VM using VMware Horizon, SSH, or some other console viewer.
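
Since the bullets above gloss over disabling Nouveau, here is a sketch of the usual approach on CentOS 7 (the file name is a convention, not a requirement):

    # Blacklist the open-source nouveau driver so the NVIDIA driver can load
    echo "blacklist nouveau" | sudo tee /etc/modprobe.d/blacklist-nouveau.conf
    echo "options nouveau modeset=0" | sudo tee -a /etc/modprobe.d/blacklist-nouveau.conf
    # Rebuild the initramfs so nouveau stays out at boot, then reboot
    sudo dracut --force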

At this point I recommend verifying that the VM functions correctly in VMware Horizon by adding it to a dedicated manual desktop pool. If it does we are ready for the next step.
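
Another quick sanity check, from an SSH session inside the VM: if the driver and vGPU profile came up correctly, nvidia-smi should report the vGPU as the device.

    nvidia-smi
    # Expect a device name along the lines of "GRID P4-2Q", matching the
    # vGPU profile assigned to the VM; an error here means the driver or
    # profile needs another look.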

Installing the NVIDIA CUDA Toolkit

Now I will get a little more detailed on the install, since I haven't done a session on setting this up yet. It is important to note that Changjiang's blog on building a deep learning box is what actually helped me figure out how to install the NVIDIA CUDA Toolkit on a virtual machine. As of this blog post I am using version 9.0.176 of the CUDA Toolkit. With that, let's get started.

  1. Download the run file version of the NVIDIA CUDA Toolkit from the developer site – https://developer.nvidia.com/cuda-downloads
  2. Perform the pre-installation tasks in the CUDA Toolkit Documentation
    1. Verify that your VM shows it has an NVIDIA GPU in it with the command: lspci | grep -i nvidia
    2. Verify gcc is installed: gcc --version
    3. Install the Kernel Headers and Development packages (varies by OS)
  3. Now skip down to the Runfile Installation section
    1. Disable Nouveau if you haven’t already
    2. At this point drop into runlevel 3 (text mode) – when you do this the virtual console will be functional again until you exit the run level.
    3. As sudo you want to execute the run file: sudo sh ./cuda_<version>_linux.run
      1. Follow the prompts on screen
      2. When asked to install the GPU driver, enter No (N); this is the most important part of this process.
        The reason for this is that if you select yes, the installer will overwrite the already installed driver with the driver included in the package, and, as you'll remember from earlier, the driver version in the VM has to match the VIB version.
      3. Finish answering the prompts and complete the installation of the run file
  4. At this point we can proceed to the post-installation steps
    1. Add /usr/local/cuda-9.0/bin to the PATH variable:
      export PATH=/usr/local/cuda-9.0/bin${PATH:+:${PATH}}
    2. We then need to add the 64-bit library to the LD_LIBRARY_PATH variable:
      export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
      (I've had issues with this variable entry persisting in my CentOS VM; if you have issues running the examples after a reboot, check whether this variable is empty. See the sketch after these steps for making the variables persistent.)
    3. Install the writable samples
      cuda-install-samples-9.0.sh <dir>
      I typically put this in the user's home path (~)
    4. Make the samples:
      cd ~/NVIDIA_CUDA-9.0_Samples
      make
      This can take a while to run; you may want to do this over lunch
  5. Reboot your VM, if you did this via the console you will need to return to your VMware Horizon connection to the VM.
  6. Open up a console and change to the location of the files you built. Typically:
    cd ~/NVIDIA_CUDA-9.0_Samples/bin/x86_64/linux/release/

    1. Run deviceQuery
      ./deviceQuery
      Output will look something like the deviceQuery screenshot (it will not look exactly the same though).
    2. Run bandwidthTest
      ./bandwidthTest
      Output will look something like the NVIDIA CUDA Toolkit bandwidthTest screenshot (it will not look exactly the same though).
    3. If you are curious what I got for all the different files in my example, you can review them here.
  7. At this point you are ready to use the NVIDIA CUDA Toolkit or install additional components such as TensorFlow.
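
As noted in the post-installation steps above, the exported variables don't survive a reboot on their own. A sketch of making them persistent for a single user by appending them to ~/.bashrc (the single quotes keep the ${...} expansions literal in the file):

    echo 'export PATH=/usr/local/cuda-9.0/bin${PATH:+:${PATH}}' >> ~/.bashrc
    echo 'export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}' >> ~/.bashrc
    source ~/.bashrc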

To summarize the process (see the picture below): we first installed the VIB on our ESXi hosts with GPUs (1). Next we installed the NVIDIA GPU driver in our virtual machine (2). After which we installed the VMware Horizon Agent on the virtual machine (3). Lastly we used the runfile install method to install the NVIDIA CUDA Toolkit on the VM (4). At this point we can finish customizing the VM and use it as our master image to deliver to users.

This concludes the installation of the NVIDIA CUDA Toolkit, the second in my multi-part blog series on installing the NVIDIA CUDA Toolkit on a VM. If you haven't already, be sure to read the previous blog post (Part 1), where we looked at the typical problems encountered when trying to install the CUDA Toolkit. Also be sure to read the next blog in this series about why this is a big deal for organizations and developers (Part 3).

If you have issues, run into any snags, or hit something else entirely, please either share in the comments or contact me.

Permanent link to this article: https://www.wondernerd.net/empowering-cuda-developers-with-virtual-desktops-part2/

Empowering CUDA Developers with Virtual Desktops (Part 1)


I'm very excited to share that I have successfully installed the NVIDIA CUDA Toolkit on a Linux CentOS 7 virtual desktop running on Horizon 7.1 in my home lab. (No, I'm not the first, but it's a big deal for me.) You might be thinking “big deal, you've deployed an app on a virtual machine!” or “why would anyone want to use virtual machines for CUDA development? They want/need a whole dedicated piece of hardware; besides, it's easier that way!”

When I hear objections like this, all I can think of is the Disney movie Aladdin where the Genie says “Master, I don’t think you quite realize what you’ve got here!”

Disney's Aladdin - Genie's introduction

In this series of blogs I'm going to let the genie out of the lamp. In this post I'm going to lay out the problem of installing the CUDA Toolkit on a Horizon virtual machine. In the next post I will lay out how to install it on a virtual machine. The following post will then explain why a configuration like this is a significant benefit to developers and companies. There may also be follow-on parts that I haven't even thought of yet, as I'm still excited about proving this works correctly.

With that let’s rub the lamp and see what we get.

Most people in the virtualization space think of virtual GPUs (vGPUs) as something that provides enhanced graphics capabilities to virtual desktops (using VDI), not something for processing data. Those developing applications that use CUDA (deep learning, machine learning, big data, etc.) tend to think of GPUs as a tool for improved data processing performance. Both perceptions are barriers to virtualizing developer workloads.

Because of this, CUDA developers don't consider virtualization and virtualization teams don't consider CUDA developers. The net result is each side writes the other off, leading to no documentation on how to deliver a given product on a given platform (CUDA on a VM). You can see this in both the NVIDIA CUDA Toolkit and in vGPU technology.

Deploying vGPU technology involves a fair number of requirements and configuration steps (as you can see in my GTC 17 session on setting up vGPUs for Linux VMs). Many times you are required to use a specific set of matching drivers, one for the ESXi host (in VMware, a VIB) and one for the VM (a driver). If those don't match, the VM may fail to work correctly or at all.

With the CUDA Toolkit, the typical install path on Linux OSs is a package manager install (RPM/Deb) that is configured to deploy a specific GPU driver version. This driver version, to the best of my knowledge, has never matched the driver version for a vGPU. There is also no easy way to change that driver in the RPM or Deb file.

Easy Button

And this is typically where the discussion ends… “My virtual machine can only run driver X.Y.Z” AND “The RPM deploys driver A.B.C” SO… “This just won't work; make it a physical machine and let's move on.”

This is the point where we need an “easy button” to press and make the drivers magically match up so we can run the CUDA Toolkit on a VM.

I repeatedly tried installing the CUDA Toolkit with the package manager installs, as outlined above, on VMs, trying various combinations and orders to see if there was a magic way to get the package installer to accept and match the driver I was using. There's not. There is another method, though, that will get your VM running the CUDA Toolkit without much issue: the runfile installation method.

You can see this in the image below. We have a VM with vGPU capabilities running on an ESXi host on the left. On the right we have two different deployment methods: package manager and runfile. Package manager based installs won't work on VMs, as the driver installed as part of the package is not compatible with the one needed on the VM. However, the runfile deployment of the CUDA Toolkit will work, as the GPU driver is variable. I'll cover the steps of how to deploy the NVIDIA CUDA Toolkit runfile in the next blog post, followed by how this unleashes a powerful genie for both developers and organizations in the third post of this series.
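
As a teaser for the next post, the runfile also accepts non-interactive flags, which is handy once you know which answers you want. A sketch, assuming a CUDA 9.0-era installer (flag support varies by toolkit version, so check the installer's --help output first):

    # Install only the toolkit, skipping the bundled GPU driver entirely
    sudo sh ./cuda_<version>_linux.run --silent --toolkit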

NVIDIA CUDA Toolkit Virtual Deployment Model

Permanent link to this article: https://www.wondernerd.net/empowering-cuda-developers-with-virtual-desktops-part1/

VMworld VMTN6636U: GPU Enabled Linux VDI


It’s Wednesday of VMworld… Today I present my vBrownbag Tech Talk on GPU Enabled Linux VDI (VMTN6636U). I want to provide access to the slides used in this session. I will link the recording here as well once the session is posted.

Front slide of VMworld Session VMTN6636U

It's important to note that, at the time of the session, what I am doing is completely unsupported. The hardware I tested with is not on the HCL for the GPUs, and the GPUs have not yet been tested with Linux VDI by VMware. That said, I'm also pleased to say that it works really well.

If you are looking for a supported way to deliver vGPU enabled Linux VDI please consider using the NVIDIA M60 GPU. I detailed this in my GPU Tech Conference session.

I would like to thank the NVIDIA and VMware program teams for reviewing the content in my session. Their help with this project is invaluable.

Hopefully this material is helpful. Be sure to reach out if you have questions.

 

Permanent link to this article: https://www.wondernerd.net/vmworld-vmtn6636u-gpu-enabled-linux-vdi/

My NVIDIA GRID 5.0 Testing


In my previous post I covered some of what is new with NVIDIA GRID 5.0 and the NVIDIA Pascal cards. In this post I’m going to cover some of the testing I’ve done with GRID 5.0 and the NVIDIA Tesla P4 GPU.

NVIDIA was nice enough to provide the NVIDIA GRID Community Advisors (NGCA) access to P4 GPUs and beta candidates of GRID 5.0 for testing in our home labs. This is something I couldn’t pass up. Which means I had to stand up a home lab.

Tesla P4 GPU

It's interesting: this cutting-edge frontier is one of the few spaces that still requires you to have physical equipment to do testing. I can't go out and run this in a cloud somewhere and test it. So for the last month, my wonderful wife has tolerated the sound of servers running in our basement while I put GRID 5.0 through its paces.

That said, let's talk about my lab setup. Data for this post actually comes from two different labs; one was a work lab that has since been repurposed. Currently I just have a home lab. (Skip ahead five paragraphs if you aren't interested in my home lab hardware.)

What am I running to do all this GRID 5.0 testing… maybe something like Dell R730s loaded with NVIDIA P40s or P100s? Not really. My home lab, like most folks' home labs, is an eBay special. That said, it's worth noting this entire setup CANNOT be found on any HCL for VMware or NVIDIA.

My home lab consists of an R610 running a management environment (VSA, jump box, AD, Connection Server, Security Server, NVIDIA license servers, etc.). It has dual Intel E5620 quad-core processors, 24GB of RAM, and five 146GB SAS drives in a RAID 5 configuration. All of the VMs run on local storage. It works as a system just to support management.

This is connected to the world using a 1Gb ZyXEL 16-port switch and an old D-Link router I had sitting around. Standard networking setup for 1Gb ports; nothing really special to tell you about.

The other system in my arsenal is a Cisco C240-M3. It's running Intel E5-2640 procs at 2.50GHz with 6 cores each. The system has 64GB of memory, and I have it loaded with two 74GB SAS drives in a RAID 1 for my boot volume and three 146GB SAS drives in a RAID 5 for my storage volume. The nice thing about this server is that it supports the NVIDIA K1 and K2 cards should I need them for testing. I picked it up on eBay for about $500 + drives (http://www.ebay.com/itm/Cisco-UCS-C240-M3S-v02-Server-2x-E5-2640-2-50-GHz-6-Core-64GB-Dual-PS-No-HDD/262840746028).

If you hadn’t guessed by now my testing was done using VMware Horizon 7.1 (build-5170113). My vCenter is version 6.5 (build 5705665) and the ESXi hosts are at version 6.5 (C240 is build 5310538 and the R610 is build 4887370).

This is where the standard stuff ends and the fun begins…

The P4 GPUs that NVIDIA was nice enough to provide the NGCA members went into my C240-M3 host. I'm going to save the install and setup of the GPU for another post. The C240-M3 host only runs my test VMs, so I can avoid artifacts caused by other VMs that aren't part of what I'm testing. With the P4 installed and configured in the C240-M3, I built some CentOS 7 VMs.

NVIDIA X Server Settings Screen

To test my VMs I used GFXBench and Unigine Heaven 4.0; for both I used the Linux versions of the software. I chose GFXBench because of its testing methodology: it has tests for several different GPU factors, and they aren't all in a single test. I also chose to test with Unigine because it's what everyone else tests with, and I want to make this information as relevant to as many of you as possible.

For my basic tests I kept the RAM in each VM at 8GB and vCPUs at 4, with a native screen resolution of 1920×1080 (16:9) @ 29Hz. (See the NVIDIA X Server Settings screenshot.)

The table below shows the GFXBench results for each of its tests for each vGPU profile in VMware Horizon.

GFXBench test results*

Test | P4-8Q | P4-4Q | P4-2Q | P4-1Q
Car Chase | 6983.29 Frames (118.161 FPS) | 6940.41 Frames (117.435 FPS) | 6953.41 Frames (117.655 FPS) | 6997.82 Frames (118.406 FPS)
1080p Car Chase Off screen | 9916.48 Frames (167.791 FPS) | 9924.63 Frames (167.93 FPS) | 9886.87 Frames (167.292 FPS) | 9935.8 Frames (168.118 FPS)
1440p Manhattan 3.1.1 Off screen | 8828.87 Frames (142.401 FPS) | 8796.17 Frames (141.874 FPS) | 8801.01 Frames (141.952 FPS) | 8802.01 Frames (141.968 FPS)
Manhattan 3.1 | 11092.4 Frames (178.91 FPS) | 10915.6 Frames (176.058 FPS) | 11035.8 Frames (177.997 FPS) | 11068.1 Frames (178.518 FPS)
1080p Manhattan 3.1 Off screen | 13633 Frames (219.889 FPS) | 13623.2 Frames (219.73 FPS) | 13476.3 Frames (217.359 FPS) | 13559.1 Frames (218.696 FPS)
Manhattan | 12076.2 Frames (194.778 FPS) | 11640.7 Frames (187.753 FPS) | 11585.2 Frames (186.858 FPS) | 11774.7 Frames (189.914 FPS)
1080p Manhattan Off screen | 14835.3 Frames (239.279 FPS) | 14671.1 Frames (236.63 FPS) | 14473.8 Frames (233.449 FPS) | 14539.1 Frames (234.502 FPS)
T-Rex | 25533.7 Frames (455.959 FPS) | 24787.8 Frames (442.639 FPS) | 26636.3 Frames (476.136 FPS) | 24805.8 Frames (442.96 FPS)
1080p T-Rex Off screen | 42027 Frames (750.482 FPS) | 41957.8 Frames (749.246 FPS) | 40691.8 Frames (726.64 FPS) | 41538.5 Frames (741.759 FPS)
Tessellation | 21398.3 Frames (713.276 FPS) | 21415.9 Frames (713.862 FPS) | 21883.7 Frames (729.457 FPS) | 22267.8 Frames (742.259 FPS)
1080p Tessellation Off screen | 88862.3 Frames (1481.04 FPS) | 88410.1 Frames (1473.5 FPS) | 88346.9 Frames (1472.45 FPS) | 88181 Frames (1469.68 FPS)
ALU 2 | 18045 Frames (601.5 FPS) | 17631.8 Frames (587.726 FPS) | 17716.9 Frames (590.564 FPS) | 17850.1 Frames (595.002 FPS)
1080p ALU 2 Off screen | 62342.9 Frames (1039.05 FPS) | 62537.5 Frames (1042.29 FPS) | 62582.1 Frames (1043.04 FPS) | 62643.6 Frames (1044.06 FPS)
Driver Overhead 2 | 2450.51 Frames (81.6837 FPS) | 2679.11 Frames (89.3036 FPS) | 2492 Frames (83.0667 FPS) | 2600.05 Frames (86.6682 FPS)
1080p Driver Overhead 2 Off screen | 5102.23 Frames (85.0372 FPS) | 5589.44 Frames (93.1574 FPS) | 5194.05 Frames (86.5675 FPS) | 5368.02 Frames (89.4669 FPS)
Texturing | 96098 MTexel/s (63.5544 FPS) | 100066 MTexel/s (64.4044 FPS) | 100061 MTexel/s (64.9113 FPS) | 100385 MTexel/s (65.2536 FPS)
1080p Texturing Off screen | 99278 MTexel/s (95.8384 FPS) | 99665 MTexel/s (95.9874 FPS) | 99797 MTexel/s (98.0892 FPS) | 99204 MTexel/s (95.8811 FPS)
Render Quality | 4541.54 mB PSNR (866.644 FPS) | 4541.54 mB PSNR (873.211 FPS) | 4541.54 mB PSNR (1023.98 FPS) | 4541.54 mB PSNR (977.356 FPS)
Render Quality (High Precision) | 4541.54 mB PSNR (887.633 FPS) | 4541.54 mB PSNR (900.2 FPS) | 4541.54 mB PSNR (1059.25 FPS) | 4541.54 mB PSNR (1008.66 FPS)

 

Unigine Heaven was run with the settings defined in the table below.

Unigine Heaven 4.0 (Basic Edition) Settings

Preset Custom
API OpenGL (grayed out option)
Quality High
Tessellation Normal
Stereo 3D Disabled
Multi-monitor Disabled
Anti-aliasing X2
Full Screen True
Resolution System

 

Testing results from Unigine Heaven

Unigine Heaven test results*

P4-8Q P4-4Q P4-2Q P4-1Q
FPS 28.3 28.2 28.1 28.2
Score 713 711 709 711
Min FPS 7.2 12.2 12.5 11.3
Max FPS 41.9 44.2 45.5 42.6
Mode 1920×1080 2xAA fullscreen 1920×1080 2xAA fullscreen 1920×1080 2xAA fullscreen 1920×1080 2xAA fullscreen

The above are the results of my testing. You probably noticed the asterisks (*) on the results. This is my caveat on these results. These aren’t the results you want to rely on for a production environment. I ran these tests once per profile, on non-HCL hardware, in a non-optimized configuration. These results may also be impacted by the fact that no other VMs were running on this host and thus consuming resources during the test. Your individual results may vary significantly. Please consider my test results as one point of data and not a complete answer to how a similar configuration will function in your environment. In short your mileage may vary.   

At the beginning of this blog I mentioned I had two labs I was using. Up to now you have heard about my P4 testing. As some of you may know, I presented a session at the GPU Tech Conference with my good friend Trey Johnson on getting started with GPUs for Linux virtual desktops on VMware Horizon. I ran those tests in a work lab environment. The material for that session was run on Cisco C240-M4s with NVIDIA M60 GPUs and GRID 4.

I made one mistake that I am regretting: before the lab was repurposed, I forgot to capture the full set of results from my testing. All I have are the maximums and minimums that were discussed during the session. At the same time, the results do provide a good set of comparison points. In the table below are the test results from the GTC session showing the highest and lowest results.

M60 with GRID 4 GFXBench Test Results (incomplete)*

Test M60-8Q M60-4Q
Texturing 44.6732 FPS 44.8432 FPS
Driver Overhead 2 61.3333 FPS 61.5149 FPS
1080p Texturing Off screen 90.7743 FPS 98.2536 FPS
1080p Tessellation Off screen 1212.87 FPS 1212.62 FPS

The same bit as above with the asterisks (*), these are single pass results from a non-optimized environment, your results may vary significantly.

These results correspond similarly to the results from the P4 GPU tests. You can see that the highest and lowest results for both the P4 GPU and the M60 are the same tests. I’ve put the relevant results in a side by side table for comparison below.

NVIDIA P4 GPU compared to M60 GPU testing with GFXBench (incomplete)*

Test P4-8Q (GRID 5) M60-8Q (GRID 4) P4-4Q (GRID 5) M60-4Q (GRID 4)
Texturing 63.5544 FPS 44.6732 FPS 64.4044 FPS 44.8432 FPS
Driver Overhead 2 81.6837 FPS 61.3333 FPS 89.3036 FPS 61.5149 FPS
1080p Texturing Off screen 95.8384 FPS 90.7743 FPS 95.9874 FPS 98.2536 FPS
1080p Tessellation Off screen 1481.04 FPS 1212.87 FPS 1473.5 FPS 1212.62 FPS

You might be getting tired of this by now… same bit as above with the asterisks (*), these are single pass results from non-optimized environments, your results may vary significantly.

As you can see from above, the P4 exceeds the M60 in the GFXBench tests in all but one instance (1080p Texturing Off screen at the 4Q profile). This shows comparable performance, in single-pass, non-optimized tests, between the NVIDIA P4 and NVIDIA M60 GPUs.

To put this in perspective: NVIDIA basically provided half an M60 (power, slot space, etc.) in the P4 and met or exceeded vGPU performance. Now think about what that means for servers… you can put GPUs in servers for EUC deployments that you couldn't before (consult vendor documentation for limits and compatibility). This is nice when, for instance, you need to upgrade some lower-end applications that take advantage of GPUs, for example Microsoft Office and Windows 10 (ok, that's an operating system, you caught me), and the host for those desktops is a year or two old. Instead of a rip and replace, add P4s or P40s to the servers (depending on support) and away you go.

I can't remember who said it at the GPU Tech Conference this year, but it really stood out to me; it went something like this: “We've entered a new age in the computer industry, an age where servers won't be sold without a GPU for processing.” In my opinion, this latest release of GRID 5.0 and the Pascal GPUs make vGPU-based EUC accessible for all but a couple of corner cases. Going forward, adding GPUs should be a requirement for EUC deployments.

I hope this blog post was helpful. If you would like to find out more be sure to read these other great posts about NVIDIA GRID 5.0 from other NGCA members:

Permanent link to this article: https://www.wondernerd.net/my-nvidia-grid-5-0-testing/

Changing the GPU Virtualization Game


Today NVIDIA announced the NVIDIA GRID August 2017 release (aka GRID 5.0), and it is a major change for the virtualization industry. Over the next few days I will be publishing a few blogs about NVIDIA GRID vPC (Virtual PC) and Quadro vDWS (Virtual Data Center Workstation). This blog covers some of what's new with GRID 5.0; soon I will publish some details on testing I did using an NVIDIA P4 GPU with GRID 5.0. I will also have additional blogs that go live during VMworld around GRID vPC and Quadro vDWS. These will be part of the material I cover in my vBrownBag Tech Talk on vGPU Enabled Linux VDI (VMTN6636U).

GRID 5.0 Announcement Image

Now let's get into the details of what's new…

The first thing worth noting is that NVIDIA GRID vPC is really a software solution for datacenters. You have NVIDIA Tesla (datacenter) GPUs, and then you have NVIDIA GRID software (the August 2017 release as of this writing); those two combined provide a powerful datacenter solution for virtual desktops (VDI). This means you have the ability to get more out of your NVIDIA GPUs and can continue to use them as new GRID software is released.

Pascal Series GPUs

That gets us to the first big advance with GRID 5.0. It is now possible to use both Maxwell and Pascal GPUs in your virtual environments. This means continued support for legacy Maxwell (M60, M6, and M10) GPUs as well as support for Pascal (P40, P6, P4, and P100) GPUs, allowing companies to maintain their previous investments longer.

Now what could I mean by maintaining investments longer? I buy a GPU and what it's capable of is based on the hardware, right? Not exactly… When Citrix announced the ability to monitor GPUs from within Citrix Director (https://www.citrix.com/blogs/2017/06/01/monitoring-of-nvidia-gpus-in-citrix-director/), that was a capability delivered by GRID software. Thus, as NVIDIA adds more monitoring functionality with newer versions of GRID, virtualization vendors, 3rd-party app developers, and even you can take advantage of it to monitor GPUs. If a new monitoring feature is added and your support contract is current, you can upgrade your GRID software to use the new feature. That's right: the second big advancement, in my opinion, with GRID 5.0 is the decoupling of hardware and software functionality. NVIDIA is providing the ability to get continual new features even on older GPUs.

NVIDIA GRID 5.0 Monitoring Features
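
On the host side, some of this monitoring surfaces through nvidia-smi's vgpu subcommand that ships with this release. A sketch from an ESXi shell; output fields may differ by version:

    # List running vGPUs and the VMs they belong to
    nvidia-smi vgpu
    # Query more detail, including per-vGPU utilization
    nvidia-smi vgpu -q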

You might be thinking: this is great! I'll just go in and download GRID 5 before my license expires and I'm all set! Not so fast; NVIDIA did a fair amount of work on GRID licensing. With GRID 5.0 you have to have valid licenses. If you don't, performance is degraded. I've tested this on Linux based desktops, and even Linux desktops have their GPU performance capped if not licensed properly.

One enhancement delivered with the combined power of GRID 5.0 and the Pascal cards is the ability to run CUDA based workloads across the vGPU profiles. In other words, you can build VMs with various vGPU configurations that run CUDA based code, such as some Adobe applications, ArcGIS, AutoCAD, etc. You might be thinking this was possible before, and you're right, sort of: you had to use a full GPU profile like the M60-8A. With the Pascal architecture NVIDIA added preemption capabilities to the GPU. This is a fancy way of saying some applications with CUDA capabilities can share the same GPU. This possibly even brings HPC, ML, DL, and AI into the virtual realm, which I'll dig into in another blog.

The next cool thing that gets unlocked with GRID 5.0 is actually a feature of the Pascal cards and not a GRID feature. Those who have set up previous generations of GPUs (Maxwell) for EUC/VDI environments are probably familiar with having to use the gpumodeswitch command to change a Maxwell GRID GPU from compute to graphics mode. For those not familiar with it: you had to install gpumodeswitch, make the change, reboot, uninstall gpumodeswitch, reboot, install the VIB, reboot, and finally everything was in place. Guess what, you don't have to do that any longer. The Pascal GPUs are able to switch between compute and graphics without gpumodeswitch. This reduces the reboots and steps required to get a GPU up and running. I'll cover this during my vBrownbag Tech Talk at VMworld and its accompanying blog.

It should be noted that the removal of gpumodeswitch is a feature of the Pascal cards and not something that's part of GRID 5.0. This means if you are using the M10, M6, or M60 GPU you will still need to use gpumodeswitch before you can use GRID 5.0.
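
For anyone who hasn't run it before, a sketch of the gpumodeswitch step Maxwell cards still need (run it from the environment NVIDIA documents for your platform, and reboot afterwards):

    # Flip a Maxwell GRID card from compute to graphics mode
    gpumodeswitch --gpumode graphics
    # Confirm the current mode afterwards
    gpumodeswitch --listgpumodes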

NVIDIA P6 GPU

You may have noticed above I mentioned an NVIDIA GPU that you may not have heard of: the NVIDIA Pascal P6 GPU. NVIDIA announced the P6 GPU along with GRID 5.0; it is a GPU specifically for blade server deployments. With deployments that leverage blade technologies it's possible to provide the latest GPU technology and maintain a consistent operational model in the datacenter. Hence, if you use rack-mount servers there are GPUs for them, and if you prefer blade servers, there are GPUs for them too.

 

At this point I bet you’re wondering what the various vGPU profiles for the Pascal GPUs are. In the tables below I’ve assembled a list of the vGPU profiles for the P40, P6, and P4 GPUs.

 

NVIDIA GRID P40 vGPU Profiles (Link to P40 Data Sheet)

Name | Max Instances | FB Memory | Display Heads | Max X Res | Max Y Res | Frame Rate Limit | License
P40-1Q | 24 | 1024MiB | 4 | 4096 | 2160 | N/A | GRID-Virtual-WS,2.0; GRID-Virtual-WS-Ext,2.0
P40-2Q | 12 | 2048MiB | 4 | 4096 | 2160 | N/A | GRID-Virtual-WS,2.0; GRID-Virtual-WS-Ext,2.0
P40-3Q | 8 | 3072MiB | 4 | 4096 | 2160 | N/A | GRID-Virtual-WS,2.0; GRID-Virtual-WS-Ext,2.0
P40-4Q | 6 | 4096MiB | 4 | 4096 | 2160 | N/A | GRID-Virtual-WS,2.0; GRID-Virtual-WS-Ext,2.0
P40-6Q | 4 | 6144MiB | 4 | 4096 | 2160 | N/A | GRID-Virtual-WS,2.0; GRID-Virtual-WS-Ext,2.0
P40-8Q | 3 | 8192MiB | 4 | 4096 | 2160 | N/A | GRID-Virtual-WS,2.0; GRID-Virtual-WS-Ext,2.0
P40-12Q | 2 | 12288MiB | 4 | 4096 | 2160 | N/A | GRID-Virtual-WS,2.0; GRID-Virtual-WS-Ext,2.0
P40-24Q | 1 | 24576MiB | 4 | 4096 | 2160 | N/A | GRID-Virtual-WS,2.0; GRID-Virtual-WS-Ext,2.0
P40-1A | 8 | 1024MiB | 1 | 1280 | 1024 | N/A | GRID-Virtual-Apps,3.0
P40-2A | 4 | 2048MiB | 1 | 1280 | 1024 | N/A | GRID-Virtual-Apps,3.0
P40-3A | 2 | 3072MiB | 1 | 1280 | 1024 | N/A | GRID-Virtual-Apps,3.0
P40-4A | 1 | 4096MiB | 1 | 1280 | 1024 | N/A | GRID-Virtual-Apps,3.0
P40-6A | 4 | 6144MiB | 1 | 1280 | 1024 | N/A | GRID-Virtual-Apps,3.0
P40-8A | 3 | 8192MiB | 1 | 1280 | 1024 | N/A | GRID-Virtual-Apps,3.0
P40-12A | 2 | 12288MiB | 1 | 1280 | 1024 | N/A | GRID-Virtual-Apps,3.0
P40-24A | 1 | 24576MiB | 1 | 1280 | 1024 | N/A | GRID-Virtual-Apps,3.0
P40-1B | 24 | 1024MiB | 4 | 2560 | 1600 | N/A | GRID-Virtual-PC,2.0; GRID-Virtual-WS,2.0; GRID-Virtual-WS-Ext,2.0

NVIDIA GRID P6 vGPU Profiles (P6 Data Sheet)

Name | Max Instances | FB Memory | Display Heads | Max X Res | Max Y Res | Frame Rate Limit | License
P6-1Q | 16 | 1024MiB | 4 | 4096 | 2160 | N/A | GRID-Virtual-WS,2.0; GRID-Virtual-WS-Ext,2.0
P6-2Q | 8 | 2048MiB | 4 | 4096 | 2160 | N/A | GRID-Virtual-WS,2.0; GRID-Virtual-WS-Ext,2.0
P6-4Q | 4 | 4096MiB | 4 | 4096 | 2160 | N/A | GRID-Virtual-WS,2.0; GRID-Virtual-WS-Ext,2.0
P6-8Q | 2 | 8192MiB | 4 | 4096 | 2160 | N/A | GRID-Virtual-WS,2.0; GRID-Virtual-WS-Ext,2.0
P6-16Q | 1 | 16384MiB | 4 | 4096 | 2160 | N/A | GRID-Virtual-WS,2.0; GRID-Virtual-WS-Ext,2.0
P6-1A | 16 | 1024MiB | 1 | 1280 | 1024 | N/A | GRID-Virtual-Apps,3.0
P6-2A | 8 | 2048MiB | 1 | 1280 | 1024 | N/A | GRID-Virtual-Apps,3.0
P6-4A | 4 | 4096MiB | 1 | 1280 | 1024 | N/A | GRID-Virtual-Apps,3.0
P6-8A | 2 | 8192MiB | 1 | 1280 | 1024 | N/A | GRID-Virtual-Apps,3.0
P6-16A | 1 | 16384MiB | 1 | 1280 | 1024 | N/A | GRID-Virtual-Apps,3.0
P6-1B | 16 | 1024MiB | 4 | 2560 | 1600 | N/A | GRID-Virtual-PC,2.0; GRID-Virtual-WS,2.0; GRID-Virtual-WS-Ext,2.0

NVIDIA GRID P4 vGPU Profiles (Link to P4 Data Sheet)

Name | Max Instances | FB Memory | Display Heads | Max X Res | Max Y Res | Frame Rate Limit | License
P4-1Q | 8 | 1024MiB | 4 | 4096 | 2160 | N/A | GRID-Virtual-WS,2.0; GRID-Virtual-WS-Ext,2.0
P4-2Q | 4 | 2048MiB | 4 | 4096 | 2160 | N/A | GRID-Virtual-WS,2.0; GRID-Virtual-WS-Ext,2.0
P4-4Q | 2 | 4096MiB | 4 | 4096 | 2160 | N/A | GRID-Virtual-WS,2.0; GRID-Virtual-WS-Ext,2.0
P4-8Q | 1 | 8192MiB | 4 | 4096 | 2160 | N/A | GRID-Virtual-WS,2.0; GRID-Virtual-WS-Ext,2.0
P4-1A | 8 | 1024MiB | 1 | 1280 | 1024 | N/A | GRID-Virtual-Apps,3.0
P4-2A | 4 | 2048MiB | 1 | 1280 | 1024 | N/A | GRID-Virtual-Apps,3.0
P4-4A | 2 | 4096MiB | 1 | 1280 | 1024 | N/A | GRID-Virtual-Apps,3.0
P4-8A | 1 | 8192MiB | 1 | 1280 | 1024 | N/A | GRID-Virtual-Apps,3.0
P4-1B | 8 | 1024MiB | 4 | 2560 | 1600 | N/A | GRID-Virtual-PC,2.0; GRID-Virtual-WS,2.0; GRID-Virtual-WS-Ext,2.0

As you can see NVIDIA GRID 5.0 and the Pascal GPUs provide a lot of power for your virtualized GPU needs. If you are interested in discovering more about GRID 5.0 be sure to visit the NVIDIA GRID site on NVIDIA.com. In my next blog I’ll cover some of the testing I’ve had a chance to do with the NVIDIA P4 GPU and GRID 5.0.

Permanent link to this article: https://www.wondernerd.net/changing-the-gpu-virtualization-game/

GTC17 Wrap Up Report


Time for my yearly wrap up from GTC 2017. In my post I want to share the good and not so good highlights from my trip to GTC17. These are provided to give constructive feedback about both what is done right and some of the rough areas attendees (me) experienced.

GTC is fast becoming my favorite conference to attend (it used to be VMworld). The reason for this is that the conference is run by nerds for nerds. That means for the most part it's nerd friendly. A good thing in this nerd's book.

An example of this nerdvana is they don’t have loud bands and music or some other form of noise blasting from every corner of the building. It is possible to have real and valuable conversations without having to yell. I’m pleased to say I still have my voice at the end of the week.

It's also nice that they keep the DJ in one place during the party on Wednesday evening. Which also brings me to my first set of things I didn't particularly care for. I really enjoy the Tech Museum, but I think we are outgrowing it quickly. It was a bit cramped this year, and this made it difficult to move around.

The basement is always a great place to hang out and meet new friends. But when you came up from the basement you were met by a big mess. There was a security guard at the escalators going up to the second floor only letting a few people go at a time. This created massive congestion in the only place to transition from one space to another.

Upstairs was also problematic: with part of the exhibit space closed down, space was restricted. I'm sure there was a reason for all of this; it was just a little frustrating as an attendee.

The great thing about the party was the food. The dessert forest did not disappoint; the desserts were all wonderful. The bartenders were also great and did a good job with the drinks.

Another area where GTC excels is their on-site meals. The meals are flavorful and it doesn't take forever to get them. It does have a bit of a cattle-pen feel, especially the first day when it was nothing but a long line of wait staff directing people one way or another. That was remedied by the second day. Kudos for that.

I will say I found it odd that for two days we had sandwiches (Monday and Wednesday). I think there are alternatives to repeating almost the same menu. The south hall was cool to be in; it was very spacious. The only problem was getting to it. You had to go down a fairly steep flight of stairs, and I was worried someone might fall going down them.

One of my friends with bad knees also had a problem with getting to and from the meals. There is not an easy way down to the south hall for those with disabilities. If you don’t feel good about going down or up steep steps you get a long walk to get your food and return to the conference.

I loved that beverages were out constantly. This made it easy to get something to drink. It would be nice if the cola choices were a bit broader than Sprite, Coke, and Diet Coke. The sweets wall was great again this year and many enjoyed it.

Inside the show floor, it was great having all the startups and innovators. I always learn so much talking with them and this year I only saw a single booth without anyone in it! It was great!

I do wish they would reconfigure the food/drink stations a bit in the exhibit hall. Their placement made moving down the aisles a little difficult. It also increased the possibility of wearing the food. I still love the drink ticket system. After having been at a few conferences where someone had a bit too much and needed a security escort out at the end of the night, this seems to take care of it.

One of the things I was miffed about was the conference t-shirts. Up until Wednesday evening, right before the exhibit hall closed, I didn't think we got a t-shirt this year. In fact I went to the store and bought a shirt, which it appears many others did as well (they were almost sold out of the exact same ones we got on Wednesday). It would have been nice if there were big signs, or even a note in the backpack, saying that shirts would be available following the keynote on Wednesday. It also would have saved me $10. I do think the shirts are pretty darn cool though, so it's not horrible having two of them. It reminds me a lot of a Dr. Who / Futurama crossover shirt I have.

Conference materials. Bag, Drink Tickets, I am AI t-shirt (separate from registration), 6 generation t-shirt (the one that came with registration), papers, and name badge.

Another thing that left me with a similar feeling to the shirts was the wristbands for the Wednesday party. All the guide said was that they would be in the exhibit hall on Wednesday evening. It didn't say where we could find them.

I would have liked to see them use the app's notification system to push out a notice to all attendees about where to pick up wristbands. Or it would be even cooler if they could put some of the NVIDIA technology to work. Last year they had a system that did facial recognition and enabled you to get a drink. Wouldn't it be cool if, when you registered, it snagged a picture of you and used recognition to let you into the party? I know there are problems with this, so it would require some thinking.

The sessions this year were outstanding. I have a few things I would change, but they are minor. I would love to see a couple rows of tables put in either the front or the back of the room. Many people at GTC are taking notes, or in my case using Twitter to take notes and share, and tables would make that so much easier than balancing a phone and tablet on your legs. It would also be great to mix up the sessions and bring more interaction. In my case, all the virtualization nerds hung out in room 231 the whole week and rarely ventured to other areas. At the same time, it's nice that all the sessions in your field are in the same general area. So it's a tough call.

A few people pointed this out to me: it's becoming imperative to add introductory courses and executive courses to GTC. AI/deep learning/machine learning are the next wave of business systems. Right now the conference addresses the “detailed how” of doing these things. But as this becomes mainstream, professionals will want to get their feet wet and bring it into their organizations. Right now there are very few sessions that speak to the simple how, or, even more important, speak to the business decision makers. Why should executives invest in these ideas and how do they relate to their bottom line?

I think this is an awesome opportunity for those looking to submit sessions for GTC in the coming years. Simply put: make it consumable by executives and those just getting started with GPUs, regardless of application.

I would also like to see something like the vBrownBag Tech Talks at the show. Give people 15 minutes to share what they are working on without the pressure of a full session.

Registration was simple and easy this year as in years past. The bags they provide are again of great quality and should last a long time. I saw several bags this week from previous GTCs (2014, 15, & 16) that were all still holding up well and looked like they were being used. I don’t know where the GTC events team gets them but other conferences should take note.

As a speaker I got to experience some additional fun. This is a great opportunity for anyone who wants to share their experiences. The speaker resources are phenomenal and they work with you on so many things. The AV teams are also fantastic, making sure everything is ready for you to speak, even with only a few minutes between sessions.

GTC17 Speakers Gift

The speaker's gift this year was very cool: a pen slide controller with a laser pointer. I am looking forward to using it in presentations to come.

The app had a few more problems this year than last. It seems like on my S7 Edge, every time an update was pushed out I would have to uninstall and reinstall the app.

I wasn't able to attend a dinner with strangers this year, but I'm glad it was back again. I know these dinners generate some of the best conversations of the week.

The posters were exceptional this year. The students who worked on them did a great job in developing their content and have given me so much to think about. Here is an example of one of the posters presented this year.

Social media at GTC is pretty darn good. I’d love to work with the social media team to help improve their overall impact. There are some very simple things that could be done to help increase the conversations on social channels.

Overall GTC17 was an outstanding event, and all of the teams who worked so hard to put it on should be commended for their hard work. I think in the next couple of years this will be the next big conference to attend in the IT industry. NVIDIA is well on its way to making it the best conference at an insane scale.

All of the above, of course, is my personal opinion and perception. I provided it to help us all improve. If any of the GTC event teams would like clarification or more details on these please let me know. If you are a general reader and would like to add your two cents to the conversation please add it in the comments below.

Thank you to everyone who helped make GTC17 so very memorable.

Permanent link to this article: https://www.wondernerd.net/gtc17-wrap-up-report/

GTC17 Session S7349 Slides & Links


This post is about GTC Session S7349, “Getting Started with GPUs for Linux Virtual Desktops on VMware Horizon.” Attached to the post is the slide deck presented in the session and below a list of links referenced in the session.

Trey and I appreciate the attendance and hope the information was helpful in getting you on your way with deploying Linux Desktops with GPUs. We also hope these slides help those who weren’t able to attend the session as well.

Please be sure to post in the comments if you have questions about getting your Linux VDI environment working. Building the lab environment that this session is based on took me eight 12-hour days to configure and troubleshoot. Once I knew where the problems were, I was able to repeat the process quicker each time I did it.

GTC S7349 Slide Cover

 

Links from the session:

Permanent link to this article: https://www.wondernerd.net/gtc17-session-s7349-slides-links/

Long List of Links for Configuring Linux Desktop VMs


This is a compilation of links that helped me prepare for GTC Session S7349. They aren't in any particular order, but they may be helpful as you configure your Linux virtual desktop VMs with GPU capabilities.

 

 

Permanent link to this article: https://www.wondernerd.net/long-list-of-links-for-configuring-linux-desktop-vms/