Hands-on Lab with the NVIDIA BlueField-2 DPU and VMware vSphere Demo

Using DPUs

Many VMware administrators are familiar with one of the biggest basic networking challenges in a virtualized environment. Traditionally, one can use the vmxnet3 driver for networking and get all of the benefits of having the hypervisor manage the network stack. That provides the ability to do things like live migrations, but at the cost of performance. For performance-oriented networking, many use pass-through NICs that provide performance but present challenges to migration. With UPT (Uniform Pass Through), we get the ability to do migrations at performance levels close to pass-through. The challenge with UPT is that we need the hardware and software stack to support it, but in this BlueField-2 DPU environment, we finally have that.

The demo environment we are using today is part of NVIDIA LaunchPad. If you are looking at deploying a DPU-based vSphere solution, you can request access to the environment and go through this demo. There are limited spaces due to how much hardware is tied up in each environment. Still, the hope was that by STH doing this, we could get folks to see what the solution looks like.

VMware vSphere Client With NVIDIA BlueField-2 DPU And ESXi 8.0 Host

As you can see, the nodes provided are Dell PowerEdge R750s with Intel Xeon Gold 6354 CPUs, and most importantly, under the DPU category of hardware, we have the NVIDIA BlueField-2. As a quick note, the PowerEdge R760 review on STH will be live fairly soon.

One of the first things we need to do is to create a Distributed Switch. There are fairly standard steps like adding our three hosts. Ours is a bit different since we are configuring the switch with the BlueField DPUs.

VMware vSphere Client DSwitch NSX Offload Assign Uplink

The Dell PowerEdge R750s have other NICs. Since we are making a DPU-enabled virtual switch, the other NICs are listed as incompatible.

VMware vSphere Client DSwitch NSX Offload Incompatible Adapters

The BlueField-2 DPUs are listed as compatible.

VMware vSphere Client DSwitch NSX Offload Compatible Adapters NVIDIA BlueField-2 DPU

As we can see, the switch now has BlueField capabilities.

VMware vSphere Client DSwitch With NVIDIA BlueField DPU Capability Configured

Something that NVIDIA does not have in its demo, but that is a big differentiation point, is that VMware is building a BlueField-specific integration into its environment to make this all work. While VMware initially announced Intel DPU support, and there are many other DPU vendors out there, VMware only has integrations for NVIDIA and AMD DPUs at this point. DPUs in many ways make the NSX vision of managing networking off of the network switch more of a reality.

VMware NSX Manager NVIDIA BlueField-2 DPU Overlay

Here we can see that we have an NSX overlay that is being powered by the BlueField-2 DPUs.

VMware NSX Manager NVIDIA BlueField-2 DPU TNP Monterey Enhanced Datapath Standard

One important aspect is that we need each node to have a BlueField-2 DPU for this to work. If a node does not, then we do not have the hardware capabilities to make the overlay work with the underlying hardware. If you are planning to deploy DPUs later in 2023 or 2024, but you want the nodes you are deploying in early 2023 to participate, then those nodes need to have BlueField-2 cards today, even if they are just being used as ConnectX-6 networking until the new capabilities come into use.

VMware NSX Manager NVIDIA BlueField-2 DPU Host Transport Nodes Hosts

With the NSX overlay complete, we can go back to our vSphere environment.

VMware vSphere Client With NVIDIA BlueField-2 DPU And 3x ESXi 8.0 Hosts Dashboard

Now we can create a VM that we are going to call “STH Test”. Here we can see the first network adapter does not have the option, but Network adapter 2 has the option to Use UPT Support. We also will have this adapter connected to the NSX overlay.

VMware vSphere Client With NVIDIA BlueField-2 DPU And ESXi 8.0 Host Create VM Change To Monterey Overlay And Enable UPT Support All VM Memory Will Be Reserved

Looking at the VM we just created, we will quickly notice that UPT is not activated.

VMware vSphere Client With NVIDIA BlueField-2 DPU And ESXi 8.0 Host New VM UPT Not Activated

Going into the Ubuntu instance, we have an older driver, 1.6.0.0-k-NAPI.
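The older vmxnet3 driver noted in the Ubuntu guest can be confirmed from inside the VM itself. A minimal sketch, with assumptions labeled: `ethtool -i` is a standard Linux command that reports the driver behind an interface, but the interface name `ens192` and the captured output below are illustrative rather than taken from this exact lab.

```shell
# Inside the Ubuntu guest, ethtool -i shows which driver backs a vNIC.
# On a live guest you would run something like:
#
#   ethtool -i ens192
#
# Captured example output (interface name and capture are illustrative):
sample_output="driver: vmxnet3
version: 1.6.0.0-k-NAPI"

# Pull out the driver version line to flag the older in-kernel build.
version=$(printf '%s\n' "$sample_output" | awk '/^version:/ {print $2}')
echo "vmxnet3 driver version: $version"
```

The same one-liner works against real `ethtool -i` output; an up-to-date UPT-capable guest would report a newer driver build here.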
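Because every node must have a BlueField-2 DPU for the overlay to work, it is worth verifying the card is visible on each host before building the DPU-backed switch. A hedged sketch: `esxcli hardware pci list` is a real ESXi command, but the device-name string below is an illustrative capture, not output from this lab.

```shell
# On a live ESXi host you would check for the DPU with something like:
#
#   esxcli hardware pci list | grep -i -A2 bluefield
#
# Here the same check runs against a captured line (device string is
# an example of how a BlueField-2 typically identifies itself):
sample_pci="Device Name: Mellanox Technologies MT42822 BlueField-2 integrated ConnectX-6 Dx network controller"

if printf '%s\n' "$sample_pci" | grep -qi 'bluefield-2'; then
  echo "DPU present: yes"
else
  echo "DPU present: no"
fi
```

Running this style of check across all three hosts up front avoids discovering a missing DPU only when NSX reports the transport node as incompatible.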