Layer 2 is no longer the default or Cisco-recommended mode when deploying the Nexus 1000v switch. You can find more background information and details on how to do a Layer 3 install in my other post, vSphere vSwitches and N1KV L3 Mode.
Management VLAN: Is not used to exchange data between the VSM and VEM; it is used to establish and maintain the connection between the VSM and vCenter, and for admin configuration (CLI). It needs to be on a routable VLAN that can communicate with the vCenter Server.
Control VLAN: Is the control plane. It provides the HA communication between VSMs and serves as the heartbeat/configuration channel to the VEMs (pushes config to the VEMs). If the VSM is not detecting an ESXi host's VEM, it is usually a Layer 1/2 issue blocking communication on this VLAN specifically.
Packet VLAN: Is used for any data-plane traffic that needs to be processed by the control plane. This typically means CDP, LACP and IGMP traffic transiting a VEM.
To be able to communicate, all VSMs and VEMs need to be on the same Control and Packet VLANs, hence being L2-adjacent. You can use the same VLAN for both, but in production they should ideally be separate VLANs.
System VLANs always remain in a forwarding state, so they forward traffic even before a VEM is programmed by the VSM. This is why certain system profiles require them (Control/Packet in L2 mode, or the VMkernel VLAN in L3 mode): these VLANs need to be forwarding IN ORDER for the VEM to talk to the VSM.
The system vlan command must be added on the control, packet and uplink port-profiles.
1. Deploy the OVA downloaded from Cisco. I used Nexus1000v.5.2.1.SV3.2.1.
Choose a name, location, Nexus 1000V Installer configuration, ESXi host and datastore. Next choose networks for control, packet (irrelevant if in L3 mode) and management.
Specify the following properties for the VSM: VSM domain ID (can be anything you want; for HA use the same ID on primary and secondary), admin password, management IP address, management IP subnet mask and management IP gateway. All of these can be changed after the install.
2. Load the VSM plug-in extension (XML) into vCenter.
This holds the ext-key and certificate that allow the VSM to talk to vCenter and link it to the N1KV.
Go to the IP of the VSM in a browser and download cisco_nexus_1000v_extension.xml. Log into vCenter using the vSphere client; in the top menu go to Plug-ins >> Manage Plug-ins, right-click under New Plug-in, choose the downloaded XML file and click Register Plug-in.
The extension key in the plug-in should be the same as that on the VSM.
VMware Update Manager (VUM) can automatically select the correct VEM software to be installed when the host is added to the DVS, but the Linux-based vCSA cannot run VUM, so the VEM needs to be installed manually from the CLI.
3. Download the VEM software from the VSM (http://vsm_ip) and copy it to the ESXi host.
4. Make sure the VEM is not already installed; if not, install it:
esxcli software vib install -v /tmp/[name_cisco-vem].vib
If you get the following error, move a copy of the file to /var/log/vmware and run the command again.
('Cisco_bootbank_cisco-vem-v300-esx_5.2.1.3.2.1.0-6.0.1.vib', '', "[Errno 4] IOError: urlopen error [Errno 2] No such file or directory: '/var/log/vmware/Cisco_bootbank_cisco-vem-v300-esx_5.2.1.3.2.1.0-6.0.1.vib'")
url = Cisco_bootbank_cisco-vem-v300-esx_5.2.1.3.2.1.0-6.0.1.vib
Please refer to the log file for more details.
Or if you get the error below, add -f to the end of the command to force it.
VIB Realtek_bootbank_r8152_2.06.0-4 violates extensibility rule checks: [u'(line 30: col 0) Element vib failed to validate content']
Please refer to the log file for more details.
vem status -v Check the status
5. On the 1000v change the Server Virtualization Switch (SVS) domain to L2 mode and set the VLANs to be used by the control and packet interfaces.
domain id 1 Not needed as already configured
control vlan 999
packet vlan 999
svs mode L2
show svs domain
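Taken together, these commands are entered under svs-domain configuration mode. A minimal sketch, using the domain ID and VLAN 999 from the examples above:

```
conf t
 svs-domain
  domain id 1        ! not needed if already set during the OVA deploy
  control vlan 999
  packet vlan 999
  svs mode L2
 end
show svs domain      ! verify the domain settings
```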
The hostname is what the DVS is called in vCenter; if you change this it is automatically updated within vCenter.
6. Configure the connection between the VSM and vCenter.
Once connected it will create the N1KV within Networking on the vCenter with a name that matches the VSM hostname.
remote ip address 10.10.10.7 vCenter IP address
protocol vmware-vim
vmware dvs datacenter-name DC1 Must be an exact match of the name in vCenter
connect
show svs connections Show settings and state of the connection
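As a sketch, the commands above sit under an svs connection definition (the connection name "vcenter" is just an example):

```
conf t
 svs connection vcenter
  protocol vmware-vim
  remote ip address 10.10.10.7       ! vCenter IP address
  vmware dvs datacenter-name DC1     ! exact datacenter name in vCenter
  connect
 end
show svs connections                 ! settings and state of the connection
```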
7. Create VLANs and vethernet port-profiles for the control and packet interface. These must also have the system vlan command or the VEM module won't show on the VSM when you add the ESXi host.
vlan 999
name control_packet
port-profile type vethernet control_packet
vmware port-group
switchport mode access
switchport access vlan 999
no shutdown
state enabled
system vlan 999 The control and packet VLAN
8. Create the uplink port-profile type Ethernet.
This is a trunk containing all of the VLANs you want to trunk to the ESXi hosts (the L2 VLANs must already be created). It will be used to link the physical uplinks to the N1KV.
switchport mode trunk
switchport trunk allowed vlan 10,20,30
vmware port-group [name] Add a name for it to show as this name in vCenter
no shutdown
state enabled If disabled it is removed from vCenter
system vlan 999 The control and packet VLANs
show run port-profile sys-uplink
show port-profile status
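Putting this step together as a sketch (the profile name sys-uplink is an example; note that the control/packet VLAN 999 also needs to be in the trunk's allowed list for the system vlan to do its job):

```
conf t
 vlan 10,20,30
 port-profile type ethernet sys-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30,999
  system vlan 999
  no shutdown
  state enabled
 end
show run port-profile sys-uplink
show port-profile status
```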
9. Assign ESXi hosts and uplinks to the N1KV. This must be done with the vSphere software client due to a bug with the web client.
Right-click the N1KV, Add and manage hosts >> Add hosts. Select the ESXi host and physical adapters and choose the relevant port-profile. It will only show port-profiles of type Ethernet.
For the VEM module to show up on the VSM you must have L2 adjacency between the VEM and VSM. If they are on different ESXi hosts, ensure that the control and packet VLANs are configured properly on all devices (switches) in between the VEM and VSM.
Check on the ESX host:
vem status -v Shows the ports on the N1KV, so uplink, control & packet
vemcmd show port vlans Shows ports and vlans used (T for trunk, A for Access)
vemcmd show trunk See VLANs allowed on the trunk to the VEM
To check on the VSM:
show module Check whether the VEM module is missing
show svs neighbors See all VEMs and VSMs and the primary MAC
10. Create HA pair (optional).
For HA, first change the role on the original N1KV from standalone to primary; the default is standalone (if you ever change it to secondary it needs a reboot).
Deploy the secondary device OVA in the same way, but under Select configuration choose Nexus 1000v Secondary, and under settings you only need to configure the domain ID and password. Once deployed you can only log into the secondary from the console and can't see any of the config.
show module
reload module [1 | 2] Reload active or standby supervisor (VSM)
system switchover Force switchover, will drop a few pings
show running-config diff Show difference from running to start config
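The role change itself is a single command on the original VSM, sketched below (remember to save the config):

```
system redundancy role primary      ! was standalone by default
copy running-config startup-config
show system redundancy status       ! verify once the secondary has joined
```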
Troubleshoot
On the ESXi host make sure that it can talk to the VSM; the commands below will advise about the connection and config.
You can get the MAC address by looking in vCenter for the MAC of the first NIC on the VSM, or by running this cmd.
If you don't see the VEM on the VSM, try stopping and starting the VEM agent on the host; this is non-disruptive.
vem start
vem status Ensure the VEM is loaded and running