Copyright (c) 2016 - 2024 Intel Corporation
This release includes the native icen VMware ESXi driver for the Intel(R)
Ethernet Controller E810-C and E810-XXV families.
Driver version: 1.15.2.0
Supported ESXi release: 8.0
===================================================================================
Contents:
---------
- Important Notes
- Supported Hardware
- Supported Features
- New Features
- New Hardware
- Bug Fixes
- Known Issues and Workarounds
- Command Line Parameters
===================================================================================
Important Notes:
----------------
- Firmware Recovery Mode
A device will enter Firmware Recovery mode if it detects a problem that requires
the firmware to be reprogrammed. In this mode, the device will not pass traffic
or allow any configuration. The NVMUpdate tool can be used to recover the
device's firmware.
- Firmware Rollback Mode
When a device is in firmware rollback mode, it might have reduced functionality.
A device usually enters firmware rollback mode when a firmware update does not
complete correctly. Rebooting or power cycling the system might allow the device
to use the previous firmware image. Reapply the firmware update to regain full
device functionality, using the appropriate NVM Update Package to update the
device's firmware. After updating device firmware, an A/C power cycle is
recommended.
- Dynamic Device Personalization (DDP)
Adapters based on the Intel(R) Ethernet Controller 800 Series require a Dynamic
Device Personalization (DDP) package to enable advanced features (such as
dynamic tunneling). The ESXi driver embeds a DDP package in the driver itself;
external DDP packages can be uploaded with the intnetcli tool.
- Safe Mode
Adapters based on the Intel(R) Ethernet Controller 800 Series require a Dynamic
Device Personalization (DDP) package to enable advanced and performance
features. A device might go into Safe Mode if the driver detects a missing,
incompatible, or corrupted DDP package. Safe Mode supports basic traffic and
minimal functionality, such as updating the firmware.
- SR-IOV Virtual Function (VF) Creation:
VMware vSphere Hypervisor (ESXi) 7.0 ignores VF creation via the "max_vfs"
module parameter if the VFs are created using the VMware vSphere Hypervisor
(ESXi) 7.0 WebGUI. See the ESXi 7.0 release notes:
https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-vcenter-server-70-release-notes.html
For VMware vSphere Hypervisor (ESXi) 7.0, please ensure all Virtual Functions
(VFs) are powered off before changing the number of VFs via the GUI.
- Intel(R) Ethernet 800 Series Network Adapters and Controllers support a
maximum of 2048 hardware queues and 2048 MSI-X interrupts per device. These
resources are allocated across various features, such as NetQueue (VMDQ),
SR-IOV, and DCB, when enabled by the PF driver. In case of resource contention
among these features, the PF driver might have to recalculate the number of
supported virtual functions (VFs) for the requested configuration. The driver
will log this recalculated configuration in the vmkernel log.
Below is an example that illustrates driver behavior on a single port Intel(R)
Ethernet 800 Series Network Adapter:
In this scenario, VMDQ is enabled and is requesting 16 queues. The LLDP engine
in the firmware is disabled. 256 VFs with 16 queues per VF are requested. Below
is the ESXi console command:
esxcli system module parameters set -m icen -p "VMDQ=16, LLDP=0, max_vfs=256, NumQPsPerVF=16"
The ESXi host must be rebooted for the module parameters to take effect. Upon
successful reboot, the icen driver will load with 16 queues allocated for the
NetQueue feature. Since the LLDP engine is disabled in the firmware, SW DCBx
will be active and will use additional transmit queues to support various
traffic classes. SR-IOV will be enabled with 16 queues and the required MSI-X
interrupts per VF. The total number of VFs that can be enabled is directly
limited by the hardware limits mentioned above. In this case, the driver will
reduce the number of VFs supported and will print an informational message in
the vmkernel log, for example: "119 VFs each with 16 queue pairs enabled due to
resource limitation."
Additional dependencies:
The above example is for a single port Intel(R) Ethernet 800 Series Network
Adapter. The number of VFs supported per port may vary depending on the port
count of the adapter. In addition to the Intel(R) Ethernet 800 Series Network
Adapter resource limitations mentioned above, supporting the desired
configuration also depends on the MSI-X interrupts and CPU cores available on
the host server.
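The recalculation in this example comes down to simple integer arithmetic, which
the sketch below illustrates in POSIX shell. Note that the 144-queue PF overhead
used here is an assumption back-computed from the logged message
(2048 - 119 x 16 = 144), not a figure published by the driver; the real overhead
depends on the enabled features.

```shell
#!/bin/sh
# Illustrative sketch of the VF recalculation described above (not a driver tool).
TOTAL_QUEUES=2048       # device-wide hardware queue limit for 800 Series
# Queues consumed by the PF itself: NetQueue (VMDQ=16), SW DCBx transmit queues
# and other PF overhead. 144 is an ASSUMPTION inferred from the example log.
PF_OVERHEAD=144
QUEUES_PER_VF=16        # NumQPsPerVF=16
REQUESTED_VFS=256       # max_vfs=256

MAX_VFS=$(( (TOTAL_QUEUES - PF_OVERHEAD) / QUEUES_PER_VF ))
if [ "$REQUESTED_VFS" -gt "$MAX_VFS" ]; then
    # Mirrors the informational vmkernel log message quoted above.
    echo "$MAX_VFS VFs each with $QUEUES_PER_VF queue pairs enabled due to resource limitation."
fi
```

With the assumed overhead, the sketch reproduces the 119-VF figure from the
example log message.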
- Validation Configuration Maximums
Please consult the Validation Configuration Maximums section of the Intel(R)
Ethernet Controller E810 Feature Support Matrix document for details of the
maximum values validated for features such as SR-IOV, VMDq, etc. This document
is available at intel.com.
- Trusted Virtual Function
Setting a Virtual Function (VF) to be trusted using the Intel extended
esxcli tool (intnetcli) allows the VF to request unicast/multicast
promiscuous mode. Additionally, a trusted mode VF can request more MAC
addresses and VLANs, subject to hardware limitations. When using intnetcli,
the VF must be set to the desired mode again after every VM or host reboot,
since the ESXi kernel may assign a different VF to the VM after reboot. All
VFs can be set as trusted persistently across VM or host reboots/power
cycles with the 'trust_all_vfs' module parameter.
To enable trusted mode for all virtual functions, use:
esxcfg-module -s trust_all_vfs=1 icen, or
esxcli system module parameters set -a -m icen -p trust_all_vfs=1
To disable trusted mode for all virtual functions (the default setting), use:
esxcfg-module -s trust_all_vfs=0 icen, or
esxcli system module parameters set -a -m icen -p trust_all_vfs=0
To get trusted mode settings use:
esxcli intnet sriovnic vf get -v <vfId> -n <vmnicX>
e.g., esxcli intnet sriovnic vf get -v 0 -n vmnic2
To enable/disable trusted mode use:
esxcli intnet sriovnic vf set -v <vfId> -n <vmnicX> -s <0|1> -t <0|1>
e.g., esxcli intnet sriovnic vf set -v 0 -n vmnic2 -s 0 -t 1
For more detailed information please refer to Intnetcli Release Notes.
Trusted mode is needed to:
- modify the MAC address on a VF (an untrusted VF can only add other MAC
filters; it cannot change the default one),
- add more than 8 VLAN filters (an untrusted VF can only add up to 8 VLAN
filters),
- add more than 16 other MAC addresses (an untrusted VF can only add up to 16
MAC filters),
- turn off MAC Anti-Spoofing,
- use promiscuous mode.
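The untrusted-VF limits above can be summarized in a small shell sketch. The
limits (8 VLAN filters, 16 additional MAC filters) are taken from this section;
the helper function itself is illustrative and is not part of any Intel tool:

```shell
#!/bin/sh
# Illustrative check of the untrusted-VF filter limits listed above.
# This script is NOT an Intel tool; the limits come from this document.
UNTRUSTED_MAX_VLAN_FILTERS=8
UNTRUSTED_MAX_MAC_FILTERS=16

needs_trusted_mode() {
    # $1 = requested VLAN filters, $2 = requested additional MAC filters
    if [ "$1" -gt "$UNTRUSTED_MAX_VLAN_FILTERS" ] || \
       [ "$2" -gt "$UNTRUSTED_MAX_MAC_FILTERS" ]; then
        echo "trusted mode required"
    else
        echo "untrusted VF is sufficient"
    fi
}

needs_trusted_mode 4 10    # prints "untrusted VF is sufficient"
needs_trusted_mode 12 10   # prints "trusted mode required"
```

If a configuration falls into the "trusted mode required" case, set the VF
trusted with the intnetcli commands shown above (or use 'trust_all_vfs' for a
persistent setting).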
Supported Hardware:
--------------------
- Intel(R) Ethernet Controllers E810-CAM1
- Intel(R) Ethernet Controllers E810-CAM2
- Intel(R) Ethernet Controllers E810-XXVAM2
- Intel(R) Ethernet Network Adapter E810-XXV-4
- Intel(R) Ethernet Network Adapter E810-C-Q2T
- Intel(R) Ethernet Network Adapter E810-XXV-4T
- Intel(R) Ethernet Network Adapter E810-C-Q1
- Intel(R) Ethernet Network Adapter E810-C-Q1 for OCP 3.0
- Intel(R) Ethernet Network Adapter E810-L-Q2 for OCP 3.0
- Intel(R) Ethernet Network Adapter E810-XXV-2 for OCP 3.0
- Intel(R) Ethernet Network Adapter E810-XXV-4 for OCP 3.0
- Intel(R) Ethernet 100G 2P E810-C-stg Adapter
- Intel(R) Ethernet 100G 2P E810-C-st Adapter
- Intel(R) Ethernet Connection E823-C
- Intel(R) Ethernet Connection E823-L
- Intel(R) Ethernet Connection E822-C
- Intel(R) Ethernet Connection E822-L
Native Mode Supported Features:
-------------------------------
- Rx, Tx, TSO Checksum Offload
- Jumbo Frame (9k max)
- Netqueue (VMDQ)
- SR-IOV
- VxLAN Offload and RxFilter
- Geneve Offload and RxFilter
- Hardware VLAN filtering
- Rx Hardware VLAN stripping
- Tx Hardware VLAN inserting
- Interrupt Moderation
- Link Auto-negotiation
- Link Flow Control
- Admin Link State
- Get uplink stats
- Firmware Recovery Mode
- Firmware Rollback Mode
- VLAN Tag Stripping Control for VF drivers
- Malicious Driver Detection (MDD)
- Forward Error Correction (FEC)
- Enable/Disable Firmware LLDP Engine
- Dump Optical Module Information
- Wake On LAN (WOL)
- Dynamic Device Personalization (DDP)
- Safe Mode
- Trusted Virtual Function
- Selectable Scheduler Topology
- Runtime DDP package loading
- Firmware Debug Dump
- Data Center Bridging (DCB)
- NetQueue Receive Side Scaling (RSS)
- Default Queue Receive Side Scaling (DRSS)
- Dump all FW Debug Clusters
- Floating VEB
- Precision Time Protocol (PTP)
- QinQ for VF
- SyncE for E810 devices
- GPS/GNSS module support
- SyncE for CVL LOM devices
ENS Polling and Interrupt Mode Supported Features:
--------------------------------------------------
- Tx/Rx burst
- TCP Checksum Offload
- TSO (IPv4 and IPv6)
- Jumbo Frame (9k max)
- Netqueue (VMDq)
- Geneve Offload and RxFilter
- Rx Hardware VLAN stripping
- Tx Hardware VLAN inserting
- Link Auto-negotiation
- Firmware Recovery Mode
- Firmware Rollback Mode
- Get/Set link state (Force PHY power up/down)
- Get uplink stats
- Dump Optical Module Information
- Dynamic Device Personalization (DDP)
- Safe Mode
- Non-Queue Pair support
- Link Layer Discovery Protocol (LLDP)
- Runtime DDP package loading
- VxLAN Offload and RxFilter
- ENS Interrupt Mode
- SR-IOV
- Forward Error Correction (FEC)
- Default Queue Receive Side Scaling (DRSS)
- QinQ support for VF
- NetQ RSS
New Features:
-------------
- None
New Hardware Supported:
-----------------------
- None
Bug Fixes / Enhancements:
-------------------------
- Updated Health Status message.
- Fixed secondary queues usage for fragmented IP packets.
- Added per-queue stats for Native mode.
- Fixed broadcast packet forwarding rules to avoid an ARP loopback issue for
VMDQ traffic.
Known Issues and Workarounds:
-----------------------------
- Changing the NUMA Node index in an NSX-T environment can cause traffic loss
for adapters associated with the ENS mode driver.
Workaround: Roll back the NUMA Node index configuration, or upgrade to the
latest ESXi 8.0 and NSX-T 4.0 versions.
- The VMware ESXi 7.0 operating system might experience a kernel panic (also
known as PSOD) during the NVMUpdate process. The issue occurs if the installed
RDMA driver is older than 1.3.4.23.
Workaround: Update the RDMA driver before the NVMUpdate process, or disable
RDMA in the icen module parameters and reboot the platform.
- A VLAN tag is not inserted automatically when DCB PFC is enabled on an interface.
Workaround: Since PFC for icen is VLAN-based, create a VLAN tag for DCB to be
fully operational.
- On ESXi systems, iDRAC may not correctly report the driver version.
Workaround: Upgrade iDRAC to version 4.20.20.20 or newer.
- E810 Adapter might not achieve line rate using micro-benchmarking tools such as
iperf, netperf, etc.
Workaround: None
- The ESXi host might not shut down when receiving heavy traffic.
Workaround: It is strongly recommended to stop all network traffic and place
the ESXi host in Maintenance mode before restarting or shutting down the
server.
- VF MTU size can be changed from within the VM even if a host administrator
doesn't allow VM MTU changes.
Workaround: None
- Rebooting a Red Hat 8.2 Linux VF VM multiple times might cause traffic to stop on
that VF.
Workaround: None
- If SR-IOV is enabled and VMDQ loopback is disabled from intnetcli, VF-VMXNET3
traffic is disallowed. To allow VF-VMXNET3 traffic, VMDQ loopback must be
enabled.
Workaround: None
- There is a very small chance (<1%) that, under heavy traffic, the DDP package
cannot be loaded again after a DDP rollback operation.
Workaround: Repeat the DDP rollback operation and try to load the DDP package
again.
- The 'Receive Length Error' metric is no longer reported to the OS. A hardware
issue on CVL was causing erroneous incrementation of the metric.
Workaround: The metric can still be accessed in the NIC private stats.
- This driver has an RDMA interface compatible with irdman-1.5.0.0 or later.
Using an older irdman driver can lead to an interface version mismatch warning.
Workaround: None
- If NetQ RSS is enabled in ENS mode, a PF reset can prevent the NetQ RSS
engine from starting.
Workaround: None
Command Line Parameters:
------------------------
Ethtool is not supported by the native driver.
Please use esxcli or esxcfg-* commands to set or get driver information, for
example:
- Get the driver supported module parameters
esxcli system module parameters list -m icen
- Set a driver module parameter (clearing other parameter settings)
esxcli system module parameters set -m icen -p VMDQ=4
- Set a driver module parameter (other parameter settings left unchanged)
esxcli system module parameters set -m icen -a -p VMDQ=4
- Get the driver info
esxcli network nic get -n vmnicX
- Get uplink stats
esxcli network nic stats get -n vmnicX
The extended esxcli tool allows users to set device specific configurations, for
example:
- Dump Optical Module Information
esxcli intnet module read -n vmnicX
- Disable VMDQ VSIs loopback
esxcli intnet misc vmdqlb set -l 0 -n vmnicX
- Enable VMDQ VSIs loopback
esxcli intnet misc vmdqlb set -l 1 -n vmnicX
- Get VMDQ VSIs loopback status
esxcli intnet misc vmdqlb get -n vmnicX
Features Supported in the Intnetcli Tool:
-----------------------------------------
- Enable/Disable link privileges to operate on the link administratively
- Dump Optical Module Information
- FEC Configuration
- Enable/Disable Firmware LLDP Engine
- RSS Configuration
- Upload external DDP package
- Configure SRIOV
- Setup outer VLAN TPID for QinQ
- Get GNSS module information
- Get NIC temperature sensor readings
- Floating VEB
- Enable/Disable VMDQ VSIs loopback
The tool is available at the following link:
https://downloadcenter.intel.com/download/28479
===================================================================================
Previously Released Versions:
-----------------------------
- Driver Version 1.14.2.0
Hardware Supported: Intel(R) Ethernet Controllers E810-CAM1, E810-CAM2,
E810-XXVAM2, E810-C-Q1, E810-C-Q1 for OCP 3.0, E810-L-Q2 for OCP 3.0,
E822-C, E822-L, E823-C, E823-L
Supported ESXi releases: 7.0, 7.0U2, 8.0
New Features:
- Floating VEB
- Disable VMDQ loopback by default and add an option to intnetcli to change
this setting
- Multivib component with icen and irdman drivers
- Forward Error Correction (FEC) for ENS mode
- NetQ RSS for ENS mode
- Display information whether GNSS module exists in the NIC
- SyncE for CVL LOM devices
- Added module parameters pause_rx and pause_tx for Link Flow Control and
enable it by default
New Hardware Supported:
- None
Bug Fixes:
- Added a warning message when RDMA critical error occurs.
- Fixed an issue where, when a NIC port was in auto-negotiation mode and the
link partner applied many DOWN/UP sequences, the port randomly remained in
the DOWN state.
- Resolved a race condition during VFLR between OS calls to quiesce a VF and
PF driver attempts to set the VF active state. This led to various VF
configuration issues, such as adding RSS, deleting RSS, or adding a MAC
address.
- Resolved a VLAN filtering configuration issue where the PF driver configured
double VLAN when the VF driver supported only single VLAN.
- Fixed RDMA Queue Pairs and Memory Regions being visible after bringing the
NIC down, by optimizing QOS routines to run only when necessary.
- Resolved PTP TX Timestamp timeout issues related to scheduler policy.
- Fixed sporadic error when setting VXLAN/Geneve RX Queue filters.
- Driver Version 1.13.2.0
Hardware Supported: Intel(R) Ethernet Controllers E810-CAM1, E810-CAM2,
E810-XXVAM2, E810-C-Q1, E810-C-Q1 for OCP 3.0, E810-L-Q2 for OCP 3.0,
E822-C, E822-L, E823-C, E823-L
Supported ESXi releases: 7.0, 7.0U2, 8.0
New Features:
- Driver initialization delay until PHY FW is loaded
- QinQ support for VF in ENS mode
- Management Transaction cluster for FW Cluster Dump
- Dump all FW Debug Clusters
New Hardware Supported:
- None
Bug Fixes:
- Fixed unloading the driver after removing NSX configuration with overlay.
- Fixed Debug Dump file format to be loaded properly by DPDT tool.
- Added dmesg logs on enabling/disabling LLDP on the NIC.
- Fixed displaying FEC status in driver logs to be aligned with 'esxcli intnet
fec list' command.
- Fixed loading last SMA pins settings after rebooting the host.
- Driver Version 1.12.9.0
Hardware Supported: Intel(R) Ethernet Controllers E810-CAM1, E810-CAM2,
E810-XXVAM2, E810-C-Q1, E810-C-Q1 for OCP 3.0, E810-L-Q2 for OCP 3.0,
E822-C, E822-L, E823-C, E823-L
Supported ESXi releases: 7.0, 7.0U2, 8.0
New Features:
- None
New Hardware Supported:
- None
Bug Fixes:
- Fixed getting PTP Tx timestamp from Hardware.
- Driver Version 1.12.6.0
Hardware Supported: Intel(R) Ethernet Controllers E810-CAM1, E810-CAM2,
E810-XXVAM2, E810-C-Q1, E810-C-Q1 for OCP 3.0, E810-L-Q2 for OCP 3.0
Supported ESXi releases: 7.0U2, 8.0
New Features:
- None
New Hardware Supported:
- None
Bug Fixes:
- Fixed an issue that caused the network adapter to send an unexpectedly high
number of PFC pause frames when the ESXi host became unresponsive.
- Driver Version 1.12.5.0
Hardware Supported: Intel(R) Ethernet Controllers E810-CAM1, E810-CAM2,
E810-XXVAM2, E810-C-Q1, E810-C-Q1 for OCP 3.0, E810-L-Q2 for OCP 3.0
Supported ESXi releases: 7.0, 7.0U2, 8.0
New Features:
- GPS/GNSS module support
New Hardware Supported:
- None
Bug Fixes:
- Allow the trusted VF to receive all the packets from different VFs in
promiscuous mode.
- Encapsulated packets with inner packet padding were reported as packets with
an incorrect checksum. The driver collected these statistics and passed them
to the networking stack, which resulted in a high pNIC error alarm raised by
the OS. When receiving an indication of a checksum error, the driver reports
to the networking stack that the checksum was not offloaded, and the
networking stack performs the checksum validation. Therefore, there is no
effect on packet reception other than a slight increase in CPU usage. The
driver no longer reports checksum error counts to the OS, but they can still
be observed in the driver private statistics.
- Fixed an issue where Firmware logs were printed to system log even after
disabling the Firmware log level.
- Reduced turnaround time to provide the PTP TX timestamp value.
- Under high TX traffic, packet transmission could hang because the TX queues
were not properly restarted, leading to queue resource starvation and a PF
reset. Fixed the issue by properly restarting the queues.
- Fixed an issue where ESXi Core Dump Collector (netdump) failed to receive
the generated core dump.
- Fixed an issue where adding multiple Geneve filter configurations failed.
- Driver Version 1.11.3.0
Hardware Supported: Intel(R) Ethernet Controllers E810-CAM1, E810-CAM2,
E810-XXVAM2, E810-C-Q1, E810-C-Q1 for OCP 3.0, E810-L-Q2 for OCP 3.0
Supported ESXi releases: 7.0, 7.0U2, 8.0
New Features:
- SyncE support for E810 devices
New Hardware Supported:
- None
Bug Fixes:
- Fixed PTP Tx timestamp process on E822 and E823 devices.
- Fixed TSO packet transmission, which in some scenarios could cause a
transmit hang on the adapter.
- Loading an external DDP package or setting a QinQ configuration could cause
Windows VF malfunction. Fixed by improving the VF reset routine in the PF
driver.
- Driver Version 1.10.5.0
Hardware Supported: Intel(R) Ethernet Controllers E810-CAM1, E810-CAM2,
E810-XXVAM2, E810-C-Q1, E810-C-Q1 for OCP 3.0, E810-L-Q2 for OCP 3.0
Supported ESXi releases: 6.5, 6.7, 7.0, 7.0U2
New Features:
- Firmware Debug Dump
- Allowed No-FEC mode in link auto-negotiation
- Low Latency PTP timestamp
New Hardware Supported:
- None
Bug Fixes:
- Fixed restoring a connection after issuing a link-state-up operation while
the network cable is disconnected
- Fixed a PSOD when exceeding 32 PFs
- Driver Version 1.9.8.0
Hardware Supported: Intel(R) Ethernet Controllers E810-CAM1, E810-CAM2,
E810-XXVAM2, E810-C-Q1, E810-C-Q1 for OCP 3.0, E810-L-Q2 for OCP 3.0,
E822-C, E822-L, E823-C, E823-L
Supported ESXi releases: 7.0U2
New Features:
- The following features have been added for E822 and E823 devices:
- Support for ENS mode
- ENS mode Default Queue Receive Side Scaling (DRSS)
- Trusted VF support
- Support SR-IOV in ENS mode
- Support QinQ for VF
New Hardware Supported:
- Intel(R) Ethernet Connection 25G 4P E823-C LOM
Bug Fixes:
- None
- Driver Version 1.9.5.0
Hardware Supported: Intel(R) Ethernet Controllers E810-CAM1, E810-CAM2,
E810-XXVAM2, E810-C-Q1, E810-C-Q1 for OCP 3.0, E810-L-Q2 for OCP 3.0
Supported ESXi releases: 6.5, 6.7, 7.0, 7.0U2
New Features:
- Support for Selectable Scheduler Topology
- Support for runtime DDP package loading
- Support SR-IOV in ENS mode
- Support QinQ for VF
New Hardware Supported:
- Intel(R) Ethernet 100G 2P E810-C-stg Adapter
- Intel(R) Ethernet 100G 2P E810-C-st Adapter
Bug Fixes:
- Fixed a case where changing VF MTU under heavy traffic caused VF DOWN on VM
- Fixed an issue which caused lack of communication after VF reset configured
with Distributed Switch
- Fixed PSOD during link up event in ENS mode
- Added improvements for FEC configuration to disallow unsupported modes
- Fixed Transmission Selection Algorithm for DCB in CEE mode
- Driver Version 1.6.6.0
Hardware Supported: Intel(R) Ethernet Connection E822-C, E822-L, E823-C, E823-L
Supported ESXi releases: 7.0
New Features:
- Added support for E822 and E823 devices
New Hardware Supported:
- Intel(R) Ethernet Connection E823-C for backplane
- Intel(R) Ethernet Connection E823-C for QSFP
- Intel(R) Ethernet Connection E823-C for SFP
- Intel(R) Ethernet Connection E823-C/X557-AT 10GBASE-T
- Intel(R) Ethernet Connection E823-C 1GbE
- Intel(R) Ethernet Connection E823-L for backplane
- Intel(R) Ethernet Connection E823-L for SFP
- Intel(R) Ethernet Connection E823-L/X557-AT 10GBASE-T
- Intel(R) Ethernet Connection E823-L 1GbE
- Intel(R) Ethernet Connection E823-L for QSFP
- Intel(R) Ethernet Connection E822-C for backplane
- Intel(R) Ethernet Connection E822-C for QSFP
- Intel(R) Ethernet Connection E822-C for SFP
- Intel(R) Ethernet Connection E822-C/X557-AT 10GBASE-T
- Intel(R) Ethernet Connection E822-C 1GbE
- Intel(R) Ethernet Connection E822-L for backplane
- Intel(R) Ethernet Connection E822-L for SFP
- Intel(R) Ethernet Connection E822-L/X557-AT 10GBASE-T
- Intel(R) Ethernet Connection E822-L 1GbE
Bug Fixes:
- None
- Driver Version 1.8.5.0
Hardware Supported: Intel(R) Ethernet Controllers E810-CAM1, E810-CAM2,
E810-XXVAM2, E810-C-Q1, E810-C-Q1 for OCP 3.0, E810-L-Q2 for OCP 3.0
Supported ESXi releases: 6.5, 6.7, 7.0, 7.0U2
New Features:
- Trusted VF support
- GPIO pins support for VF drivers
- 1PPS support for VF drivers
New Hardware Supported:
- Intel(R) Ethernet Network Adapter E810-XXV-4
- Intel(R) Ethernet Network Adapter E810-C-Q2T
- Intel(R) Ethernet Network Adapter E810-XXV-4T
- Intel(R) Ethernet Network Adapter E810-XXV-2 for OCP 3.0
- Intel(R) Ethernet Network Adapter E810-XXV-4 for OCP 3.0
Bug Fixes:
- Fixed an issue in which, under heavy traffic, a PF reset might not
reconfigure Geneve filters causing Geneve traffic to be confined to a single queue.
- Fixed an issue where Virtual Functions (VFs) attached to a Windows-based
Virtual Machine (VM) could fail to receive some multicast packets after a
Physical Function (PF) reset.
- Fixed an issue with some VMDq queues being incorrectly configured during ENS
mode driver initialization on ESXi 6.7U3. This resulted in traffic loss in the VM
associated with the NetQueue.
- Fixed link going down after configuring over 256 VFs on a single PF when DCB
was configured.
- Driver Version 1.7.5.0
Hardware Supported: Intel(R) Ethernet Controllers E810-CAM1, E810-CAM2,
E810-XXVAM2, E810-C-Q1, E810-C-Q1 for OCP 3.0, E810-L-Q2 for OCP 3.0
Supported ESXi releases: 6.5, 6.7, 7.0, 7.0U2
New Features:
- ENS mode Default Queue Receive Side Scaling (DRSS)
New Hardware Supported:
- None
Bug Fixes:
- Installing the latest firmware now properly shows the current firmware
version after upgrading.
- Fixed DCB status reporting an incorrect IEEE PFC status (PFC Enabled: False)
even after enabling PFC.
- Driver Version 1.6.5.0
Hardware Supported: Intel(R) Ethernet Controllers E810-CAM1, E810-CAM2,
E810-XXVAM2, E810-C-Q1, E810-C-Q1 for OCP 3.0, E810-L-Q2 for OCP 3.0
Supported ESXi releases: 6.5, 6.7, 7.0, 7.0U2
New Features:
- Precision Time Protocol (PTP)
Bug Fixes:
- Setting VLAN Guest Tagging (VGT) after booting with Virtual Switch Tagging
(VST) causes VGT to fail
- Configuring a VF with asymmetric queues fails
- In some situations the ESX driver might report "Failed to get link default
override, Error: ICE_ERR_DOES_NOT_EXIST"
when there is no issue.
- Enhanced Firmware diagnostics and logging
- Driver Version 1.5.5.0
Hardware Supported: Intel(R) Ethernet Controllers E810-CAM1, E810-CAM2,
E810-XXVAM2, E810-C-Q1, E810-C-Q1 for OCP 3.0, E810-L-Q2 for OCP 3.0
Supported ESXi releases: 6.5, 6.7, 7.0
New Features:
- ENS mode VxLAN Offload and RxFilter
- ENS mode Link Layer Discovery Protocol (LLDP)
- Native mode NetQueue Receive Side Scaling (RSS)
- Native mode Default Queue Receive Side Scaling (DRSS)
Bug Fixes:
- Race condition in DCB IEEE that caused low performance and would randomly
cause a system crash
- TX traffic issue and NULL pointer warnings when LLDP was enabled
- Driver Version 1.4.2.0
Hardware Supported: Intel(R) Ethernet Controllers E810-CAM1, E810-CAM2,
E810-XXVAM2
Supported ESXi releases: 6.5, 6.7, 7.0
New Features:
- ENS Non-Queue Pair support
- ENS Interrupt Mode
- Data Center Bridging (DCB)
- Increase maximum queue pairs per VF to 16
- PF mailbox overflow detection for potentially malicious VF
Bug Fixes:
- Unable to configure VFs from GUI on ESXi 7.0 when using ESXi 6.7 icen driver
- Changing link settings in ENS mode may cause a system crash when rebooting or
changing back to Native driver.
- Network traffic might stop if users try to change the number of VMDQ queues
during runtime while SR-IOV is enabled on VMware ESX 7.0.
- Driver Version 1.3.3.0
Hardware Supported: Intel(R) Ethernet Controllers E810-CAM1, E810-CAM2,
E810-XXVAM2
Supported ESXi releases: 6.5, 6.7, 7.0
New Features:
- ENS Polling Mode
Bug Fixes:
- Changing the physical function MTU size, or resetting the physical function,
might cause performance degradation for GENEVE traffic.
- Optimized heap and interrupt vector usage.
- Fixed various driver resource clean-up issues when SR-IOV is enabled.
- Fixed a link event race condition when unloading the driver that results in
a kernel panic (also known as PSOD).
- Fixed a race condition that might result in a kernel panic (also known as
PSOD) when the driver updates the uplink shared data while acquiring a write
lock that has been released by the ESXi kernel.
- Fixed an issue where network traffic stopped working for SR-IOV VFs after an
adapter reset.
- Updated the product branding string.
- Fixed performance degradation issues when SR-IOV and VMDQ co-exist.
- Driver Version: 1.2.1.0
Hardware Supported: Intel(R) Ethernet Controllers E810-CAM1, E810-CAM2,
E810-XXVAM2
Supported ESXi releases: 6.5, 6.7, 7.0
New Features:
- ENS Polling mode support
Bug Fixes:
- When using Windows 2016 and 2019 guests, UDP packet loss can occur with
buffer values larger than 32K in SR-IOV mode.
- On VMware ESXi 6.7 with a Linux guest, a maximum of four of the total RSS
queues created can be utilized on the Linux VF.
- A VM with an E810 VF assigned to it may lose network connectivity if the
E810 physical link speed is changed to a speed not supported by the switch it
is connected to. Furthermore, the VM/VF is unable to receive the link change
status and is therefore unable to communicate with other hosts on the
network. The VM/VF can communicate with other VMs that have a VF assigned
from the same E810 physical function running on the same host.
- Setting a VLAN on the GENEVE Virtual Tunnel EndPoint (VTEP) causes traffic
to be limited to one queue.
- Altering an existing VLAN Guest Tag (VGT) on a portgroup that was set
previously might result in VF traffic
loss.
- Setting FEC to RS-FEC does not take effect. Requested mode stays Auto-FEC.
- The esxcli intnet LLDP command does not disable the FW LLDP agent
Known Issues and Workarounds:
- E810 Adapter might not achieve line rate using micro benchmarking tools
such as iperf, netperf, etc.
Workaround: None
- VMware ESXi 6.5U1 and U2 may lose connectivity of VMXNET3 adapters when
stressing traffic with various configuration changes (such as VLAN, MTU, or
link state) and reset events.
Workaround: Upgrade to ESXi 6.5U3.
- The ESXi host might not shut down when receiving heavy traffic.
Workaround: It is strongly recommended to stop all network traffic and place
the ESXi host in Maintenance mode before restarting or shutting down the
server.
- Changing the physical function MTU size, or resetting the physical function,
might cause performance degradation for GENEVE traffic.
Workaround: Reboot the host to recover from the GENEVE performance
degradation.
- VF MTU size can be changed from within the VM even if a host administrator
doesn't allow VM MTU changes.
Workaround: None