Discussion:
[dpdk-dev] [PATCH 00/19] A new net PMD - ice
Wenzhuo Lu
2018-11-23 06:56:00 UTC
This patch set adds support for a new net PMD for the
Intel® Ethernet Network Adapter E810 series, also
called ice.

Besides enabling this new NIC, the series also implements
a number of other features supported on this NIC,
as listed below.

Basic features:
1, Basic device operations: probe, initialization, start/stop, configure, info get.
2, RX/TX queue operations: setup/release, start/stop, info get.
3, RX/TX.

HW Offload features:
1, CRC stripping/insertion.
2, L2/L3 checksum stripping/insertion.
3, PVID set.
4, TPID change.
5, TSO (LRO/RSC not supported).

Stats:
1, Statistics (stats) & extended statistics (xstats).

Switch functions:
1, MAC Filter Add/Delete.
2, VLAN Filter Add/Delete.

Power saving:
1, RX interrupt mode.

Misc:
1, Interrupt for link status change.
2, Firmware info query.
3, Jumbo frame support.
4, Packet type (ptype) query.
5, EEPROM information query and set.

Wenzhuo Lu (19):
net/ice: add base code
net/ice: support device initialization
net/ice: support device and queue ops
net/ice: support getting device information
net/ice: support packet type getting
net/ice: support link update
net/ice: support MTU setting
net/ice: support MAC ops
net/ice: support VLAN ops
net/ice: support RSS
net/ice: support RX queue interruption
net/ice: support FW version getting
net/ice: support EEPROM information getting
net/ice: support statistics
net/ice: support queue information getting
net/ice: support basic RX/TX
net/ice: support advance RX/TX
net/ice: support descriptor ops
doc: add ICE description and update release note

MAINTAINERS | 7 +
config/common_base | 9 +
doc/guides/nics/features/ice.ini | 39 +
doc/guides/nics/ice.rst | 78 +
drivers/net/Makefile | 1 +
drivers/net/ice/Makefile | 76 +
drivers/net/ice/base/README | 22 +
drivers/net/ice/base/ice_acl.c | 4 +
drivers/net/ice/base/ice_acl.h | 7 +
drivers/net/ice/base/ice_acl_ctrl.c | 4 +
drivers/net/ice/base/ice_adminq_cmd.h | 1724 ++++++
drivers/net/ice/base/ice_alloc.h | 22 +
drivers/net/ice/base/ice_bitops.h | 233 +
drivers/net/ice/base/ice_common.c | 3332 ++++++++++
drivers/net/ice/base/ice_common.h | 159 +
drivers/net/ice/base/ice_controlq.c | 1098 ++++
drivers/net/ice/base/ice_controlq.h | 97 +
drivers/net/ice/base/ice_devids.h | 17 +
drivers/net/ice/base/ice_flex_pipe.c | 5 +
drivers/net/ice/base/ice_flex_pipe.h | 83 +
drivers/net/ice/base/ice_flex_type.h | 19 +
drivers/net/ice/base/ice_flow.c | 4 +
drivers/net/ice/base/ice_flow.h | 8 +
drivers/net/ice/base/ice_hw_autogen.h | 9815 ++++++++++++++++++++++++++++++
drivers/net/ice/base/ice_impl_guide.c | 167 +
drivers/net/ice/base/ice_lan_tx_rx.h | 2290 +++++++
drivers/net/ice/base/ice_nvm.c | 388 ++
drivers/net/ice/base/ice_osdep.h | 491 ++
drivers/net/ice/base/ice_protocol_type.h | 237 +
drivers/net/ice/base/ice_sbq_cmd.h | 93 +
drivers/net/ice/base/ice_sched.c | 1715 ++++++
drivers/net/ice/base/ice_sched.h | 68 +
drivers/net/ice/base/ice_sriov.c | 129 +
drivers/net/ice/base/ice_sriov.h | 35 +
drivers/net/ice/base/ice_status.h | 45 +
drivers/net/ice/base/ice_switch.c | 2415 ++++++++
drivers/net/ice/base/ice_switch.h | 320 +
drivers/net/ice/base/ice_type.h | 789 +++
drivers/net/ice/base/virtchnl.h | 787 +++
drivers/net/ice/ice_ethdev.c | 3302 ++++++++++
drivers/net/ice/ice_ethdev.h | 318 +
drivers/net/ice/ice_lan_rxtx.c | 2922 +++++++++
drivers/net/ice/ice_logs.h | 45 +
drivers/net/ice/ice_rxtx.h | 155 +
drivers/net/ice/rte_pmd_ice_version.map | 4 +
mk/rte.app.mk | 1 +
46 files changed, 33579 insertions(+)
create mode 100644 doc/guides/nics/features/ice.ini
create mode 100644 doc/guides/nics/ice.rst
create mode 100644 drivers/net/ice/Makefile
create mode 100644 drivers/net/ice/base/README
create mode 100644 drivers/net/ice/base/ice_acl.c
create mode 100644 drivers/net/ice/base/ice_acl.h
create mode 100644 drivers/net/ice/base/ice_acl_ctrl.c
create mode 100644 drivers/net/ice/base/ice_adminq_cmd.h
create mode 100644 drivers/net/ice/base/ice_alloc.h
create mode 100644 drivers/net/ice/base/ice_bitops.h
create mode 100644 drivers/net/ice/base/ice_common.c
create mode 100644 drivers/net/ice/base/ice_common.h
create mode 100644 drivers/net/ice/base/ice_controlq.c
create mode 100644 drivers/net/ice/base/ice_controlq.h
create mode 100644 drivers/net/ice/base/ice_devids.h
create mode 100644 drivers/net/ice/base/ice_flex_pipe.c
create mode 100644 drivers/net/ice/base/ice_flex_pipe.h
create mode 100644 drivers/net/ice/base/ice_flex_type.h
create mode 100644 drivers/net/ice/base/ice_flow.c
create mode 100644 drivers/net/ice/base/ice_flow.h
create mode 100644 drivers/net/ice/base/ice_hw_autogen.h
create mode 100644 drivers/net/ice/base/ice_impl_guide.c
create mode 100644 drivers/net/ice/base/ice_lan_tx_rx.h
create mode 100644 drivers/net/ice/base/ice_nvm.c
create mode 100644 drivers/net/ice/base/ice_osdep.h
create mode 100644 drivers/net/ice/base/ice_protocol_type.h
create mode 100644 drivers/net/ice/base/ice_sbq_cmd.h
create mode 100644 drivers/net/ice/base/ice_sched.c
create mode 100644 drivers/net/ice/base/ice_sched.h
create mode 100644 drivers/net/ice/base/ice_sriov.c
create mode 100644 drivers/net/ice/base/ice_sriov.h
create mode 100644 drivers/net/ice/base/ice_status.h
create mode 100644 drivers/net/ice/base/ice_switch.c
create mode 100644 drivers/net/ice/base/ice_switch.h
create mode 100644 drivers/net/ice/base/ice_type.h
create mode 100644 drivers/net/ice/base/virtchnl.h
create mode 100644 drivers/net/ice/ice_ethdev.c
create mode 100644 drivers/net/ice/ice_ethdev.h
create mode 100644 drivers/net/ice/ice_lan_rxtx.c
create mode 100644 drivers/net/ice/ice_logs.h
create mode 100644 drivers/net/ice/ice_rxtx.h
create mode 100644 drivers/net/ice/rte_pmd_ice_version.map
--
1.9.3
Wenzhuo Lu
2018-11-23 06:56:02 UTC
[dpdk-dev] [PATCH 02/19] net/ice: support device initialization
Signed-off-by: Wenzhuo Lu <***@intel.com>
Signed-off-by: Qiming Yang <***@intel.com>
Signed-off-by: Xiaoyun Li <***@intel.com>
Signed-off-by: Jingjing Wu <***@intel.com>
---
config/common_base | 9 +
drivers/net/Makefile | 1 +
drivers/net/ice/Makefile | 75 ++++
drivers/net/ice/ice_ethdev.c | 645 ++++++++++++++++++++++++++++++++
drivers/net/ice/ice_ethdev.h | 318 ++++++++++++++++
drivers/net/ice/ice_logs.h | 45 +++
drivers/net/ice/ice_rxtx.h | 117 ++++++
drivers/net/ice/rte_pmd_ice_version.map | 4 +
mk/rte.app.mk | 1 +
9 files changed, 1215 insertions(+)
create mode 100644 drivers/net/ice/Makefile
create mode 100644 drivers/net/ice/ice_ethdev.c
create mode 100644 drivers/net/ice/ice_ethdev.h
create mode 100644 drivers/net/ice/ice_logs.h
create mode 100644 drivers/net/ice/ice_rxtx.h
create mode 100644 drivers/net/ice/rte_pmd_ice_version.map

diff --git a/config/common_base b/config/common_base
index d12ae98..a342760 100644
--- a/config/common_base
+++ b/config/common_base
@@ -297,6 +297,15 @@ CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y
CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y

#
+# Compile burst-oriented ICE PMD driver
+#
+CONFIG_RTE_LIBRTE_ICE_PMD=y
+CONFIG_RTE_LIBRTE_ICE_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_ICE_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_ICE_DEBUG_TX_FREE=n
+CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC=y
+CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC=n
+
# Compile burst-oriented AVF PMD driver
#
CONFIG_RTE_LIBRTE_AVF_PMD=y
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index c0386fe..670d7f7 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -30,6 +30,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += enic
DIRS-$(CONFIG_RTE_LIBRTE_PMD_FAILSAFE) += failsafe
DIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k
DIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e
+DIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice
DIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe
DIRS-$(CONFIG_RTE_LIBRTE_LIO_PMD) += liquidio
DIRS-$(CONFIG_RTE_LIBRTE_MLX4_PMD) += mlx4
diff --git a/drivers/net/ice/Makefile b/drivers/net/ice/Makefile
new file mode 100644
index 0000000..00e1dda
--- /dev/null
+++ b/drivers/net/ice/Makefile
@@ -0,0 +1,75 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_ice.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+EXPORT_MAP := rte_pmd_ice_version.map
+
+LIBABIVER := 1
+
+#
+# Add extra flags for base driver files (also known as shared code)
+# to disable warnings
+#
+ifeq ($(CONFIG_RTE_TOOLCHAIN_ICC),y)
+CFLAGS_BASE_DRIVER = -wd593 -wd188
+else ifeq ($(CONFIG_RTE_TOOLCHAIN_CLANG),y)
+CFLAGS_BASE_DRIVER += -Wno-sign-compare
+CFLAGS_BASE_DRIVER += -Wno-unused-value
+CFLAGS_BASE_DRIVER += -Wno-unused-parameter
+CFLAGS_BASE_DRIVER += -Wno-strict-aliasing
+CFLAGS_BASE_DRIVER += -Wno-format
+CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers
+CFLAGS_BASE_DRIVER += -Wno-pointer-to-int-cast
+CFLAGS_BASE_DRIVER += -Wno-format-nonliteral
+CFLAGS_BASE_DRIVER += -Wno-unused-variable
+else
+CFLAGS_BASE_DRIVER = -Wno-sign-compare
+CFLAGS_BASE_DRIVER += -Wno-unused-value
+CFLAGS_BASE_DRIVER += -Wno-unused-parameter
+CFLAGS_BASE_DRIVER += -Wno-strict-aliasing
+CFLAGS_BASE_DRIVER += -Wno-format
+CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers
+CFLAGS_BASE_DRIVER += -Wno-pointer-to-int-cast
+CFLAGS_BASE_DRIVER += -Wno-format-nonliteral
+CFLAGS_BASE_DRIVER += -Wno-format-security
+CFLAGS_BASE_DRIVER += -Wno-unused-variable
+
+ifeq ($(shell test $(GCC_VERSION) -ge 44 && echo 1), 1)
+CFLAGS_BASE_DRIVER += -Wno-unused-but-set-variable
+endif
+
+endif
+OBJS_BASE_DRIVER=$(patsubst %.c,%.o,$(notdir $(wildcard $(SRCDIR)/base/*.c)))
+$(foreach obj, $(OBJS_BASE_DRIVER), $(eval CFLAGS_$(obj)+=$(CFLAGS_BASE_DRIVER)))
+
+VPATH += $(SRCDIR)/base
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_controlq.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_common.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_sched.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_flow.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_switch.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_nvm.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_flex_pipe.c
+
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_ethdev.c
+
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_eal lib/librte_ether
+DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_mempool lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_net
+DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_kvargs
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
new file mode 100644
index 0000000..7be77cf
--- /dev/null
+++ b/drivers/net/ice/ice_ethdev.c
@@ -0,0 +1,645 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ethdev_pci.h>
+
+#include "base/ice_sched.h"
+#include "ice_ethdev.h"
+#include "ice_rxtx.h"
+
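+/* Devarg key used to limit the number of queue pairs. It is supplied on
+ * the EAL command line, e.g. (illustrative):
+ * -w <PCI address>,max_queue_pair_num=8
+ */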
+#define ICE_MAX_QP_NUM "max_queue_pair_num"
+#define ICE_DFLT_OUTER_TAG_TYPE ICE_AQ_VSI_OUTER_TAG_VLAN_9100
+
+int ice_logtype_init;
+int ice_logtype_driver;
+
+static void ice_dev_close(struct rte_eth_dev *dev);
+
+static const struct rte_pci_id pci_id_ice_map[] = {
+ { RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
+ { RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_QSFP) },
+ { RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_SFP) },
+ { .vendor_id = 0, /* sentinel */ },
+};
+
+static const struct eth_dev_ops ice_eth_dev_ops = {
+ .dev_configure = NULL,
+};
+
+static void
+ice_init_controlq_parameter(struct ice_hw *hw)
+{
+ /* fields for adminq */
+ hw->adminq.num_rq_entries = ICE_ADMINQ_LEN;
+ hw->adminq.num_sq_entries = ICE_ADMINQ_LEN;
+ hw->adminq.rq_buf_size = ICE_ADMINQ_BUF_SZ;
+ hw->adminq.sq_buf_size = ICE_ADMINQ_BUF_SZ;
+
+ /* fields for mailboxq, DPDK used as PF host */
+ hw->mailboxq.num_rq_entries = ICE_MAILBOXQ_LEN;
+ hw->mailboxq.num_sq_entries = ICE_MAILBOXQ_LEN;
+ hw->mailboxq.rq_buf_size = ICE_MAILBOXQ_BUF_SZ;
+ hw->mailboxq.sq_buf_size = ICE_MAILBOXQ_BUF_SZ;
+}
+
+static int
+ice_check_qp_num(const char *key, const char *qp_value,
+ __rte_unused void *opaque)
+{
+ char *end = NULL;
+ int num = 0;
+
+ while (isblank(*qp_value))
+ qp_value++;
+
+ /* Clear errno so the check after strtoul() is meaningful */
+ errno = 0;
+ num = strtoul(qp_value, &end, 10);
+
+ if (!num || (*end == '-') || (errno != 0)) {
+ PMD_DRV_LOG(WARNING, "invalid value:\"%s\" for key:\"%s\", "
+ "value must be > 0",
+ qp_value, key);
+ return -1;
+ }
+
+ return num;
+}
+
+static int
+ice_config_max_queue_pair_num(struct rte_devargs *devargs)
+{
+ struct rte_kvargs *kvlist;
+ const char *queue_num_key = ICE_MAX_QP_NUM;
+ int ret;
+
+ if (!devargs)
+ return 0;
+
+ kvlist = rte_kvargs_parse(devargs->args, NULL);
+ if (!kvlist)
+ return 0;
+
+ if (!rte_kvargs_count(kvlist, queue_num_key)) {
+ rte_kvargs_free(kvlist);
+ return 0;
+ }
+
+ /* Process the key once; ice_check_qp_num() returns a negative
+ * value when the supplied value fails validation.
+ */
+ ret = rte_kvargs_process(kvlist, queue_num_key,
+ ice_check_qp_num, NULL);
+ if (ret < 0)
+ ret = 0;
+ rte_kvargs_free(kvlist);
+
+ return ret;
+}
+
+static int
+ice_res_pool_init(struct ice_res_pool_info *pool, uint32_t base,
+ uint32_t num)
+{
+ struct pool_entry *entry;
+
+ if (!pool || !num)
+ return -EINVAL;
+
+ entry = rte_zmalloc("ice", sizeof(*entry), 0);
+ if (!entry) {
+ PMD_INIT_LOG(ERR,
+ "Failed to allocate memory for resource pool");
+ return -ENOMEM;
+ }
+
+ /* queue heap initialize */
+ pool->num_free = num;
+ pool->num_alloc = 0;
+ pool->base = base;
+ LIST_INIT(&pool->alloc_list);
+ LIST_INIT(&pool->free_list);
+
+ /* Initialize element */
+ entry->base = 0;
+ entry->len = num;
+
+ LIST_INSERT_HEAD(&pool->free_list, entry, next);
+ return 0;
+}
+
+static int
+ice_res_pool_alloc(struct ice_res_pool_info *pool,
+ uint16_t num)
+{
+ struct pool_entry *entry, *valid_entry;
+
+ if (!pool || !num) {
+ PMD_INIT_LOG(ERR, "Invalid parameter");
+ return -EINVAL;
+ }
+
+ if (pool->num_free < num) {
+ PMD_INIT_LOG(ERR, "No resource. ask:%u, available:%u",
+ num, pool->num_free);
+ return -ENOMEM;
+ }
+
+ valid_entry = NULL;
+ /* Look up the free list and find the best-fit entry */
+ LIST_FOREACH(entry, &pool->free_list, next) {
+ if (entry->len >= num) {
+ /* Find best one */
+ if (entry->len == num) {
+ valid_entry = entry;
+ break;
+ }
+ if (!valid_entry ||
+ valid_entry->len > entry->len)
+ valid_entry = entry;
+ }
+ }
+
+ /* No entry found that satisfies the request, return */
+ if (!valid_entry) {
+ PMD_INIT_LOG(ERR, "No valid entry found");
+ return -ENOMEM;
+ }
+ /**
+ * The entry has exactly the requested number of queues;
+ * remove it from the free list.
+ */
+ if (valid_entry->len == num) {
+ LIST_REMOVE(valid_entry, next);
+ } else {
+ /**
+ * The entry has more queues than requested; create a new
+ * entry for the alloc list and shrink the base and length
+ * of the entry remaining in the free list.
+ */
+ entry = rte_zmalloc("res_pool", sizeof(*entry), 0);
+ if (!entry) {
+ PMD_INIT_LOG(ERR,
+ "Failed to allocate memory for "
+ "resource pool");
+ return -ENOMEM;
+ }
+ entry->base = valid_entry->base;
+ entry->len = num;
+ valid_entry->base += num;
+ valid_entry->len -= num;
+ valid_entry = entry;
+ }
+
+ /* Insert it into alloc list, not sorted */
+ LIST_INSERT_HEAD(&pool->alloc_list, valid_entry, next);
+
+ pool->num_free -= valid_entry->len;
+ pool->num_alloc += valid_entry->len;
+
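+ /* Return the absolute start index of the allocated range */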
+ return valid_entry->base + pool->base;
+}
+
+static void
+ice_res_pool_destroy(struct ice_res_pool_info *pool)
+{
+ struct pool_entry *entry, *next_entry;
+
+ if (!pool)
+ return;
+
+ for (entry = LIST_FIRST(&pool->alloc_list);
+ entry && (next_entry = LIST_NEXT(entry, next), 1);
+ entry = next_entry) {
+ LIST_REMOVE(entry, next);
+ rte_free(entry);
+ }
+
+ for (entry = LIST_FIRST(&pool->free_list);
+ entry && (next_entry = LIST_NEXT(entry, next), 1);
+ entry = next_entry) {
+ LIST_REMOVE(entry, next);
+ rte_free(entry);
+ }
+
+ pool->num_free = 0;
+ pool->num_alloc = 0;
+ pool->base = 0;
+ LIST_INIT(&pool->alloc_list);
+ LIST_INIT(&pool->free_list);
+}
+
+static void
+ice_vsi_config_default_rss(struct ice_aqc_vsi_props *info)
+{
+ /* Set VSI LUT selection */
+ info->q_opt_rss = ICE_AQ_VSI_Q_OPT_RSS_LUT_VSI &
+ ICE_AQ_VSI_Q_OPT_RSS_LUT_M;
+ /* Set Hash scheme */
+ info->q_opt_rss |= ICE_AQ_VSI_Q_OPT_RSS_TPLZ &
+ ICE_AQ_VSI_Q_OPT_RSS_HASH_M;
+ /* enable TC */
+ info->q_opt_tc = ICE_AQ_VSI_Q_OPT_TC_OVR_M;
+}
+
+static enum ice_status
+ice_vsi_config_tc_queue_mapping(struct ice_vsi *vsi,
+ struct ice_aqc_vsi_props *info,
+ uint8_t enabled_tcmap)
+{
+ uint16_t bsf, qp_idx;
+
+ /* Default to TC0 for now; multi-TC support needs to be added later.
+ * Configure the TC and queue mapping parameters, and for each
+ * enabled TC allocate qpnum_per_tc queues to its traffic.
+ */
+ if (enabled_tcmap != 0x01) {
+ PMD_INIT_LOG(ERR, "only TC0 is supported");
+ return -ENOTSUP;
+ }
+
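+ /* The TC queue mapping below encodes the queue count as a power of
+ * two (the bsf exponent), so adjust nb_qps to the value that is
+ * actually programmed.
+ */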
+ vsi->nb_qps = RTE_MIN(vsi->nb_qps, ICE_MAX_Q_PER_TC);
+ bsf = rte_bsf32(vsi->nb_qps);
+ /* Adjust the queue number to actual queues that can be applied */
+ vsi->nb_qps = 0x1 << bsf;
+
+ qp_idx = 0;
+ /* Set tc and queue mapping with VSI */
+ info->tc_mapping[0] = rte_cpu_to_le_16((qp_idx <<
+ ICE_AQ_VSI_TC_Q_OFFSET_S) |
+ (bsf << ICE_AQ_VSI_TC_Q_NUM_S));
+
+ /* Associate queue number with VSI */
+ info->mapping_flags |= rte_cpu_to_le_16(ICE_AQ_VSI_Q_MAP_CONTIG);
+ info->q_mapping[0] = rte_cpu_to_le_16(vsi->base_queue);
+ info->q_mapping[1] = rte_cpu_to_le_16(vsi->nb_qps);
+ info->valid_sections |=
+ rte_cpu_to_le_16(ICE_AQ_VSI_PROP_RXQ_MAP_VALID);
+ /* Set the info.ingress_table and info.egress_table
+ * for UP translate table. Now just set it to 1:1 map by default
+ * -- 0b 111 110 101 100 011 010 001 000 == 0xFAC688
+ */
+ info->ingress_table = rte_cpu_to_le_32(0x00FAC688);
+ info->egress_table = rte_cpu_to_le_32(0x00FAC688);
+ info->outer_up_table = rte_cpu_to_le_32(0x00FAC688);
+ return 0;
+}
+
+static int
+ice_init_mac_address(struct rte_eth_dev *dev)
+{
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+ if (!is_unicast_ether_addr(
+ (struct ether_addr *)hw->port_info[0].mac.lan_addr)) {
+ PMD_INIT_LOG(ERR, "Invalid MAC address");
+ return -EINVAL;
+ }
+
+ ether_addr_copy((struct ether_addr *)hw->port_info[0].mac.lan_addr,
+ (struct ether_addr *)hw->port_info[0].mac.perm_addr);
+
+ dev->data->mac_addrs = rte_zmalloc("ice", sizeof(struct ether_addr), 0);
+ if (!dev->data->mac_addrs) {
+ PMD_INIT_LOG(ERR,
+ "Failed to allocate memory to store mac address");
+ return -ENOMEM;
+ }
+ /* store it to dev data */
+ ether_addr_copy((struct ether_addr *)hw->port_info[0].mac.perm_addr,
+ &dev->data->mac_addrs[0]);
+ return 0;
+}
+
+/* Initialize SW parameters of PF */
+static int
+ice_pf_sw_init(struct rte_eth_dev *dev)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_hw *hw = ICE_PF_TO_HW(pf);
+ int max_qp;
+
+ /* Call the devargs parser only once and reuse the result */
+ max_qp = ice_config_max_queue_pair_num(dev->device->devargs);
+ if (max_qp > 0)
+ pf->lan_nb_qp_max = (uint16_t)max_qp;
+ else
+ pf->lan_nb_qp_max =
+ (uint16_t)RTE_MIN(hw->func_caps.common_cap.num_txq,
+ hw->func_caps.common_cap.num_rxq);
+
+ pf->lan_nb_qps = pf->lan_nb_qp_max;
+
+ return 0;
+}
+
+static struct ice_vsi *
+ice_setup_vsi(struct ice_pf *pf, enum ice_vsi_type type)
+{
+ struct ice_hw *hw = ICE_PF_TO_HW(pf);
+ struct ice_vsi *vsi = NULL;
+ struct ice_vsi_ctx vsi_ctx;
+ int ret;
+ uint16_t max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 };
+ uint8_t tc_bitmap = 0x1;
+
+ /* hw->num_lports = 1 in NIC mode */
+ vsi = rte_zmalloc("ice_vsi", sizeof(struct ice_vsi), 0);
+ if (!vsi)
+ return NULL;
+
+ vsi->idx = pf->next_vsi_idx;
+ pf->next_vsi_idx++;
+ vsi->type = type;
+ vsi->adapter = ICE_PF_TO_ADAPTER(pf);
+ vsi->max_macaddrs = ICE_NUM_MACADDR_MAX;
+ vsi->vlan_anti_spoof_on = 0;
+ vsi->vlan_filter_on = 1;
+ TAILQ_INIT(&vsi->mac_list);
+ TAILQ_INIT(&vsi->vlan_list);
+
+ memset(&vsi_ctx, 0, sizeof(vsi_ctx));
+ /* base_queue is used in the queue mapping of the VSI add/update
+ * command. Set vsi->base_queue to 0 for now; SRIOV and VMDQ cases
+ * are not considered at this first stage. Only the main VSI is used.
+ */
+ vsi->base_queue = 0;
+ switch (type) {
+ case ICE_VSI_PF:
+ vsi->nb_qps = pf->lan_nb_qps;
+ ice_vsi_config_default_rss(&vsi_ctx.info);
+ vsi_ctx.alloc_from_pool = true;
+ vsi_ctx.flags = ICE_AQ_VSI_TYPE_PF;
+ /* switch_id is queried by get_switch_config aq, which is done
+ * by ice_init_hw
+ */
+ vsi_ctx.info.sw_id = hw->port_info->sw_id;
+ vsi_ctx.info.sw_flags2 = ICE_AQ_VSI_SW_FLAG_LAN_ENA;
+ /* Allow all untagged or tagged packets */
+ vsi_ctx.info.vlan_flags = ICE_AQ_VSI_VLAN_MODE_ALL;
+ vsi_ctx.info.vlan_flags |= ICE_AQ_VSI_VLAN_EMOD_NOTHING;
+ vsi_ctx.info.q_opt_rss = ICE_AQ_VSI_Q_OPT_RSS_LUT_PF |
+ ICE_AQ_VSI_Q_OPT_RSS_TPLZ;
+ /* Enable VLAN/UP trip */
+ ret = ice_vsi_config_tc_queue_mapping(vsi,
+ &vsi_ctx.info,
+ ICE_DEFAULT_TCMAP);
+ if (ret) {
+ PMD_INIT_LOG(ERR,
+ "tc queue mapping with vsi failed, "
+ "err = %d",
+ ret);
+ goto fail_mem;
+ }
+
+ break;
+ default:
+ /* for other types of VSI */
+ PMD_INIT_LOG(ERR, "other types of VSI not supported");
+ goto fail_mem;
+ }
+
+ /* VF has MSIX interrupt in VF range, don't allocate here */
+ if (type == ICE_VSI_PF) {
+ ret = ice_res_pool_alloc(&pf->msix_pool,
+ RTE_MIN(vsi->nb_qps,
+ RTE_MAX_RXTX_INTR_VEC_ID));
+ if (ret < 0) {
+ PMD_INIT_LOG(ERR, "VSI MAIN %d get heap failed %d",
+ vsi->vsi_id, ret);
+ }
+ vsi->msix_intr = ret;
+ vsi->nb_msix = RTE_MIN(vsi->nb_qps, RTE_MAX_RXTX_INTR_VEC_ID);
+ } else {
+ vsi->msix_intr = 0;
+ vsi->nb_msix = 0;
+ }
+ ret = ice_add_vsi(hw, vsi->idx, &vsi_ctx, NULL);
+ if (ret != ICE_SUCCESS) {
+ PMD_INIT_LOG(ERR, "add vsi failed, err = %d", ret);
+ goto fail_mem;
+ }
+ /* store the VSI information in the SW structure */
+ vsi->vsi_id = vsi_ctx.vsi_num;
+ vsi->info = vsi_ctx.info;
+ pf->vsis_allocated = vsi_ctx.vsis_allocd;
+ pf->vsis_unallocated = vsi_ctx.vsis_unallocated;
+
+ /* Only TC0 is used at the beginning. What is needed here is the
+ * maximum number of TX queues; currently vsi->nb_qps holds that
+ * value. Correct this if it ever changes.
+ */
+ max_txqs[0] = vsi->nb_qps;
+ ret = ice_cfg_vsi_lan(hw->port_info, vsi->idx,
+ tc_bitmap, max_txqs);
+ if (ret != ICE_SUCCESS)
+ PMD_INIT_LOG(ERR, "Failed to config vsi sched");
+
+ return vsi;
+fail_mem:
+ rte_free(vsi);
+ pf->next_vsi_idx--;
+ return NULL;
+}
+
+static int
+ice_pf_setup(struct ice_pf *pf)
+{
+ struct ice_vsi *vsi;
+
+ /* Clear all stats counters */
+ pf->offset_loaded = FALSE;
+ memset(&pf->stats, 0, sizeof(struct ice_hw_port_stats));
+ memset(&pf->stats_offset, 0, sizeof(struct ice_hw_port_stats));
+ memset(&pf->internal_stats, 0, sizeof(struct ice_eth_stats));
+ memset(&pf->internal_stats_offset, 0, sizeof(struct ice_eth_stats));
+
+ vsi = ice_setup_vsi(pf, ICE_VSI_PF);
+ if (!vsi) {
+ PMD_INIT_LOG(ERR, "Failed to add vsi for PF");
+ return -EINVAL;
+ }
+
+ pf->main_vsi = vsi;
+
+ return 0;
+}
+
+static int
+ice_dev_init(struct rte_eth_dev *dev)
+{
+ struct rte_pci_device *pci_dev;
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ int ret;
+
+ dev->dev_ops = &ice_eth_dev_ops;
+
+ pci_dev = RTE_DEV_TO_PCI(dev->device);
+
+ rte_eth_copy_pci_info(dev, pci_dev);
+ pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ pf->adapter->eth_dev = dev;
+ pf->dev_data = dev->data;
+ hw->back = pf->adapter;
+ hw->hw_addr = (uint8_t *)pci_dev->mem_resource[0].addr;
+ hw->vendor_id = pci_dev->id.vendor_id;
+ hw->device_id = pci_dev->id.device_id;
+ hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+ hw->subsystem_device_id = pci_dev->id.subsystem_device_id;
+ hw->bus.device = pci_dev->addr.devid;
+ hw->bus.func = pci_dev->addr.function;
+
+ ice_init_controlq_parameter(hw);
+
+ ret = ice_init_hw(hw);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to initialize HW");
+ return -EINVAL;
+ }
+
+ PMD_INIT_LOG(INFO, "FW %d.%d.%05d API %d.%d",
+ hw->fw_maj_ver, hw->fw_min_ver, hw->fw_build,
+ hw->api_maj_ver, hw->api_min_ver);
+
+ ice_pf_sw_init(dev);
+ ret = ice_init_mac_address(dev);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to initialize mac address");
+ goto err_init_mac;
+ }
+
+ ret = ice_res_pool_init(&pf->msix_pool, 1,
+ hw->func_caps.common_cap.num_msix_vectors - 1);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to init MSIX pool");
+ goto err_msix_pool_init;
+ }
+
+ ret = ice_pf_setup(pf);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to setup PF");
+ goto err_pf_setup;
+ }
+
+ return 0;
+
+err_pf_setup:
+ ice_res_pool_destroy(&pf->msix_pool);
+err_msix_pool_init:
+ rte_free(dev->data->mac_addrs);
+err_init_mac:
+ ice_sched_cleanup_all(hw);
+ rte_free(hw->port_info);
+ ice_shutdown_all_ctrlq(hw);
+
+ return ret;
+}
+
+static int
+ice_release_vsi(struct ice_vsi *vsi)
+{
+ struct ice_hw *hw;
+ struct ice_vsi_ctx vsi_ctx;
+ enum ice_status ret;
+
+ if (!vsi)
+ return 0;
+
+ hw = ICE_VSI_TO_HW(vsi);
+
+ memset(&vsi_ctx, 0, sizeof(vsi_ctx));
+
+ vsi_ctx.vsi_num = vsi->vsi_id;
+ vsi_ctx.info = vsi->info;
+ ret = ice_free_vsi(hw, vsi->idx, &vsi_ctx, false, NULL);
+ if (ret != ICE_SUCCESS) {
+ PMD_INIT_LOG(ERR, "Failed to free vsi by aq, %u", vsi->vsi_id);
+ rte_free(vsi);
+ return -1;
+ }
+
+ rte_free(vsi);
+ return 0;
+}
+
+static int
+ice_dev_uninit(struct rte_eth_dev *dev)
+{
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return 0;
+
+ ice_dev_close(dev);
+
+ dev->dev_ops = NULL;
+ dev->rx_pkt_burst = NULL;
+ dev->tx_pkt_burst = NULL;
+
+ rte_free(dev->data->mac_addrs);
+ dev->data->mac_addrs = NULL;
+
+ ice_release_vsi(pf->main_vsi);
+ ice_sched_cleanup_all(hw);
+ rte_free(hw->port_info);
+ ice_shutdown_all_ctrlq(hw);
+
+ return 0;
+}
+
+static int
+ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+ struct rte_pci_device *pci_dev)
+{
+ return rte_eth_dev_pci_generic_probe(pci_dev,
+ sizeof(struct ice_adapter),
+ ice_dev_init);
+}
+
+static int
+ice_pci_remove(struct rte_pci_device *pci_dev)
+{
+ return rte_eth_dev_pci_generic_remove(pci_dev, ice_dev_uninit);
+}
+
+static struct rte_pci_driver rte_ice_pmd = {
+ .id_table = pci_id_ice_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+ .probe = ice_pci_probe,
+ .remove = ice_pci_remove,
+};
+
+/**
+ * Driver initialization routine.
+ * Invoked once at EAL init time.
+ * Register itself as the [Poll Mode] Driver of PCI devices.
+ */
+RTE_PMD_REGISTER_PCI(net_ice, rte_ice_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_ice, pci_id_ice_map);
+
+RTE_INIT(ice_init_log)
+{
+ ice_logtype_init = rte_log_register("pmd.ice.init");
+ if (ice_logtype_init >= 0)
+ rte_log_set_level(ice_logtype_init, RTE_LOG_NOTICE);
+ ice_logtype_driver = rte_log_register("pmd.ice.driver");
+ if (ice_logtype_driver >= 0)
+ rte_log_set_level(ice_logtype_driver, RTE_LOG_NOTICE);
+}
+
+static void
+ice_dev_close(struct rte_eth_dev *dev)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return;
+
+ ice_res_pool_destroy(&pf->msix_pool);
+ ice_release_vsi(pf->main_vsi);
+
+ ice_shutdown_all_ctrlq(hw);
+}
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
new file mode 100644
index 0000000..7928684
--- /dev/null
+++ b/drivers/net/ice/ice_ethdev.h
@@ -0,0 +1,318 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _ICE_ETHDEV_H_
+#define _ICE_ETHDEV_H_
+
+#include <rte_kvargs.h>
+
+#include "base/ice_common.h"
+#include "base/ice_adminq_cmd.h"
+
+#define ICE_VLAN_TAG_SIZE 4
+
+#define ICE_ADMINQ_LEN 32
+#define ICE_SBIOQ_LEN 32
+#define ICE_MAILBOXQ_LEN 32
+#define ICE_ADMINQ_BUF_SZ 4096
+#define ICE_SBIOQ_BUF_SZ 4096
+#define ICE_MAILBOXQ_BUF_SZ 4096
+/* Number of queues per TC should be one of 1, 2, 4, 8, 16, 32, 64 */
+#define ICE_MAX_Q_PER_TC 64
+#define ICE_NUM_DESC_DEFAULT 512
+#define ICE_BUF_SIZE_MIN 1024
+#define ICE_FRAME_SIZE_MAX 9728
+#define ICE_QUEUE_BASE_ADDR_UNIT 128
+/* number of VSIs and queue default setting */
+#define ICE_MAX_QP_NUM_PER_VF 16
+#define ICE_DEFAULT_QP_NUM_FDIR 1
+#define ICE_UINT32_BIT_SIZE (CHAR_BIT * sizeof(uint32_t))
+#define ICE_VFTA_SIZE (4096 / ICE_UINT32_BIT_SIZE)
+/* Maximum number of MAC addresses */
+#define ICE_NUM_MACADDR_MAX 64
+/* Maximum number of VFs */
+#define ICE_MAX_VF 128
+#define ICE_MAX_INTR_QUEUE_NUM 256
+
+#define ICE_MISC_VEC_ID RTE_INTR_VEC_ZERO_OFFSET
+#define ICE_RX_VEC_ID RTE_INTR_VEC_RXTX_OFFSET
+
+#define ICE_MAX_PKT_TYPE 1024
+
+/**
+ * vlan_id is a 12 bit number.
+ * The VFTA array is actually a 4096 bit array, 128 of 32 bit elements.
+ * 2^5 = 32. The value of the lower 5 bits specifies the bit in the
+ * 32 bit element. The value of the higher 7 bits specifies the index
+ * of the 32 bit element in the VFTA array.
+ */
+#define ICE_VFTA_BIT(vlan_id) (1 << ((vlan_id) & 0x1F))
+#define ICE_VFTA_IDX(vlan_id) ((vlan_id) >> 5)
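+/* e.g. vlan_id 100: ICE_VFTA_IDX(100) = 100 >> 5 = 3,
+ * ICE_VFTA_BIT(100) = 1 << (100 & 0x1F) = 1 << 4
+ */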
+
+/* Default TC traffic in case DCB is not enabled */
+#define ICE_DEFAULT_TCMAP 0x1
+#define ICE_FDIR_QUEUE_ID 0
+
+/* Always assign pool 0 to main VSI, VMDQ will start from 1 */
+#define ICE_VMDQ_POOL_BASE 1
+
+#define ICE_DEFAULT_RX_FREE_THRESH 32
+#define ICE_DEFAULT_RX_PTHRESH 8
+#define ICE_DEFAULT_RX_HTHRESH 8
+#define ICE_DEFAULT_RX_WTHRESH 0
+
+#define ICE_DEFAULT_TX_FREE_THRESH 32
+#define ICE_DEFAULT_TX_PTHRESH 32
+#define ICE_DEFAULT_TX_HTHRESH 0
+#define ICE_DEFAULT_TX_WTHRESH 0
+#define ICE_DEFAULT_TX_RSBIT_THRESH 32
+
+/* Bit shift and mask */
+#define ICE_4_BIT_WIDTH (CHAR_BIT / 2)
+#define ICE_4_BIT_MASK RTE_LEN2MASK(ICE_4_BIT_WIDTH, uint8_t)
+#define ICE_8_BIT_WIDTH CHAR_BIT
+#define ICE_8_BIT_MASK UINT8_MAX
+#define ICE_16_BIT_WIDTH (CHAR_BIT * 2)
+#define ICE_16_BIT_MASK UINT16_MAX
+#define ICE_32_BIT_WIDTH (CHAR_BIT * 4)
+#define ICE_32_BIT_MASK UINT32_MAX
+#define ICE_40_BIT_WIDTH (CHAR_BIT * 5)
+#define ICE_40_BIT_MASK RTE_LEN2MASK(ICE_40_BIT_WIDTH, uint64_t)
+#define ICE_48_BIT_WIDTH (CHAR_BIT * 6)
+#define ICE_48_BIT_MASK RTE_LEN2MASK(ICE_48_BIT_WIDTH, uint64_t)
+
+#define ICE_FLAG_RSS BIT_ULL(0)
+#define ICE_FLAG_DCB BIT_ULL(1)
+#define ICE_FLAG_VMDQ BIT_ULL(2)
+#define ICE_FLAG_SRIOV BIT_ULL(3)
+#define ICE_FLAG_HEADER_SPLIT_DISABLED BIT_ULL(4)
+#define ICE_FLAG_HEADER_SPLIT_ENABLED BIT_ULL(5)
+#define ICE_FLAG_FDIR BIT_ULL(6)
+#define ICE_FLAG_VXLAN BIT_ULL(7)
+#define ICE_FLAG_RSS_AQ_CAPABLE BIT_ULL(8)
+#define ICE_FLAG_VF_MAC_BY_PF BIT_ULL(9)
+#define ICE_FLAG_ALL (ICE_FLAG_RSS | \
+ ICE_FLAG_DCB | \
+ ICE_FLAG_VMDQ | \
+ ICE_FLAG_SRIOV | \
+ ICE_FLAG_HEADER_SPLIT_DISABLED | \
+ ICE_FLAG_HEADER_SPLIT_ENABLED | \
+ ICE_FLAG_FDIR | \
+ ICE_FLAG_VXLAN | \
+ ICE_FLAG_RSS_AQ_CAPABLE | \
+ ICE_FLAG_VF_MAC_BY_PF)
+
+#define ICE_RSS_OFFLOAD_ALL ( \
+ ETH_RSS_FRAG_IPV4 | \
+ ETH_RSS_NONFRAG_IPV4_TCP | \
+ ETH_RSS_NONFRAG_IPV4_UDP | \
+ ETH_RSS_NONFRAG_IPV4_SCTP | \
+ ETH_RSS_NONFRAG_IPV4_OTHER | \
+ ETH_RSS_FRAG_IPV6 | \
+ ETH_RSS_NONFRAG_IPV6_TCP | \
+ ETH_RSS_NONFRAG_IPV6_UDP | \
+ ETH_RSS_NONFRAG_IPV6_SCTP | \
+ ETH_RSS_NONFRAG_IPV6_OTHER | \
+ ETH_RSS_L2_PAYLOAD)
+
+struct ice_adapter;
+
+/**
+ * MAC filter structure
+ */
+struct ice_mac_filter_info {
+ struct ether_addr mac_addr;
+};
+
+TAILQ_HEAD(ice_mac_filter_list, ice_mac_filter);
+
+/* MAC filter list structure */
+struct ice_mac_filter {
+ TAILQ_ENTRY(ice_mac_filter) next;
+ struct ice_mac_filter_info mac_info;
+};
+
+/**
+ * VLAN filter structure
+ */
+struct ice_vlan_filter_info {
+ uint16_t vlan_id;
+};
+
+TAILQ_HEAD(ice_vlan_filter_list, ice_vlan_filter);
+
+/* VLAN filter list structure */
+struct ice_vlan_filter {
+ TAILQ_ENTRY(ice_vlan_filter) next;
+ struct ice_vlan_filter_info vlan_info;
+};
+
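+/* A contiguous range [base, base + len) tracked by a resource pool */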
+struct pool_entry {
+ LIST_ENTRY(pool_entry) next;
+ uint16_t base;
+ uint16_t len;
+};
+
+LIST_HEAD(res_list, pool_entry);
+
+struct ice_res_pool_info {
+ uint32_t base; /* Resource start index */
+ uint32_t num_alloc; /* Allocated resource number */
+ uint32_t num_free; /* Total available resource number */
+ struct res_list alloc_list; /* Allocated resource list */
+ struct res_list free_list; /* Available resource list */
+};
+
+TAILQ_HEAD(ice_vsi_list_head, ice_vsi_list);
+
+struct ice_vsi;
+
+/* VSI list structure */
+struct ice_vsi_list {
+ TAILQ_ENTRY(ice_vsi_list) list;
+ struct ice_vsi *vsi;
+};
+
+struct ice_rx_queue;
+struct ice_tx_queue;
+
+/**
+ * Structure that defines a VSI, associated with an adapter.
+ */
+struct ice_vsi {
+ struct ice_adapter *adapter; /* Backreference to associated adapter */
+ struct ice_aqc_vsi_props info; /* VSI properties */
+ /**
+ * When the driver is loaded, only a default main VSI exists. When a
+ * new VSI needs to be added, the HW needs to know the layout in
+ * which the VSIs are organized. Besides that, a VSI is an element
+ * that can't switch packets by itself; a new component, a VEB, has
+ * to be added to perform the switching. So a new VSI needs to
+ * specify its uplink VSI (parent VSI) before it is created. The
+ * uplink VSI will check whether it already has a VEB to switch
+ * packets; if not, it will try to create one. Then the uplink VSI
+ * will move the new VSI into its sib_vsi_list to manage all the
+ * downlink VSIs.
+ * sib_vsi_list: the VSI list that shares the same uplink VSI.
+ * parent_vsi : the uplink VSI. It's NULL for the main VSI.
+ * veb : the VEB associated with the VSI.
+ */
+ struct ice_vsi_list sib_vsi_list; /* sibling vsi list */
+ struct ice_vsi *parent_vsi;
+ enum ice_vsi_type type; /* VSI types */
+ uint16_t vlan_num; /* Total VLAN number */
+ uint16_t mac_num; /* Total mac number */
+ struct ice_mac_filter_list mac_list; /* macvlan filter list */
+ struct ice_vlan_filter_list vlan_list; /* vlan filter list */
+ uint16_t nb_qps; /* Number of queue pairs VSI can occupy */
+ uint16_t nb_used_qps; /* Number of queue pairs VSI uses */
+ uint16_t max_macaddrs; /* Maximum number of MAC addresses */
+ uint16_t base_queue; /* The first queue index of this VSI */
+ uint16_t vsi_id; /* Hardware Id */
+ uint16_t idx; /* vsi_handle: SW index in hw->vsi_ctx */
+ /* VF number to which the VSI connects, valid when VSI is VF type */
+ uint8_t vf_num;
+ uint16_t msix_intr; /* The MSIX interrupt binds to VSI */
+ uint16_t nb_msix; /* The max number of msix vector */
+ uint8_t enabled_tc; /* The traffic class enabled */
+ uint8_t vlan_anti_spoof_on; /* The VLAN anti-spoofing enabled */
+ uint8_t vlan_filter_on; /* The VLAN filter enabled */
+ /* information about rss configuration */
+ uint32_t rss_key_size;
+ uint32_t rss_lut_size;
+ uint8_t *rss_lut;
+ uint8_t *rss_key;
+ struct ice_eth_stats eth_stats_offset;
+ struct ice_eth_stats eth_stats;
+ bool offset_loaded;
+};
+
+struct ice_pf {
+ struct ice_adapter *adapter; /* The adapter this PF associate to */
+ struct ice_vsi *main_vsi; /* pointer to main VSI structure */
+ /* Index of the next free software VSI. To save effort, the index
+ * is not recycled; it is assumed there are more than enough
+ * indexes available.
+ */
+ uint16_t next_vsi_idx;
+ uint16_t vsis_allocated;
+ uint16_t vsis_unallocated;
+ struct ice_res_pool_info qp_pool; /* Queue pair pool */
+ struct ice_res_pool_info msix_pool; /* MSIX interrupt pool */
+ struct rte_eth_dev_data *dev_data; /* Pointer to the device data */
+ struct ether_addr dev_addr; /* PF device mac address */
+ uint64_t flags; /* PF feature flags */
+ uint16_t hash_lut_size; /* The size of hash lookup table */
+ uint16_t lan_nb_qp_max;
+ uint16_t lan_nb_qps; /* The number of queue pairs of LAN */
+ struct ice_hw_port_stats stats_offset;
+ struct ice_hw_port_stats stats;
+ /* internal packet statistics, it should be excluded from the total */
+ struct ice_eth_stats internal_stats_offset;
+ struct ice_eth_stats internal_stats;
+ bool offset_loaded;
+ bool adapter_stopped;
+};
+
+/**
+ * Structure to store private data for each PF/VF instance.
+ */
+struct ice_adapter {
+ /* Common for both PF and VF */
+ struct ice_hw hw;
+ struct rte_eth_dev *eth_dev;
+ struct ice_pf pf;
+ bool rx_bulk_alloc_allowed;
+ bool tx_simple_allowed;
+ /* ptype mapping table */
+ uint32_t ptype_tbl[ICE_MAX_PKT_TYPE] __rte_cache_min_aligned;
+};
+
+struct ice_vsi_vlan_pvid_info {
+ uint16_t on; /* Enable or disable pvid */
+ union {
+ uint16_t pvid; /* Valid in case 'on' is set to set pvid */
+ struct {
+ /* Valid in case 'on' is cleared. 'tagged' will reject
+ * tagged packets, while 'untagged' will reject
+ * untagged packets.
+ */
+ uint8_t tagged;
+ uint8_t untagged;
+ } reject;
+ } config;
+};
+
+#define ICE_DEV_TO_PCI(eth_dev) \
+ RTE_DEV_TO_PCI((eth_dev)->device)
+
+/* ICE_DEV_PRIVATE_TO */
+#define ICE_DEV_PRIVATE_TO_PF(adapter) \
+ (&((struct ice_adapter *)adapter)->pf)
+#define ICE_DEV_PRIVATE_TO_HW(adapter) \
+ (&((struct ice_adapter *)adapter)->hw)
+#define ICE_DEV_PRIVATE_TO_ADAPTER(adapter) \
+ ((struct ice_adapter *)adapter)
+
+/* ICE_VSI_TO */
+#define ICE_VSI_TO_HW(vsi) \
+ (&(((struct ice_vsi *)vsi)->adapter->hw))
+#define ICE_VSI_TO_PF(vsi) \
+ (&(((struct ice_vsi *)vsi)->adapter->pf))
+#define ICE_VSI_TO_ETH_DEV(vsi) \
+ (((struct ice_vsi *)vsi)->adapter->eth_dev)
+
+/* ICE_PF_TO */
+#define ICE_PF_TO_HW(pf) \
+ (&(((struct ice_pf *)pf)->adapter->hw))
+#define ICE_PF_TO_ADAPTER(pf) \
+ ((struct ice_adapter *)pf->adapter)
+#define ICE_PF_TO_ETH_DEV(pf) \
+ (((struct ice_pf *)pf)->adapter->eth_dev)
+
+static inline int
+ice_align_floor(int n)
+{
+ if (n == 0)
+ return 0;
+ return 1 << (sizeof(n) * CHAR_BIT - 1 - __builtin_clz(n));
+}
+#endif /* _ICE_ETHDEV_H_ */
diff --git a/drivers/net/ice/ice_logs.h b/drivers/net/ice/ice_logs.h
new file mode 100644
index 0000000..de2d573
--- /dev/null
+++ b/drivers/net/ice/ice_logs.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _ICE_LOGS_H_
+#define _ICE_LOGS_H_
+
+extern int ice_logtype_init;
+extern int ice_logtype_driver;
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, ice_logtype_init, "%s(): " fmt "\n", \
+ __func__, ##args)
+
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+
+#ifdef RTE_LIBRTE_ICE_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+ RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_ICE_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+ RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_ICE_DEBUG_TX_FREE
+#define PMD_TX_FREE_LOG(level, fmt, args...) \
+ RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, ice_logtype_driver, "%s(): " fmt, \
+ __func__, ## args)
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+ PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
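+
+/* Usage example (illustrative):
+ * PMD_DRV_LOG(ERR, "Failed to setup queue %u", queue_id);
+ */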
+
+#endif /* _ICE_LOGS_H_ */
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
new file mode 100644
index 0000000..c37dc23
--- /dev/null
+++ b/drivers/net/ice/ice_rxtx.h
@@ -0,0 +1,117 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _ICE_RXTX_H_
+#define _ICE_RXTX_H_
+
+#include "ice_ethdev.h"
+
+#define ICE_ALIGN_RING_DESC 32
+#define ICE_MIN_RING_DESC 64
+#define ICE_MAX_RING_DESC 4096
+#define ICE_DMA_MEM_ALIGN 4096
+#define ICE_RING_BASE_ALIGN 128
+
+#define ICE_RX_MAX_BURST 32
+#define ICE_TX_MAX_BURST 32
+
+#define ICE_CHK_Q_ENA_COUNT 100
+#define ICE_CHK_Q_ENA_INTERVAL_US 100
+
+#ifdef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+#define ice_rx_desc ice_16byte_rx_desc
+#else
+#define ice_rx_desc ice_32byte_rx_desc
+#endif
+
+#define ICE_SUPPORT_CHAIN_NUM 5
+
+struct ice_rx_entry {
+ struct rte_mbuf *mbuf;
+};
+
+struct ice_rx_queue {
+ struct rte_mempool *mp; /* mbuf pool to populate RX ring */
+ volatile union ice_rx_desc *rx_ring;/* RX ring virtual address */
+ uint64_t rx_ring_phys_addr; /* RX ring DMA address */
+ struct ice_rx_entry *sw_ring; /* address of RX soft ring */
+ uint16_t nb_rx_desc; /* number of RX descriptors */
+ uint16_t rx_free_thresh; /* max free RX desc to hold */
+ uint16_t rx_tail; /* current value of tail */
+ uint16_t nb_rx_hold; /* number of held free RX desc */
+ struct rte_mbuf *pkt_first_seg; /**< first segment of current packet */
+ struct rte_mbuf *pkt_last_seg; /**< last segment of current packet */
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+ uint16_t rx_nb_avail; /**< number of staged packets ready */
+ uint16_t rx_next_avail; /**< index of next staged packets */
+ uint16_t rx_free_trigger; /**< triggers rx buffer allocation */
+ struct rte_mbuf fake_mbuf; /**< dummy mbuf */
+ struct rte_mbuf *rx_stage[ICE_RX_MAX_BURST * 2];
+#endif
+ uint16_t port_id; /* device port ID; ethdev port ids are 16 bit */
+ uint8_t crc_len; /* 0 if CRC stripped, 4 otherwise */
+ uint16_t queue_id; /* RX queue index */
+ uint16_t reg_idx; /* RX queue register index */
+ uint8_t drop_en; /* if not 0, set register bit */
+ volatile uint8_t *qrx_tail; /* register address of tail */
+ struct ice_vsi *vsi; /* the VSI this queue belongs to */
+ uint16_t rx_buf_len; /* The packet buffer size */
+ uint16_t rx_hdr_len; /* The header buffer size */
+ uint16_t max_pkt_len; /* Maximum packet length */
+ bool q_set; /* indicate if rx queue has been configured */
+ bool rx_deferred_start; /* don't start this queue in dev start */
+};
+
+struct ice_tx_entry {
+ struct rte_mbuf *mbuf;
+ uint16_t next_id;
+ uint16_t last_id;
+};
+
+struct ice_tx_queue {
+ uint16_t nb_tx_desc; /* number of TX descriptors */
+ uint64_t tx_ring_phys_addr; /* TX ring DMA address */
+ volatile struct ice_tx_desc *tx_ring; /* TX ring virtual address */
+ struct ice_tx_entry *sw_ring; /* virtual address of SW ring */
+ uint16_t tx_tail; /* current value of tail register */
+ volatile uint8_t *qtx_tail; /* register address of tail */
+ uint16_t nb_tx_used; /* number of TX desc used since RS bit set */
+ /* index to last TX descriptor to have been cleaned */
+ uint16_t last_desc_cleaned;
+ /* Total number of TX descriptors ready to be allocated. */
+ uint16_t nb_tx_free;
+ /* Start freeing TX buffers if there are less free descriptors than
+ * this value.
+ */
+ uint16_t tx_free_thresh;
+ /* Number of TX descriptors to use before RS bit is set. */
+ uint16_t tx_rs_thresh;
+ uint8_t pthresh; /**< Prefetch threshold register. */
+ uint8_t hthresh; /**< Host threshold register. */
+ uint8_t wthresh; /**< Write-back threshold reg. */
+ uint16_t port_id; /* Device port identifier. */
+ uint16_t queue_id; /* TX queue index. */
+ uint32_t q_teid; /* TX schedule node id. */
+ uint16_t reg_idx;
+ uint64_t offloads;
+ struct ice_vsi *vsi; /* the VSI this queue belongs to */
+ uint16_t tx_next_dd;
+ uint16_t tx_next_rs;
+ bool tx_deferred_start; /* don't start this queue in dev start */
+ bool q_set; /* indicate if tx queue has been configured */
+};
+
+/* Offload features */
+union ice_tx_offload {
+ uint64_t data;
+ struct {
+ uint64_t l2_len:7; /* L2 (MAC) Header Length. */
+ uint64_t l3_len:9; /* L3 (IP) Header Length. */
+ uint64_t l4_len:8; /* L4 Header Length. */
+ uint64_t tso_segsz:16; /* TCP TSO segment size */
+ uint64_t outer_l2_len:8; /* outer L2 Header Length */
+ uint64_t outer_l3_len:16; /* outer L3 Header Length */
+ };
+};
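+
+/* Typically filled from the mbuf metadata before building a TX context
+ * descriptor, e.g. (illustrative):
+ * union ice_tx_offload tx_offload = { .data = 0 };
+ * tx_offload.l2_len = mb->l2_len;
+ * tx_offload.l3_len = mb->l3_len;
+ * tx_offload.tso_segsz = mb->tso_segsz;
+ */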
+#endif /* _ICE_RXTX_H_ */
diff --git a/drivers/net/ice/rte_pmd_ice_version.map b/drivers/net/ice/rte_pmd_ice_version.map
new file mode 100644
index 0000000..7b23b60
--- /dev/null
+++ b/drivers/net/ice/rte_pmd_ice_version.map
@@ -0,0 +1,4 @@
+DPDK_19.02 {
+
+ local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 5699d97..02e8b6f 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -163,6 +163,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += -lrte_pmd_enic
_LDLIBS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += -lrte_pmd_fm10k
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_FAILSAFE) += -lrte_pmd_failsafe
_LDLIBS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += -lrte_pmd_i40e
+_LDLIBS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += -lrte_pmd_ice
_LDLIBS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += -lrte_pmd_ixgbe
ifeq ($(CONFIG_RTE_LIBRTE_KNI),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KNI) += -lrte_pmd_kni
--
1.9.3
Varghese, Vipin
2018-11-23 07:56:47 UTC
Hi Wenzhuo,

<snipped>
Subject: [dpdk-dev] [PATCH 02/19] net/ice: support device initialization
---
<snipped>
+static int
+ice_dev_init(struct rte_eth_dev *dev)
+{
+ struct rte_pci_device *pci_dev;
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ int ret;
+
+ dev->dev_ops = &ice_eth_dev_ops;
+
+ pci_dev = RTE_DEV_TO_PCI(dev->device);
+
+ rte_eth_copy_pci_info(dev, pci_dev);
+ pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ pf->adapter->eth_dev = dev;
+ pf->dev_data = dev->data;
+ hw->back = pf->adapter;
+ hw->hw_addr = (uint8_t *)pci_dev->mem_resource[0].addr;
+ hw->vendor_id = pci_dev->id.vendor_id;
+ hw->device_id = pci_dev->id.device_id;
+ hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+ hw->subsystem_device_id = pci_dev->id.subsystem_device_id;
+ hw->bus.device = pci_dev->addr.devid;
+ hw->bus.func = pci_dev->addr.function;
+
+ ice_init_controlq_parameter(hw);
+
Do we check whether the process is secondary and the ICE PMD is already initialized? If we do not check, will we run into multi-process reinitialization?
+ ret = ice_init_hw(hw);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to initialize HW");
+ return -EINVAL;
+ }
+
+ PMD_INIT_LOG(INFO, "FW %d.%d.%05d API %d.%d",
+ hw->fw_maj_ver, hw->fw_min_ver, hw->fw_build,
+ hw->api_maj_ver, hw->api_min_ver);
+
+ ice_pf_sw_init(dev);
+ ret = ice_init_mac_address(dev);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to initialize mac address");
+ goto err_init_mac;
+ }
Assuming that in a secondary process this will be skipped if the primary has already initialized it. Is this understanding correct?
+
+ ret = ice_res_pool_init(&pf->msix_pool, 1,
+ hw->func_caps.common_cap.num_msix_vectors - 1);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to init MSIX pool");
+ goto err_msix_pool_init;
+ }
+
+ ret = ice_pf_setup(pf);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to setup PF");
+ goto err_pf_setup;
+ }
Are the pool init and PF setup also skipped for the secondary process if the primary has already done them?
+
+ return 0;
+
+err_pf_setup:
+ ice_res_pool_destroy(&pf->msix_pool);
+err_msix_pool_init:
+ rte_free(dev->data->mac_addrs);
+err_init_mac:
+ ice_sched_cleanup_all(hw);
+ rte_free(hw->port_info);
+ ice_shutdown_all_ctrlq(hw);
+
+ return ret;
+}
+
+static int
+ice_release_vsi(struct ice_vsi *vsi)
+{
+ struct ice_hw *hw;
+ struct ice_vsi_ctx vsi_ctx;
+ enum ice_status ret;
+
+ if (!vsi)
+ return 0;
Should we check whether the process is secondary and the primary still sees the port, and skip the destroy in that case?
+
+ hw = ICE_VSI_TO_HW(vsi);
+
+ memset(&vsi_ctx, 0, sizeof(vsi_ctx));
+
+ vsi_ctx.vsi_num = vsi->vsi_id;
+ vsi_ctx.info = vsi->info;
+ ret = ice_free_vsi(hw, vsi->idx, &vsi_ctx, false, NULL);
+ if (ret != ICE_SUCCESS) {
+ PMD_INIT_LOG(ERR, "Failed to free vsi by aq, %u", vsi->vsi_id);
+ rte_free(vsi);
+ return -1;
+ }
+
+ rte_free(vsi);
+ return 0;
+}
+
+static int
+ice_dev_uninit(struct rte_eth_dev *dev) {
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return 0;
+
Here we have a check for the secondary process, but if the port was added in the secondary and not in the primary, is it valid to return 0?
+ ice_dev_close(dev);
+
+ dev->dev_ops = NULL;
+ dev->rx_pkt_burst = NULL;
+ dev->tx_pkt_burst = NULL;
+
+ rte_free(dev->data->mac_addrs);
+ dev->data->mac_addrs = NULL;
+
+ ice_release_vsi(pf->main_vsi);
+ ice_sched_cleanup_all(hw);
+ rte_free(hw->port_info);
+ ice_shutdown_all_ctrlq(hw);
+
+ return 0;
+}
+
<snipped>
+static void
+ice_dev_close(struct rte_eth_dev *dev)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return;
+
Same as the previous comment: if the port is started in the secondary it will not be seen in the primary. Hence, is it right to return without checking?
+ ice_res_pool_destroy(&pf->msix_pool);
+ ice_release_vsi(pf->main_vsi);
+
+ ice_shutdown_all_ctrlq(hw);
+}
<snipped>
Li, Xiaoyun
2018-11-26 05:09:35 UTC
Hi
Post by Varghese, Vipin
Do we check whether the process is secondary and the ICE PMD is already initialized? If we do not check, will we run into multi-process reinitialization?
Yes, we check. It is in [PATCH 16/19] net/ice: support basic RX/TX. Please see that.
We roughly split the ice code across the different patches, so a given file is not complete within any single patch.
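
For reference, the check in that patch follows the usual DPDK secondary-process pattern. A minimal sketch of the idea (the RX/TX setup helpers named below are illustrative placeholders, not the exact functions from the patch):

static int
ice_dev_init(struct rte_eth_dev *dev)
{
	dev->dev_ops = &ice_eth_dev_ops;

	/* A secondary process attaches to device data already set up by
	 * the primary; it only hooks up its fast-path burst functions
	 * and must not touch the HW again.
	 */
	if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
		ice_set_rx_function(dev); /* illustrative */
		ice_set_tx_function(dev); /* illustrative */
		return 0;
	}

	/* ... full HW initialization, primary process only ... */
	return 0;
}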
<snipped>
Varghese, Vipin
2018-11-26 05:13:42 UTC
Thanks for the update. So we can assume all functions are available in the secondary process; if something is not supported it will return the right error code, and the documentation is updated with the limitation too.
<snipped>
Li, Xiaoyun
2018-11-26 05:19:01 UTC
Yes.
Post by Yang, Qiming
-----Original Message-----
From: Varghese, Vipin
Sent: Monday, November 26, 2018 13:14
Subject: RE: [dpdk-dev] [PATCH 02/19] net/ice: support device initialization
Thanks for the update, so we can assume all functions are available in secondary.
If not supported it will return right error code and documentation is updated for
the limitation too.
Post by Yang, Qiming
-----Original Message-----
From: Li, Xiaoyun
Sent: Monday, November 26, 2018 10:40 AM
Subject: RE: [dpdk-dev] [PATCH 02/19] net/ice: support device initialization
Hi
Post by Varghese, Vipin
Do we check if process is secondary and ICE PMD is already is
initialized? If we do not check will we run to multi process reinitilization?
Yes. We check. It is in [PATCH 16/19] net/ice: support basic RX/TX. Please see that.
We roughly split the ice codes in different patch. So the file in only
one patch is not complete.
<snipped>
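The "right error code" agreed on above shows up throughout the series:
control-path ops simply refuse to run in a secondary process. A compressed
sketch of that convention (not the exact ice code):

#include <rte_eal.h>
#include <rte_errno.h>

static int
ctrl_path_op_sketch(void)
{
	/* Control-path ops are primary-only; a secondary process gets
	 * a well-defined error instead of touching shared state.
	 */
	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
		return -E_RTE_SECONDARY;

	/* ... the actual configuration work would go here ... */
	return 0;
}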
Varghese, Vipin
2018-11-26 05:22:32 UTC
Permalink
Thank you, will wait for updated documentation and release notes.
<snipped>
Wenzhuo Lu
2018-11-23 06:56:03 UTC
Permalink
Normally, when the device is started or stopped, its queues should be
started and stopped as well. Support both in this patch.

Below ops are added,
dev_configure
dev_start
dev_stop
dev_close
dev_reset
rx_queue_start
rx_queue_stop
tx_queue_start
tx_queue_stop
rx_queue_setup
rx_queue_release
tx_queue_setup
tx_queue_release

Signed-off-by: Wenzhuo Lu <***@intel.com>
Signed-off-by: Qiming Yang <***@intel.com>
Signed-off-by: Xiaoyun Li <***@intel.com>
Signed-off-by: Jingjing Wu <***@intel.com>
---
drivers/net/ice/Makefile | 1 +
drivers/net/ice/ice_ethdev.c | 207 ++++++++-
drivers/net/ice/ice_lan_rxtx.c | 951 +++++++++++++++++++++++++++++++++++++++++
drivers/net/ice/ice_rxtx.h | 20 +
4 files changed, 1178 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ice/ice_lan_rxtx.c

diff --git a/drivers/net/ice/Makefile b/drivers/net/ice/Makefile
index 00e1dda..955d719 100644
--- a/drivers/net/ice/Makefile
+++ b/drivers/net/ice/Makefile
@@ -65,6 +65,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_nvm.c
SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_flex_pipe.c

SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_ethdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_lan_rxtx.c

# this lib depends upon:
DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_eal lib/librte_ether
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 7be77cf..bbf1ed4 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -14,7 +14,11 @@
int ice_logtype_init;
int ice_logtype_driver;

+static int ice_dev_configure(struct rte_eth_dev *dev);
+static int ice_dev_start(struct rte_eth_dev *dev);
+static void ice_dev_stop(struct rte_eth_dev *dev);
static void ice_dev_close(struct rte_eth_dev *dev);
+static int ice_dev_reset(struct rte_eth_dev *dev);

static const struct rte_pci_id pci_id_ice_map[] = {
{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -24,7 +28,19 @@
};

static const struct eth_dev_ops ice_eth_dev_ops = {
- .dev_configure = NULL,
+ .dev_configure = ice_dev_configure,
+ .dev_start = ice_dev_start,
+ .dev_stop = ice_dev_stop,
+ .dev_close = ice_dev_close,
+ .dev_reset = ice_dev_reset,
+ .rx_queue_start = ice_rx_queue_start,
+ .rx_queue_stop = ice_rx_queue_stop,
+ .tx_queue_start = ice_tx_queue_start,
+ .tx_queue_stop = ice_tx_queue_stop,
+ .rx_queue_setup = ice_rx_queue_setup,
+ .rx_queue_release = ice_rx_queue_release,
+ .tx_queue_setup = ice_tx_queue_setup,
+ .tx_queue_release = ice_tx_queue_release,
};

static void
@@ -629,6 +645,164 @@
rte_log_set_level(ice_logtype_driver, RTE_LOG_NOTICE);
}

+static int
+ice_dev_configure(struct rte_eth_dev *dev)
+{
+ struct ice_adapter *ad =
+ ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return -E_RTE_SECONDARY;
+
+ /* Initialize to TRUE. If any of the Rx queues doesn't meet the
+ * bulk allocation or vector Rx preconditions, we will reset it.
+ */
+ ad->rx_bulk_alloc_allowed = true;
+ ad->tx_simple_allowed = true;
+
+ return 0;
+}
+
+static int ice_init_rss(struct ice_pf *pf)
+{
+ struct ice_hw *hw = ICE_PF_TO_HW(pf);
+ struct ice_vsi *vsi = pf->main_vsi;
+ struct rte_eth_dev *dev = pf->adapter->eth_dev;
+ struct rte_eth_rss_conf *rss_conf;
+ struct ice_aqc_get_set_rss_keys key;
+ uint16_t i, nb_q;
+ int ret = 0;
+
+ rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf;
+ nb_q = dev->data->nb_rx_queues;
+ vsi->rss_key_size = ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE;
+ vsi->rss_lut_size = hw->func_caps.common_cap.rss_table_size;
+
+ if (!vsi->rss_key)
+ vsi->rss_key = rte_zmalloc("rss_key",
+ vsi->rss_key_size, 0);
+ if (!vsi->rss_lut)
+ vsi->rss_lut = rte_zmalloc("rss_lut",
+ vsi->rss_lut_size, 0);
+
+ /* configure RSS key */
+ if (!rss_conf->rss_key) {
+ /* Calculate the default hash key */
+ for (i = 0; i < vsi->rss_key_size; i++)
+ vsi->rss_key[i] = (uint8_t)rte_rand();
+ } else
+ rte_memcpy(vsi->rss_key, rss_conf->rss_key,
+ RTE_MIN(rss_conf->rss_key_len,
+ vsi->rss_key_size));
+ rte_memcpy(key.standard_rss_key, vsi->rss_key, vsi->rss_key_size);
+ ret = ice_aq_set_rss_key(hw, vsi->vsi_id, &key);
+ if (ret)
+ return -EINVAL;
+
+ /* init RSS LUT table */
+ for (i = 0; i < vsi->rss_lut_size; i++)
+ vsi->rss_lut[i] = i % nb_q;
+
+ ret = ice_aq_set_rss_lut(hw, vsi->vsi_id,
+ ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF,
+ vsi->rss_lut, vsi->rss_lut_size);
+ if (ret)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int
+ice_dev_start(struct rte_eth_dev *dev)
+{
+ struct rte_eth_dev_data *data = dev->data;
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ uint16_t nb_rxq = 0;
+ uint16_t nb_txq, i;
+ int ret;
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return -E_RTE_SECONDARY;
+
+ /* program Tx queues' contex in hardware */
+ for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) {
+ ret = ice_tx_queue_start(dev, nb_txq);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "fail to start Tx queue %u", nb_txq);
+ goto tx_err;
+ }
+ }
+
+ /* program Rx queues' context in hardware */
+ for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) {
+ ret = ice_rx_queue_start(dev, nb_rxq);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "fail to start Rx queue %u", nb_rxq);
+ goto rx_err;
+ }
+ }
+
+ ret = ice_init_rss(pf);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "Failed to enable rss for PF");
+ goto rx_err;
+ }
+
+ ret = ice_aq_set_event_mask(hw, hw->port_info->lport,
+ ((u16)(ICE_AQ_LINK_EVENT_LINK_FAULT |
+ ICE_AQ_LINK_EVENT_PHY_TEMP_ALARM |
+ ICE_AQ_LINK_EVENT_EXCESSIVE_ERRORS |
+ ICE_AQ_LINK_EVENT_SIGNAL_DETECT |
+ ICE_AQ_LINK_EVENT_AN_COMPLETED |
+ ICE_AQ_LINK_EVENT_PORT_TX_SUSPENDED)),
+ NULL);
+ if (ret != ICE_SUCCESS)
+ PMD_DRV_LOG(WARNING, "Fail to set phy mask");
+
+ pf->adapter_stopped = false;
+
+ return 0;
+
+ /* stop the started queues if failed to start all queues */
+rx_err:
+ for (i = 0; i < nb_txq; i++)
+ ice_tx_queue_stop(dev, i);
+tx_err:
+ for (i = 0; i < nb_rxq; i++)
+ ice_rx_queue_stop(dev, i);
+
+ return -EIO;
+}
+
+static void
+ice_dev_stop(struct rte_eth_dev *dev)
+{
+ struct rte_eth_dev_data *data = dev->data;
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ uint16_t i;
+
+ /* avoid stopping again */
+ if (pf->adapter_stopped)
+ return;
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return;
+
+ /* stop and clear all Rx queues */
+ for (i = 0; i < data->nb_rx_queues; i++)
+ ice_rx_queue_stop(dev, i);
+
+ /* stop and clear all Tx queues */
+ for (i = 0; i < data->nb_tx_queues; i++)
+ ice_tx_queue_stop(dev, i);
+
+ /* Clear all queues and release mbufs */
+ ice_clear_queues(dev);
+
+ pf->adapter_stopped = true;
+}
+
static void
ice_dev_close(struct rte_eth_dev *dev)
{
@@ -638,8 +812,39 @@
if (rte_eal_process_type() == RTE_PROC_SECONDARY)
return;

+ ice_dev_stop(dev);
+
+ /* release all queue resource */
+ ice_free_queues(dev);
+
ice_res_pool_destroy(&pf->msix_pool);
ice_release_vsi(pf->main_vsi);

ice_shutdown_all_ctrlq(hw);
}
+
+static int
+ice_dev_reset(struct rte_eth_dev *dev)
+{
+ int ret;
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return -E_RTE_SECONDARY;
+
+ if (dev->data->sriov.active)
+ return -ENOTSUP;
+
+ ret = ice_dev_uninit(dev);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "failed to uninit device, status = %d", ret);
+ return -ENXIO;
+ }
+
+ ret = ice_dev_init(dev);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "failed to init device, status = %d", ret);
+ return -ENXIO;
+ }
+
+ return 0;
+}
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
new file mode 100644
index 0000000..67b89df
--- /dev/null
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -0,0 +1,951 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ethdev_driver.h>
+#include <rte_net.h>
+
+#include "ice_rxtx.h"
+
+#define ICE_TD_CMD ICE_TX_DESC_CMD_EOP
+
+#define ICE_TX_CKSUM_OFFLOAD_MASK ( \
+ PKT_TX_IP_CKSUM | \
+ PKT_TX_L4_MASK | \
+ PKT_TX_TCP_SEG | \
+ PKT_TX_OUTER_IP_CKSUM)
+
+#define ICE_RX_ERR_BITS 0x3f
+
+static enum ice_status
+ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
+{
+ struct ice_vsi *vsi = rxq->vsi;
+ struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+ struct rte_eth_dev *dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
+ struct ice_rlan_ctx rx_ctx;
+ enum ice_status err;
+ uint16_t buf_size, len;
+ struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+ uint32_t regval;
+
+ /**
+ * The kernel driver uses flex descriptor. It sets the register
+ * to flex descriptor mode.
+ * DPDK uses legacy descriptor. It should set the register back
+ * to the default value, then uses legacy descriptor mode.
+ */
+ regval = (0x01 << QRXFLXP_CNTXT_RXDID_PRIO_S) &
+ QRXFLXP_CNTXT_RXDID_PRIO_M;
+ ICE_WRITE_REG(hw, QRXFLXP_CNTXT(rxq->reg_idx), regval);
+
+ /* Set buffer size as the head split is disabled. */
+ buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
+ RTE_PKTMBUF_HEADROOM);
+ rxq->rx_hdr_len = 0;
+ rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
+ len = ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len;
+ rxq->max_pkt_len = RTE_MIN(len,
+ dev->data->dev_conf.rxmode.max_rx_pkt_len);
+
+ if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (rxq->max_pkt_len <= ETHER_MAX_LEN ||
+ rxq->max_pkt_len > ICE_FRAME_SIZE_MAX) {
+ PMD_DRV_LOG(ERR, "maximum packet length must "
+ "be larger than %u and smaller than %u,"
+ "as jumbo frame is enabled",
+ (uint32_t)ETHER_MAX_LEN,
+ (uint32_t)ICE_FRAME_SIZE_MAX);
+ return -EINVAL;
+ }
+ } else {
+ if (rxq->max_pkt_len < ETHER_MIN_LEN ||
+ rxq->max_pkt_len > ETHER_MAX_LEN) {
+ PMD_DRV_LOG(ERR, "maximum packet length must be "
+ "larger than %u and smaller than %u, "
+ "as jumbo frame is disabled",
+ (uint32_t)ETHER_MIN_LEN,
+ (uint32_t)ETHER_MAX_LEN);
+ return -EINVAL;
+ }
+ }
+
+ memset(&rx_ctx, 0, sizeof(rx_ctx));
+
+ rx_ctx.base = rxq->rx_ring_phys_addr / ICE_QUEUE_BASE_ADDR_UNIT;
+ rx_ctx.qlen = rxq->nb_rx_desc;
+ rx_ctx.dbuf = rxq->rx_buf_len >> ICE_RLAN_CTX_DBUF_S;
+ rx_ctx.hbuf = rxq->rx_hdr_len >> ICE_RLAN_CTX_HBUF_S;
+ rx_ctx.dtype = 0; /* No Header Split mode */
+#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+ rx_ctx.dsize = 1; /* 32B descriptors */
+#endif
+ rx_ctx.rxmax = rxq->max_pkt_len;
+ /* TPH: Transaction Layer Packet (TLP) processing hints */
+ rx_ctx.tphrdesc_ena = 1;
+ rx_ctx.tphwdesc_ena = 1;
+ rx_ctx.tphdata_ena = 1;
+ rx_ctx.tphhead_ena = 1;
+ /* Low Receive Queue Threshold defined in 64 descriptors units.
+ * When the number of free descriptors goes below the lrxqthresh,
+ * an immediate interrupt is triggered.
+ */
+ rx_ctx.lrxqthresh = 2;
+ /*default use 32 byte descriptor, vlan tag extract to L2TAG2(1st)*/
+ rx_ctx.l2tsel = 1;
+ rx_ctx.showiv = 0;
+
+ err = ice_clear_rxq_ctx(hw, rxq->reg_idx);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to clear Lan Rx queue (%u) context",
+ rxq->queue_id);
+ return -EINVAL;
+ }
+ err = ice_write_rxq_ctx(hw, &rx_ctx, rxq->reg_idx);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to write Lan Rx queue (%u) context",
+ rxq->queue_id);
+ return -EINVAL;
+ }
+
+ buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
+ RTE_PKTMBUF_HEADROOM);
+
+ /* Check if scattered RX needs to be used. */
+ if ((rxq->max_pkt_len + 2 * ICE_VLAN_TAG_SIZE) > buf_size)
+ dev->data->scattered_rx = 1;
+
+ rxq->qrx_tail = hw->hw_addr + QRX_TAIL(rxq->reg_idx);
+
+ /* Init the Rx tail register*/
+ ICE_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+
+ return 0;
+}
+
+/* Allocate mbufs for all descriptors in rx queue */
+static int
+ice_alloc_rx_queue_mbufs(struct ice_rx_queue *rxq)
+{
+ struct ice_rx_entry *rxe = rxq->sw_ring;
+ uint64_t dma_addr;
+ uint16_t i;
+
+ for (i = 0; i < rxq->nb_rx_desc; i++) {
+ volatile union ice_rx_desc *rxd;
+ struct rte_mbuf *mbuf = rte_mbuf_raw_alloc(rxq->mp);
+
+ if (unlikely(!mbuf)) {
+ PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+ return -ENOMEM;
+ }
+
+ rte_mbuf_refcnt_set(mbuf, 1);
+ mbuf->next = NULL;
+ mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+ mbuf->nb_segs = 1;
+ mbuf->port = rxq->port_id;
+
+ dma_addr =
+ rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+ rxd = &rxq->rx_ring[i];
+ rxd->read.pkt_addr = dma_addr;
+ rxd->read.hdr_addr = 0;
+#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+ rxd->read.rsvd1 = 0;
+ rxd->read.rsvd2 = 0;
+#endif
+ rxe[i].mbuf = mbuf;
+ }
+
+ return 0;
+}
+
+/* Free all mbufs for descriptors in rx queue */
+static void
+ice_rx_queue_release_mbufs(struct ice_rx_queue *rxq)
+{
+ uint16_t i;
+
+ if (!rxq || !rxq->sw_ring) {
+ PMD_DRV_LOG(DEBUG, "Pointer to sw_ring is NULL");
+ return;
+ }
+
+ for (i = 0; i < rxq->nb_rx_desc; i++) {
+ if (rxq->sw_ring[i].mbuf) {
+ rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
+ rxq->sw_ring[i].mbuf = NULL;
+ }
+ }
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+ if (rxq->rx_nb_avail == 0)
+ return;
+ for (i = 0; i < rxq->rx_nb_avail; i++) {
+ struct rte_mbuf *mbuf;
+
+ mbuf = rxq->rx_stage[rxq->rx_next_avail + i];
+ rte_pktmbuf_free_seg(mbuf);
+ }
+ rxq->rx_nb_avail = 0;
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+}
+
+/* turn on or off rx queue
+ * @q_idx: queue index in pf scope
+ * @on: turn on or off the queue
+ */
+static int
+ice_switch_rx_queue(struct ice_hw *hw, uint16_t q_idx, bool on)
+{
+ uint32_t reg;
+ uint16_t j;
+
+ /* QRX_CTRL = QRX_ENA */
+ reg = ICE_READ_REG(hw, QRX_CTRL(q_idx));
+
+ if (on) {
+ if (reg & QRX_CTRL_QENA_STAT_M)
+ return 0; /* Already on, skip */
+ reg |= QRX_CTRL_QENA_REQ_M;
+ } else {
+ if (!(reg & QRX_CTRL_QENA_STAT_M))
+ return 0; /* Already off, skip */
+ reg &= ~QRX_CTRL_QENA_REQ_M;
+ }
+
+ /* Write the register */
+ ICE_WRITE_REG(hw, QRX_CTRL(q_idx), reg);
+ /* Check the result. It is said that QENA_STAT
+ * follows the QENA_REQ by no more than 10 us.
+ * TODO: need to change the wait counter later
+ */
+ for (j = 0; j < ICE_CHK_Q_ENA_COUNT; j++) {
+ rte_delay_us(ICE_CHK_Q_ENA_INTERVAL_US);
+ reg = ICE_READ_REG(hw, QRX_CTRL(q_idx));
+ if (on) {
+ if ((reg & QRX_CTRL_QENA_REQ_M) &&
+ (reg & QRX_CTRL_QENA_STAT_M))
+ break;
+ } else {
+ if (!(reg & QRX_CTRL_QENA_REQ_M) &&
+ !(reg & QRX_CTRL_QENA_STAT_M))
+ break;
+ }
+ }
+
+ /* Check if it is timeout */
+ if (j >= ICE_CHK_Q_ENA_COUNT) {
+ PMD_DRV_LOG(ERR, "Failed to %s rx queue[%u]",
+ (on ? "enable" : "disable"), q_idx);
+ return -ETIMEDOUT;
+ }
+
+ return 0;
+}
+
+static inline int
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+ice_check_rx_burst_bulk_alloc_preconditions(struct ice_rx_queue *rxq)
+#else
+ice_check_rx_burst_bulk_alloc_preconditions(
+ __rte_unused struct ice_rx_queue *rxq)
+#endif
+{
+ int ret = 0;
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+ if (!(rxq->rx_free_thresh >= ICE_RX_MAX_BURST)) {
+ PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+ "rxq->rx_free_thresh=%d, "
+ "ICE_RX_MAX_BURST=%d",
+ rxq->rx_free_thresh, ICE_RX_MAX_BURST);
+ ret = -EINVAL;
+ } else if (!(rxq->rx_free_thresh < rxq->nb_rx_desc)) {
+ PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+ "rxq->rx_free_thresh=%d, "
+ "rxq->nb_rx_desc=%d",
+ rxq->rx_free_thresh, rxq->nb_rx_desc);
+ ret = -EINVAL;
+ } else if (rxq->nb_rx_desc % rxq->rx_free_thresh != 0) {
+ PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+ "rxq->nb_rx_desc=%d, "
+ "rxq->rx_free_thresh=%d",
+ rxq->nb_rx_desc, rxq->rx_free_thresh);
+ ret = -EINVAL;
+ }
+#else
+ ret = -EINVAL;
+#endif
+
+ return ret;
+}
+
+/* reset fields in ice_rx_queue back to default */
+static void
+ice_reset_rx_queue(struct ice_rx_queue *rxq)
+{
+ unsigned i;
+ uint16_t len;
+
+ if (!rxq) {
+ PMD_DRV_LOG(DEBUG, "Pointer to rxq is NULL");
+ return;
+ }
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+ if (ice_check_rx_burst_bulk_alloc_preconditions(rxq) == 0)
+ len = (uint16_t)(rxq->nb_rx_desc + ICE_RX_MAX_BURST);
+ else
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+ len = rxq->nb_rx_desc;
+
+ for (i = 0; i < len * sizeof(union ice_rx_desc); i++)
+ ((volatile char *)rxq->rx_ring)[i] = 0;
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+ memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+ for (i = 0; i < ICE_RX_MAX_BURST; ++i)
+ rxq->sw_ring[rxq->nb_rx_desc + i].mbuf = &rxq->fake_mbuf;
+
+ rxq->rx_nb_avail = 0;
+ rxq->rx_next_avail = 0;
+ rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+
+ rxq->rx_tail = 0;
+ rxq->nb_rx_hold = 0;
+ rxq->pkt_first_seg = NULL;
+ rxq->pkt_last_seg = NULL;
+}
+
+int
+ice_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+ struct ice_rx_queue *rxq;
+ int err;
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return -E_RTE_SECONDARY;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (rx_queue_id >= dev->data->nb_rx_queues) {
+ PMD_DRV_LOG(ERR, "RX queue %u is out of range %u",
+ rx_queue_id, dev->data->nb_rx_queues);
+ return -EINVAL;
+ }
+
+ rxq = dev->data->rx_queues[rx_queue_id];
+ if (!rxq || !rxq->q_set) {
+ PMD_DRV_LOG(ERR, "RX queue %u not available or setup",
+ rx_queue_id);
+ return -EINVAL;
+ }
+
+ err = ice_program_hw_rx_queue(rxq);
+ if (err) {
+ PMD_DRV_LOG(ERR, "fail to program RX queue %u",
+ rx_queue_id);
+ return -EIO;
+ }
+
+ err = ice_alloc_rx_queue_mbufs(rxq);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
+ return -ENOMEM;
+ }
+
+ rte_wmb();
+
+ /* Init the RX tail register. */
+ ICE_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+
+ err = ice_switch_rx_queue(hw, rxq->reg_idx, TRUE);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+ rx_queue_id);
+
+ ice_rx_queue_release_mbufs(rxq);
+ ice_reset_rx_queue(rxq);
+ return -EINVAL;
+ }
+
+ dev->data->rx_queue_state[rx_queue_id] =
+ RTE_ETH_QUEUE_STATE_STARTED;
+
+ return 0;
+}
+
+int
+ice_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+ struct ice_rx_queue *rxq;
+ int err;
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return -E_RTE_SECONDARY;
+
+ if (rx_queue_id < dev->data->nb_rx_queues) {
+ rxq = dev->data->rx_queues[rx_queue_id];
+
+ err = ice_switch_rx_queue(hw, rxq->reg_idx, FALSE);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+ rx_queue_id);
+ return -EINVAL;
+ }
+ ice_rx_queue_release_mbufs(rxq);
+ ice_reset_rx_queue(rxq);
+ dev->data->rx_queue_state[rx_queue_id] =
+ RTE_ETH_QUEUE_STATE_STOPPED;
+ }
+
+ return 0;
+}
+
+int
+ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+ struct ice_tx_queue *txq;
+ int err;
+ struct ice_vsi *vsi;
+ struct ice_hw *hw;
+ struct ice_aqc_add_tx_qgrp txq_elem;
+ struct ice_tlan_ctx tx_ctx;
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return -E_RTE_SECONDARY;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (tx_queue_id >= dev->data->nb_tx_queues) {
+ PMD_DRV_LOG(ERR, "TX queue %u is out of range %u",
+ tx_queue_id, dev->data->nb_tx_queues);
+ return -EINVAL;
+ }
+
+ txq = dev->data->tx_queues[tx_queue_id];
+ if (!txq || !txq->q_set) {
+ PMD_DRV_LOG(ERR, "TX queue %u is not available or setup",
+ tx_queue_id);
+ return -EINVAL;
+ }
+
+ vsi = txq->vsi;
+ hw = ICE_VSI_TO_HW(vsi);
+
+ memset(&txq_elem, 0, sizeof(txq_elem));
+ memset(&tx_ctx, 0, sizeof(tx_ctx));
+ txq_elem.num_txqs = 1;
+ txq_elem.txqs[0].txq_id = rte_cpu_to_le_16(txq->reg_idx);
+
+ tx_ctx.base = txq->tx_ring_phys_addr / ICE_QUEUE_BASE_ADDR_UNIT;
+ tx_ctx.qlen = txq->nb_tx_desc;
+ tx_ctx.pf_num = hw->pf_id;
+ tx_ctx.vmvf_type = ICE_TLAN_CTX_VMVF_TYPE_PF;
+ tx_ctx.src_vsi = vsi->vsi_id;
+ tx_ctx.port_num = hw->port_info->lport;
+ tx_ctx.tso_ena = 1; /* tso enable */
+ tx_ctx.tso_qnum = txq->reg_idx; /* index for tso state structure */
+ tx_ctx.legacy_int = 1; /* Legacy or Advanced Host Interface */
+
+ ice_set_ctx((uint8_t *)&tx_ctx, txq_elem.txqs[0].txq_ctx,
+ ice_tlan_ctx_info);
+
+ txq->qtx_tail = hw->hw_addr + QTX_COMM_DBELL(txq->reg_idx);
+
+ /* Init the Tx tail register*/
+ ICE_PCI_REG_WRITE(txq->qtx_tail, 0);
+
+ err = ice_ena_vsi_txq(hw->port_info, vsi->idx, 0, 1, &txq_elem,
+ sizeof(txq_elem), NULL);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to add lan txq");
+ return -EIO;
+ }
+ /* store the schedule node id */
+ txq->q_teid = txq_elem.txqs[0].q_teid;
+
+ dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
+ return 0;
+}
+
+/* Free all mbufs for descriptors in tx queue */
+static void
+ice_tx_queue_release_mbufs(struct ice_tx_queue *txq)
+{
+ uint16_t i;
+
+ if (!txq || !txq->sw_ring) {
+ PMD_DRV_LOG(DEBUG, "Pointer to txq or sw_ring is NULL");
+ return;
+ }
+
+ for (i = 0; i < txq->nb_tx_desc; i++) {
+ if (txq->sw_ring[i].mbuf) {
+ rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+ txq->sw_ring[i].mbuf = NULL;
+ }
+ }
+}
+
+static void
+ice_reset_tx_queue(struct ice_tx_queue *txq)
+{
+ struct ice_tx_entry *txe;
+ uint16_t i, prev, size;
+
+ if (!txq) {
+ PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
+ return;
+ }
+
+ txe = txq->sw_ring;
+ size = sizeof(struct ice_tx_desc) * txq->nb_tx_desc;
+ for (i = 0; i < size; i++)
+ ((volatile char *)txq->tx_ring)[i] = 0;
+
+ prev = (uint16_t)(txq->nb_tx_desc - 1);
+ for (i = 0; i < txq->nb_tx_desc; i++) {
+ volatile struct ice_tx_desc *txd = &txq->tx_ring[i];
+
+ txd->cmd_type_offset_bsz =
+ rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE);
+ txe[i].mbuf = NULL;
+ txe[i].last_id = i;
+ txe[prev].next_id = i;
+ prev = i;
+ }
+
+ txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
+ txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
+
+ txq->tx_tail = 0;
+ txq->nb_tx_used = 0;
+
+ txq->last_desc_cleaned = (uint16_t)(txq->nb_tx_desc - 1);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_desc - 1);
+}
+
+int
+ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+ struct ice_tx_queue *txq;
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ enum ice_status status;
+ uint16_t q_ids[1];
+ uint32_t q_teids[1];
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return -E_RTE_SECONDARY;
+
+ if (tx_queue_id >= dev->data->nb_tx_queues) {
+ PMD_DRV_LOG(ERR, "TX queue %u is out of range %u",
+ tx_queue_id, dev->data->nb_tx_queues);
+ return -EINVAL;
+ }
+
+ txq = dev->data->tx_queues[tx_queue_id];
+ if (!txq) {
+ PMD_DRV_LOG(ERR, "TX queue %u is not available",
+ tx_queue_id);
+ return -EINVAL;
+ }
+
+ q_ids[0] = txq->reg_idx;
+ q_teids[0] = txq->q_teid;
+
+ status = ice_dis_vsi_txq(hw->port_info, 1, q_ids, q_teids,
+ ICE_NO_RESET, 0, NULL);
+ if (status != ICE_SUCCESS) {
+ PMD_DRV_LOG(DEBUG, "Failed to disble Lan Tx queue");
+ return -EINVAL;
+ }
+
+ ice_tx_queue_release_mbufs(txq);
+ ice_reset_tx_queue(txq);
+ dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+ return 0;
+}
+
+int
+ice_rx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx,
+ uint16_t nb_desc,
+ unsigned int socket_id,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mp)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_adapter *ad =
+ ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ struct ice_vsi *vsi = pf->main_vsi;
+ struct ice_rx_queue *rxq;
+ const struct rte_memzone *rz;
+ uint32_t ring_size;
+ uint16_t len;
+ int use_def_burst_func = 1;
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return -E_RTE_SECONDARY;
+
+ if (nb_desc % ICE_ALIGN_RING_DESC != 0 ||
+ nb_desc > ICE_MAX_RING_DESC ||
+ nb_desc < ICE_MIN_RING_DESC) {
+ PMD_INIT_LOG(ERR, "Number (%u) of receive descriptors is "
+ "invalid", nb_desc);
+ return -EINVAL;
+ }
+
+ /* Free memory if needed */
+ if (dev->data->rx_queues[queue_idx]) {
+ ice_rx_queue_release(dev->data->rx_queues[queue_idx]);
+ dev->data->rx_queues[queue_idx] = NULL;
+ }
+
+ /* Allocate the rx queue data structure */
+ rxq = rte_zmalloc_socket("ice rx queue",
+ sizeof(struct ice_rx_queue),
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!rxq) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for "
+ "rx queue data structure");
+ return -ENOMEM;
+ }
+ rxq->mp = mp;
+ rxq->nb_rx_desc = nb_desc;
+ rxq->rx_free_thresh = rx_conf->rx_free_thresh;
+ rxq->queue_id = queue_idx;
+
+ rxq->reg_idx = vsi->base_queue + queue_idx;
+ rxq->port_id = dev->data->port_id;
+ if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ rxq->crc_len = ETHER_CRC_LEN;
+ else
+ rxq->crc_len = 0;
+
+ rxq->drop_en = rx_conf->rx_drop_en;
+ rxq->vsi = vsi;
+ rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+
+ /* Allocate the maximum number of RX ring hardware descriptors. */
+ len = ICE_MAX_RING_DESC;
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+ /**
+ * Allocating a little more memory because vectorized/bulk_alloc Rx
+ * functions doesn't check boundaries each time.
+ */
+ len += ICE_RX_MAX_BURST;
+#endif
+
+ /* Reserve DMA memory for the maximum number of RX ring descriptors. */
+ ring_size = sizeof(union ice_rx_desc) * len;
+ ring_size = RTE_ALIGN(ring_size, ICE_DMA_MEM_ALIGN);
+ rz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx,
+ ring_size, ICE_RING_BASE_ALIGN,
+ socket_id);
+ if (!rz) {
+ ice_rx_queue_release(rxq);
+ PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX");
+ return -ENOMEM;
+ }
+
+ /* Zero all the descriptors in the ring. */
+ memset(rz->addr, 0, ring_size);
+
+ rxq->rx_ring_phys_addr = rz->phys_addr;
+ rxq->rx_ring = (union ice_rx_desc *)rz->addr;
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+ len = (uint16_t)(nb_desc + ICE_RX_MAX_BURST);
+#else
+ len = nb_desc;
+#endif
+
+ /* Allocate the software ring. */
+ rxq->sw_ring = rte_zmalloc_socket("ice rx sw ring",
+ sizeof(struct ice_rx_entry) * len,
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!rxq->sw_ring) {
+ ice_rx_queue_release(rxq);
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+ return -ENOMEM;
+ }
+
+ ice_reset_rx_queue(rxq);
+ rxq->q_set = TRUE;
+ dev->data->rx_queues[queue_idx] = rxq;
+
+ use_def_burst_func = ice_check_rx_burst_bulk_alloc_preconditions(rxq);
+
+ if (!use_def_burst_func) {
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+ PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
+ "satisfied. Rx Burst Bulk Alloc function will be "
+ "used on port=%d, queue=%d.",
+ rxq->port_id, rxq->queue_id);
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+ } else {
+ PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
+ "not satisfied, Scattered Rx is requested, "
+ "or RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC is "
+ "not enabled on port=%d, queue=%d.",
+ rxq->port_id, rxq->queue_id);
+ ad->rx_bulk_alloc_allowed = false;
+ }
+
+ return 0;
+}
+
+void
+ice_rx_queue_release(void *rxq)
+{
+ struct ice_rx_queue *q = (struct ice_rx_queue *)rxq;
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return;
+
+ if (!q) {
+ PMD_DRV_LOG(DEBUG, "Pointer to rxq is NULL");
+ return;
+ }
+
+ ice_rx_queue_release_mbufs(q);
+ rte_free(q->sw_ring);
+ rte_free(q);
+}
+
+int
+ice_tx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx,
+ uint16_t nb_desc,
+ unsigned int socket_id,
+ const struct rte_eth_txconf *tx_conf)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_vsi *vsi = pf->main_vsi;
+ struct ice_tx_queue *txq;
+ const struct rte_memzone *tz;
+ uint32_t ring_size;
+ uint16_t tx_rs_thresh, tx_free_thresh;
+ uint64_t offloads;
+
+ offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return -E_RTE_SECONDARY;
+
+ if (nb_desc % ICE_ALIGN_RING_DESC != 0 ||
+ nb_desc > ICE_MAX_RING_DESC ||
+ nb_desc < ICE_MIN_RING_DESC) {
+ PMD_INIT_LOG(ERR, "Number (%u) of transmit descriptors is "
+ "invalid", nb_desc);
+ return -EINVAL;
+ }
+
+ /**
+ * The following two parameters control the setting of the RS bit on
+ * transmit descriptors. TX descriptors will have their RS bit set
+ * after txq->tx_rs_thresh descriptors have been used. The TX
+ * descriptor ring will be cleaned after txq->tx_free_thresh
+ * descriptors are used or if the number of descriptors required to
+ * transmit a packet is greater than the number of free TX descriptors.
+ *
+ * The following constraints must be satisfied:
+ * - tx_rs_thresh must be greater than 0.
+ * - tx_rs_thresh must be less than the size of the ring minus 2.
+ * - tx_rs_thresh must be less than or equal to tx_free_thresh.
+ * - tx_rs_thresh must be a divisor of the ring size.
+ * - tx_free_thresh must be greater than 0.
+ * - tx_free_thresh must be less than the size of the ring minus 3.
+ *
+ * One descriptor in the TX ring is used as a sentinel to avoid a H/W
+ * race condition, hence the maximum threshold constraints. When set
+ * to zero use default values.
+ */
+ tx_rs_thresh = (uint16_t)(tx_conf->tx_rs_thresh ?
+ tx_conf->tx_rs_thresh :
+ ICE_DEFAULT_TX_RSBIT_THRESH);
+ tx_free_thresh = (uint16_t)(tx_conf->tx_free_thresh ?
+ tx_conf->tx_free_thresh :
+ ICE_DEFAULT_TX_FREE_THRESH);
+ if (tx_rs_thresh >= (nb_desc - 2)) {
+ PMD_INIT_LOG(ERR, "tx_rs_thresh must be less than the "
+ "number of TX descriptors minus 2. "
+ "(tx_rs_thresh=%u port=%d queue=%d)",
+ (unsigned int)tx_rs_thresh,
+ (int)dev->data->port_id,
+ (int)queue_idx);
+ return -EINVAL;
+ }
+ if (tx_free_thresh >= (nb_desc - 3)) {
+ PMD_INIT_LOG(ERR, "tx_rs_thresh must be less than the "
+ "tx_free_thresh must be less than the "
+ "number of TX descriptors minus 3. "
+ "(tx_free_thresh=%u port=%d queue=%d)",
+ (unsigned int)tx_free_thresh,
+ (int)dev->data->port_id,
+ (int)queue_idx);
+ return -EINVAL;
+ }
+ if (tx_rs_thresh > tx_free_thresh) {
+ PMD_INIT_LOG(ERR, "tx_rs_thresh must be less than or "
+ "equal to tx_free_thresh. (tx_free_thresh=%u"
+ " tx_rs_thresh=%u port=%d queue=%d)",
+ (unsigned int)tx_free_thresh,
+ (unsigned int)tx_rs_thresh,
+ (int)dev->data->port_id,
+ (int)queue_idx);
+ return -EINVAL;
+ }
+ if ((nb_desc % tx_rs_thresh) != 0) {
+ PMD_INIT_LOG(ERR, "tx_rs_thresh must be a divisor of the "
+ "number of TX descriptors. (tx_rs_thresh=%u"
+ " port=%d queue=%d)",
+ (unsigned int)tx_rs_thresh,
+ (int)dev->data->port_id,
+ (int)queue_idx);
+ return -EINVAL;
+ }
+ if ((tx_rs_thresh > 1) && (tx_conf->tx_thresh.wthresh != 0)) {
+ PMD_INIT_LOG(ERR, "TX WTHRESH must be set to 0 if "
+ "tx_rs_thresh is greater than 1. "
+ "(tx_rs_thresh=%u port=%d queue=%d)",
+ (unsigned int)tx_rs_thresh,
+ (int)dev->data->port_id,
+ (int)queue_idx);
+ return -EINVAL;
+ }
+
+ /* Free memory if needed. */
+ if (dev->data->tx_queues[queue_idx]) {
+ ice_tx_queue_release(dev->data->tx_queues[queue_idx]);
+ dev->data->tx_queues[queue_idx] = NULL;
+ }
+
+ /* Allocate the TX queue data structure. */
+ txq = rte_zmalloc_socket("ice tx queue",
+ sizeof(struct ice_tx_queue),
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!txq) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for "
+ "tx queue structure");
+ return -ENOMEM;
+ }
+
+ /* Allocate TX hardware ring descriptors. */
+ ring_size = sizeof(struct ice_tx_desc) * ICE_MAX_RING_DESC;
+ ring_size = RTE_ALIGN(ring_size, ICE_DMA_MEM_ALIGN);
+ tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+ ring_size, ICE_RING_BASE_ALIGN,
+ socket_id);
+ if (!tz) {
+ ice_tx_queue_release(txq);
+ PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX");
+ return -ENOMEM;
+ }
+
+ txq->nb_tx_desc = nb_desc;
+ txq->tx_rs_thresh = tx_rs_thresh;
+ txq->tx_free_thresh = tx_free_thresh;
+ txq->pthresh = tx_conf->tx_thresh.pthresh;
+ txq->hthresh = tx_conf->tx_thresh.hthresh;
+ txq->wthresh = tx_conf->tx_thresh.wthresh;
+ txq->queue_id = queue_idx;
+
+ txq->reg_idx = vsi->base_queue + queue_idx;
+ txq->port_id = dev->data->port_id;
+ txq->offloads = offloads;
+ txq->vsi = vsi;
+ txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+ txq->tx_ring_phys_addr = tz->phys_addr;
+ txq->tx_ring = (struct ice_tx_desc *)tz->addr;
+
+ /* Allocate software ring */
+ txq->sw_ring =
+ rte_zmalloc_socket("ice tx sw ring",
+ sizeof(struct ice_tx_entry) * nb_desc,
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!txq->sw_ring) {
+ ice_tx_queue_release(txq);
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+ return -ENOMEM;
+ }
+
+ ice_reset_tx_queue(txq);
+ txq->q_set = TRUE;
+ dev->data->tx_queues[queue_idx] = txq;
+
+ return 0;
+}
+
+void
+ice_tx_queue_release(void *txq)
+{
+ struct ice_tx_queue *q = (struct ice_tx_queue *)txq;
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return;
+
+ if (!q) {
+ PMD_DRV_LOG(DEBUG, "Pointer to TX queue is NULL");
+ return;
+ }
+
+ ice_tx_queue_release_mbufs(q);
+ rte_free(q->sw_ring);
+ rte_free(q);
+}
+
+void
+ice_clear_queues(struct rte_eth_dev *dev)
+{
+ uint16_t i;
+
+ PMD_INIT_FUNC_TRACE();
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ ice_tx_queue_release_mbufs(dev->data->tx_queues[i]);
+ ice_reset_tx_queue(dev->data->tx_queues[i]);
+ }
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ ice_rx_queue_release_mbufs(dev->data->rx_queues[i]);
+ ice_reset_rx_queue(dev->data->rx_queues[i]);
+ }
+}
+
+void
+ice_free_queues(struct rte_eth_dev *dev)
+{
+ uint16_t i;
+
+ PMD_INIT_FUNC_TRACE();
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ if (!dev->data->rx_queues[i])
+ continue;
+ ice_rx_queue_release(dev->data->rx_queues[i]);
+ dev->data->rx_queues[i] = NULL;
+ }
+ dev->data->nb_rx_queues = 0;
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ if (!dev->data->tx_queues[i])
+ continue;
+ ice_tx_queue_release(dev->data->tx_queues[i]);
+ dev->data->tx_queues[i] = NULL;
+ }
+ dev->data->nb_tx_queues = 0;
+}
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index c37dc23..088a206 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -114,4 +114,24 @@ struct ice_tx_queue {
uint64_t outer_l3_len:16; /* outer L3 Header Length */
};
};
+
+int ice_rx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx,
+ uint16_t nb_desc,
+ unsigned int socket_id,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mp);
+int ice_tx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx,
+ uint16_t nb_desc,
+ unsigned int socket_id,
+ const struct rte_eth_txconf *tx_conf);
+int ice_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int ice_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+void ice_rx_queue_release(void *rxq);
+void ice_tx_queue_release(void *txq);
+void ice_clear_queues(struct rte_eth_dev *dev);
+void ice_free_queues(struct rte_eth_dev *dev);
#endif /* _ICE_RXTX_H_ */
--
1.9.3
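The tx_rs_thresh/tx_free_thresh constraints documented in ice_tx_queue_setup()
above can be sanity-checked in isolation. A small stand-alone sketch with
example values (the real defaults are the ICE_DEFAULT_* macros in ice_rxtx.h):

#include <stdio.h>
#include <stdint.h>

static int
check_tx_thresh(uint16_t nb_desc, uint16_t rs, uint16_t free_th)
{
	if (rs == 0 || free_th == 0)
		return -1;
	if (rs >= (uint16_t)(nb_desc - 2))	/* sentinel + HW race margin */
		return -1;
	if (free_th >= (uint16_t)(nb_desc - 3))
		return -1;
	if (rs > free_th)	/* RS must fire no later than cleanup */
		return -1;
	if (nb_desc % rs != 0)	/* rs must divide the ring size */
		return -1;
	return 0;
}

int
main(void)
{
	/* e.g. a 1024-entry ring with rs=32 and free=32 satisfies all
	 * five constraints, so this prints 0.
	 */
	printf("%d\n", check_tx_thresh(1024, 32, 32));
	return 0;
}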
Rami Rosen
2018-12-03 15:24:01 UTC
Permalink
Hi, Wenzhuo,
Post by Wenzhuo Lu
+static int
+ice_dev_start(struct rte_eth_dev *dev)
+{
+ struct rte_eth_dev_data *data = dev->data;
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ uint16_t nb_rxq = 0;
+ uint16_t nb_txq, i;
+ int ret;
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return -E_RTE_SECONDARY;
+
[Rami Rosen] Suppose the start of a TX queue fails in the loop below. You
go to the **tx_err** label, where you stop
all **RX** queues (which actually were not started at all, since they
are started only later in this method), and then you
return -EIO and the ice_dev_start() method is terminated, without
actually stopping any TX queues which were already started.
So maybe it is better to call ice_tx_queue_stop() in tx_err and
ice_rx_queue_stop() in rx_err.
Apart from that, there is a typo: "Tx queues' contex" should be =>
"Tx queues' context".
Post by Wenzhuo Lu
+ /* program Tx queues' contex in hardware */
+ for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) {
+ ret = ice_tx_queue_start(dev, nb_txq);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "fail to start Tx queue %u", nb_txq);
+ goto tx_err;
+ }
+ }
+
+ /* program Rx queues' context in hardware*/
+ for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) {
+ ret = ice_rx_queue_start(dev, nb_rxq);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "fail to start Rx queue %u", nb_rxq);
+ goto rx_err;
+ }
+ }
...
Post by Wenzhuo Lu
+ /* stop the started queues if failed to start all queues */
+ for (i = 0; i < nb_txq; i++)
+ ice_tx_queue_stop(dev, i);
+ for (i = 0; i < nb_rxq; i++)
+ ice_rx_queue_stop(dev, i);
+
+ return -EIO;
+}
+
Regards,
Rami Rosen
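Concretely, the reordering Rami suggests makes each label undo only what was
actually started, with the fall-through unwinding in reverse start order. A
self-contained sketch with stub helpers (not the committed fix):

#include <errno.h>

static int start_txq(int q) { (void)q; return 0; }
static int start_rxq(int q) { (void)q; return 0; }
static void stop_txq(int q) { (void)q; }
static void stop_rxq(int q) { (void)q; }

static int
dev_start_sketch(int nb_tx_queues, int nb_rx_queues)
{
	int nb_txq, nb_rxq = 0, i;

	for (nb_txq = 0; nb_txq < nb_tx_queues; nb_txq++)
		if (start_txq(nb_txq))
			goto tx_err;	/* only Tx queues are started so far */

	for (nb_rxq = 0; nb_rxq < nb_rx_queues; nb_rxq++)
		if (start_rxq(nb_rxq))
			goto rx_err;	/* Rx failed after all Tx started */

	return 0;

rx_err:
	for (i = 0; i < nb_rxq; i++)
		stop_rxq(i);	/* undo the Rx starts first... */
tx_err:
	for (i = 0; i < nb_txq; i++)
		stop_txq(i);	/* ...then always undo the Tx starts */
	return -EIO;
}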
Rami Rosen
2018-12-03 15:43:57 UTC
Permalink
Hi,

The same comment refers also to V2 of the patch, [PATCH v2 03/20]
net/ice: support device and queue ops

Regards,
Rami Rosen
<snipped>
Lu, Wenzhuo
2018-12-06 02:53:57 UTC
Permalink
Hi Rami,
Post by Yang, Qiming
-----Original Message-----
Sent: Monday, December 3, 2018 11:24 PM
Subject: Re: [dpdk-dev] [PATCH 03/19] net/ice: support device and queue ops
Hi, Wenzhuo,
<snipped>
So maybe it is better to call ice_tx_queue_stop() in tx_err and
ice_rx_queue_stop() in rx_err.
Thanks for the comments. The logic is not good. Will make it better in the next version.
<snipped>
Wenzhuo Lu
2018-11-23 06:56:04 UTC
Permalink
Add ops dev_infos_get.

Signed-off-by: Wenzhuo Lu <***@intel.com>
Signed-off-by: Qiming Yang <***@intel.com>
Signed-off-by: Xiaoyun Li <***@intel.com>
Signed-off-by: Jingjing Wu <***@intel.com>
---
drivers/net/ice/ice_ethdev.c | 90 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 90 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index bbf1ed4..b46f7b2 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -19,6 +19,8 @@
static void ice_dev_stop(struct rte_eth_dev *dev);
static void ice_dev_close(struct rte_eth_dev *dev);
static int ice_dev_reset(struct rte_eth_dev *dev);
+static void ice_dev_info_get(struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *dev_info);

static const struct rte_pci_id pci_id_ice_map[] = {
{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -41,6 +43,7 @@
.rx_queue_release = ice_rx_queue_release,
.tx_queue_setup = ice_tx_queue_setup,
.tx_queue_release = ice_tx_queue_release,
+ .dev_infos_get = ice_dev_info_get,
};

static void
@@ -848,3 +851,90 @@ static int ice_init_rss(struct ice_pf *pf)

return 0;
}
+
+static void
+ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_vsi *vsi = pf->main_vsi;
+ struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
+
+ dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
+ dev_info->max_rx_pktlen = ICE_FRAME_SIZE_MAX;
+ dev_info->max_rx_queues = vsi->nb_qps;
+ dev_info->max_tx_queues = vsi->nb_qps;
+ dev_info->max_mac_addrs = vsi->max_macaddrs;
+ dev_info->max_vfs = pci_dev->max_vfs;
+
+ dev_info->rx_offload_capa =
+ DEV_RX_OFFLOAD_VLAN_STRIP |
+ DEV_RX_OFFLOAD_IPV4_CKSUM |
+ DEV_RX_OFFLOAD_UDP_CKSUM |
+ DEV_RX_OFFLOAD_TCP_CKSUM |
+ DEV_RX_OFFLOAD_QINQ_STRIP |
+ DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ DEV_RX_OFFLOAD_VLAN_EXTEND |
+ DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev_info->tx_offload_capa =
+ DEV_TX_OFFLOAD_VLAN_INSERT |
+ DEV_TX_OFFLOAD_QINQ_INSERT |
+ DEV_TX_OFFLOAD_IPV4_CKSUM |
+ DEV_TX_OFFLOAD_UDP_CKSUM |
+ DEV_TX_OFFLOAD_TCP_CKSUM |
+ DEV_TX_OFFLOAD_SCTP_CKSUM |
+ DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ DEV_TX_OFFLOAD_TCP_TSO;
+ dev_info->rx_queue_offload_capa = 0;
+ dev_info->tx_queue_offload_capa = 0;
+
+ dev_info->reta_size = hw->func_caps.common_cap.rss_table_size;
+ dev_info->hash_key_size = (VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
+ dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
+
+ dev_info->default_rxconf = (struct rte_eth_rxconf) {
+ .rx_thresh = {
+ .pthresh = ICE_DEFAULT_RX_PTHRESH,
+ .hthresh = ICE_DEFAULT_RX_HTHRESH,
+ .wthresh = ICE_DEFAULT_RX_WTHRESH,
+ },
+ .rx_free_thresh = ICE_DEFAULT_RX_FREE_THRESH,
+ .rx_drop_en = 0,
+ .offloads = 0,
+ };
+
+ dev_info->default_txconf = (struct rte_eth_txconf) {
+ .tx_thresh = {
+ .pthresh = ICE_DEFAULT_TX_PTHRESH,
+ .hthresh = ICE_DEFAULT_TX_HTHRESH,
+ .wthresh = ICE_DEFAULT_TX_WTHRESH,
+ },
+ .tx_free_thresh = ICE_DEFAULT_TX_FREE_THRESH,
+ .tx_rs_thresh = ICE_DEFAULT_TX_RSBIT_THRESH,
+ .offloads = 0,
+ };
+
+ dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = ICE_MAX_RING_DESC,
+ .nb_min = ICE_MIN_RING_DESC,
+ .nb_align = ICE_ALIGN_RING_DESC,
+ };
+
+ dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = ICE_MAX_RING_DESC,
+ .nb_min = ICE_MIN_RING_DESC,
+ .nb_align = ICE_ALIGN_RING_DESC,
+ };
+
+ dev_info->speed_capa = ETH_LINK_SPEED_100G;
+
+ dev_info->nb_rx_queues = dev->data->nb_rx_queues;
+ dev_info->nb_tx_queues = dev->data->nb_tx_queues;
+
+ dev_info->default_rxportconf.burst_size = 32;
+ dev_info->default_txportconf.burst_size = 32;
+ dev_info->default_rxportconf.nb_queues = 1;
+ dev_info->default_txportconf.nb_queues = 1;
+ dev_info->default_rxportconf.ring_size = 1024;
+ dev_info->default_txportconf.ring_size = 1024;
+}
--
1.9.3
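From the application side, the fields filled in by this op are consumed
through rte_eth_dev_info_get() before configuring queues. A minimal usage
sketch (the port is assumed to be already probed; in the 18.11-era API this
call returns void):

#include <rte_ethdev.h>

static uint64_t
pick_tx_offloads(uint16_t port_id)
{
	struct rte_eth_dev_info info;
	uint64_t offloads = 0;

	rte_eth_dev_info_get(port_id, &info);

	/* Request an offload only if the PMD advertises it. */
	if (info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_TSO)
		offloads |= DEV_TX_OFFLOAD_TCP_TSO;
	if (info.tx_offload_capa & DEV_TX_OFFLOAD_IPV4_CKSUM)
		offloads |= DEV_TX_OFFLOAD_IPV4_CKSUM;

	return offloads;
}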
Wenzhuo Lu
2018-11-23 06:56:05 UTC
Permalink
Add ops dev_supported_ptypes_get.

Signed-off-by: Wei Zhao <***@intel.com>
Signed-off-by: Wenzhuo Lu <***@intel.com>
---
drivers/net/ice/ice_ethdev.c | 2 +
drivers/net/ice/ice_lan_rxtx.c | 601 +++++++++++++++++++++++++++++++++++++++++
drivers/net/ice/ice_rxtx.h | 2 +
3 files changed, 605 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index b46f7b2..61ae9ca 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -44,6 +44,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
.tx_queue_setup = ice_tx_queue_setup,
.tx_queue_release = ice_tx_queue_release,
.dev_infos_get = ice_dev_info_get,
+ .dev_supported_ptypes_get = ice_dev_supported_ptypes_get,
};

static void
@@ -492,6 +493,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,

dev->dev_ops = &ice_eth_dev_ops;

+ ice_set_default_ptype_table(dev);
pci_dev = RTE_DEV_TO_PCI(dev->device);

rte_eth_copy_pci_info(dev, pci_dev);
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index 67b89df..a9e8c03 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -908,6 +908,42 @@
rte_free(q);
}

+const uint32_t *
+ice_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+{
+ static const uint32_t ptypes[] = {
+ /* refers to ice_get_default_pkt_type() */
+ RTE_PTYPE_L2_ETHER,
+ RTE_PTYPE_L2_ETHER_LLDP,
+ RTE_PTYPE_L2_ETHER_ARP,
+ RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+ RTE_PTYPE_L3_IPV6_EXT_UNKNOWN,
+ RTE_PTYPE_L4_FRAG,
+ RTE_PTYPE_L4_ICMP,
+ RTE_PTYPE_L4_NONFRAG,
+ RTE_PTYPE_L4_SCTP,
+ RTE_PTYPE_L4_TCP,
+ RTE_PTYPE_L4_UDP,
+ RTE_PTYPE_TUNNEL_GRENAT,
+ RTE_PTYPE_TUNNEL_IP,
+ RTE_PTYPE_INNER_L2_ETHER,
+ RTE_PTYPE_INNER_L2_ETHER_VLAN,
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN,
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN,
+ RTE_PTYPE_INNER_L4_FRAG,
+ RTE_PTYPE_INNER_L4_ICMP,
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ RTE_PTYPE_INNER_L4_SCTP,
+ RTE_PTYPE_INNER_L4_TCP,
+ RTE_PTYPE_INNER_L4_UDP,
+ RTE_PTYPE_TUNNEL_GTPC,
+ RTE_PTYPE_TUNNEL_GTPU,
+ RTE_PTYPE_UNKNOWN
+ };
+
+ return ptypes;
+}
+
void
ice_clear_queues(struct rte_eth_dev *dev)
{
@@ -949,3 +985,568 @@
}
dev->data->nb_tx_queues = 0;
}
+
+/* The hardware datasheet gives more details on what each value means.
+ *
+ * @note: fix ice_dev_supported_ptypes_get() if any change here.
+ */
+static inline uint32_t
+ice_get_default_pkt_type(uint16_t ptype)
+{
+ static const uint32_t type_table[ICE_MAX_PKT_TYPE]
+ __rte_cache_aligned = {
+ /* L2 types */
+ /* [0] reserved */
+ [1] = RTE_PTYPE_L2_ETHER,
+ /* [2] - [5] reserved */
+ [6] = RTE_PTYPE_L2_ETHER_LLDP,
+ /* [7] - [10] reserved */
+ [11] = RTE_PTYPE_L2_ETHER_ARP,
+ /* [12] - [21] reserved */
+
+ /* Non tunneled IPv4 */
+ [22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [25] reserved */
+ [26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv4 --> IPv4 */
+ [29] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [30] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [31] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [32] reserved */
+ [33] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [34] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [35] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> IPv6 */
+ [36] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [37] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [38] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [39] reserved */
+ [40] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [41] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [42] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN */
+ [43] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> IPv4 */
+ [44] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [45] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [46] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [47] reserved */
+ [48] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [49] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [50] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> IPv6 */
+ [51] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [52] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [53] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [54] reserved */
+ [55] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [56] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [57] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC */
+ [58] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+ [59] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [60] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [61] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [62] reserved */
+ [63] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [64] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [65] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+ [66] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [67] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [68] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [69] reserved */
+ [70] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [71] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [72] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+ [73] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+ [74] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [75] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [76] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [77] reserved */
+ [78] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [79] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [80] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+ [81] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [82] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [83] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [84] reserved */
+ [85] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [86] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [87] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* Non tunneled IPv6 */
+ [88] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [89] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [90] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [91] reserved */
+ [92] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [93] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [94] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv6 --> IPv4 */
+ [95] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [96] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [97] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [98] reserved */
+ [99] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [100] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [101] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> IPv6 */
+ [102] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [103] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [104] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [105] reserved */
+ [106] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [107] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [108] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN */
+ [109] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> IPv4 */
+ [110] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [111] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [112] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [113] reserved */
+ [114] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [115] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [116] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> IPv6 */
+ [117] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [118] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [119] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [120] reserved */
+ [121] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [122] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [123] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC */
+ [124] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+ [125] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [126] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [127] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [128] reserved */
+ [129] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [130] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [131] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+ [132] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [133] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [134] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [135] reserved */
+ [136] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [137] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [138] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+ [139] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+ [140] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [141] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [142] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [143] reserved */
+ [144] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [145] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [146] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+ [147] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [148] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [149] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [150] reserved */
+ [151] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [152] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [153] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+ /* [154] - [255] reserved */
+ [256] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GTPC,
+ [257] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GTPC,
+ [258] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GTPU,
+ [259] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GTPU,
+ /* [260] - [263] reserved */
+ [264] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GTPC,
+ [265] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GTPC,
+ [266] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GTPU,
+ [267] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GTPU,
+
+ /* All others reserved */
+ };
+
+ return type_table[ptype];
+}
+
+void __attribute__((cold))
+ice_set_default_ptype_table(struct rte_eth_dev *dev)
+{
+ struct ice_adapter *ad =
+ ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ int i;
+
+ for (i = 0; i < ICE_MAX_PKT_TYPE; i++)
+ ad->ptype_tbl[i] = ice_get_default_pkt_type(i);
+}
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 088a206..871646f 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -134,4 +134,6 @@ int ice_tx_queue_setup(struct rte_eth_dev *dev,
void ice_tx_queue_release(void *txq);
void ice_clear_queues(struct rte_eth_dev *dev);
void ice_free_queues(struct rte_eth_dev *dev);
+void ice_set_default_ptype_table(struct rte_eth_dev *dev);
+const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
#endif /* _ICE_RXTX_H_ */
--
1.9.3
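Before the next patch, a note on how the table above is consumed: the RX path added later in this series reads the hardware ptype index from each descriptor and translates it with a single table lookup. A minimal, hypothetical sketch (the helper, the header name, and the already-decoded index are assumptions for illustration, not code from the series):

    #include <rte_mbuf.h>
    #include "ice_ethdev.h" /* struct ice_adapter, ICE_MAX_PKT_TYPE (assumed location) */

    /* 'ad' is the ice_adapter whose ptype_tbl was filled by
     * ice_set_default_ptype_table(); 'hw_ptype' is the ptype index
     * already decoded from the RX descriptor (decoding not shown). */
    static inline void
    example_set_packet_type(struct rte_mbuf *mb, const struct ice_adapter *ad,
                            uint16_t hw_ptype)
    {
            if (hw_ptype < ICE_MAX_PKT_TYPE)
                    mb->packet_type = ad->ptype_tbl[hw_ptype];
            else
                    mb->packet_type = RTE_PTYPE_UNKNOWN;
    }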
Wenzhuo Lu
2018-11-23 06:56:06 UTC
Add ops link_update.

Signed-off-by: Wenzhuo Lu <***@intel.com>
Signed-off-by: Qiming Yang <***@intel.com>
Signed-off-by: Xiaoyun Li <***@intel.com>
Signed-off-by: Jingjing Wu <***@intel.com>
---
drivers/net/ice/ice_ethdev.c | 335 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 335 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 61ae9ca..75246da 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -21,6 +21,8 @@
static int ice_dev_reset(struct rte_eth_dev *dev);
static void ice_dev_info_get(struct rte_eth_dev *dev,
struct rte_eth_dev_info *dev_info);
+static int ice_link_update(struct rte_eth_dev *dev,
+ int wait_to_complete);

static const struct rte_pci_id pci_id_ice_map[] = {
{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -45,6 +47,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
.tx_queue_release = ice_tx_queue_release,
.dev_infos_get = ice_dev_info_get,
.dev_supported_ptypes_get = ice_dev_supported_ptypes_get,
+ .link_update = ice_link_update,
};

static void
@@ -330,6 +333,187 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
return 0;
}

+/* Enable IRQ0 */
+static void
+ice_pf_enable_irq0(struct ice_hw *hw)
+{
+ /* reset the registers */
+ ICE_WRITE_REG(hw, PFINT_OICR_ENA, 0);
+ ICE_READ_REG(hw, PFINT_OICR);
+
+#ifdef ICE_LSE_SPT
+ ICE_WRITE_REG(hw, PFINT_OICR_ENA,
+ (uint32_t)(PFINT_OICR_ENA_INT_ENA_M &
+ (~PFINT_OICR_LINK_STAT_CHANGE_M)));
+
+ ICE_WRITE_REG(hw, PFINT_OICR_CTL,
+ (0 & PFINT_OICR_CTL_MSIX_INDX_M) |
+ ((0 << PFINT_OICR_CTL_ITR_INDX_S) &
+ PFINT_OICR_CTL_ITR_INDX_M) |
+ PFINT_OICR_CTL_CAUSE_ENA_M);
+
+ ICE_WRITE_REG(hw, PFINT_FW_CTL,
+ (0 & PFINT_FW_CTL_MSIX_INDX_M) |
+ ((0 << PFINT_FW_CTL_ITR_INDX_S) &
+ PFINT_FW_CTL_ITR_INDX_M) |
+ PFINT_FW_CTL_CAUSE_ENA_M);
+#else
+ ICE_WRITE_REG(hw, PFINT_OICR_ENA, PFINT_OICR_ENA_INT_ENA_M);
+#endif
+
+ ICE_WRITE_REG(hw, GLINT_DYN_CTL(0),
+ GLINT_DYN_CTL_INTENA_M |
+ GLINT_DYN_CTL_CLEARPBA_M |
+ GLINT_DYN_CTL_ITR_INDX_M);
+
+ ice_flush(hw);
+}
+
+/* Disable IRQ0 */
+static void
+ice_pf_disable_irq0(struct ice_hw *hw)
+{
+ /* Disable all interrupt types */
+ ICE_WRITE_REG(hw, GLINT_DYN_CTL(0), GLINT_DYN_CTL_WB_ON_ITR_M);
+ ice_flush(hw);
+}
+
+#ifdef ICE_LSE_SPT
+static void
+ice_handle_aq_msg(struct rte_eth_dev *dev)
+{
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_ctl_q_info *cq = &hw->adminq;
+ struct ice_rq_event_info event;
+ uint16_t pending, opcode;
+ int ret;
+
+ event.buf_len = ICE_AQ_MAX_BUF_LEN;
+ event.msg_buf = rte_zmalloc("msg_buffer", event.buf_len, 0);
+ if (!event.msg_buf) {
+ PMD_DRV_LOG(ERR, "Failed to allocate mem");
+ return;
+ }
+
+ pending = 1;
+ while (pending) {
+ ret = ice_clean_rq_elem(hw, cq, &event, &pending);
+
+ if (ret != ICE_SUCCESS) {
+ PMD_DRV_LOG(INFO,
+ "Failed to read msg from AdminQ, "
+ "adminq_err: %u",
+ hw->adminq.sq_last_status);
+ break;
+ }
+ opcode = rte_le_to_cpu_16(event.desc.opcode);
+
+ switch (opcode) {
+ case ice_aqc_opc_get_link_status:
+ ret = ice_link_update(dev, 0);
+ if (!ret)
+ _rte_eth_dev_callback_process
+ (dev, RTE_ETH_EVENT_INTR_LSC, NULL);
+ break;
+ default:
+ PMD_DRV_LOG(DEBUG, "Request %u is not supported yet",
+ opcode);
+ break;
+ }
+ }
+ rte_free(event.msg_buf);
+}
+#endif
+
+/**
+ * Interrupt handler triggered by NIC for handling
+ * specific interrupt.
+ *
+ * @param handle
+ * Pointer to interrupt handle.
+ * @param param
+ * The address of the parameter (struct rte_eth_dev *) registered before.
+ *
+ * @return
+ * void
+ */
+static void
+ice_interrupt_handler(void *param)
+{
+ struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ uint32_t oicr;
+ uint32_t reg;
+ uint8_t pf_num;
+ uint8_t event;
+ uint16_t queue;
+#ifdef ICE_LSE_SPT
+ uint32_t int_fw_ctl;
+#endif
+
+ /* Disable interrupt */
+ ice_pf_disable_irq0(hw);
+
+ /* read out interrupt causes */
+ oicr = ICE_READ_REG(hw, PFINT_OICR);
+#ifdef ICE_LSE_SPT
+ int_fw_ctl = ICE_READ_REG(hw, PFINT_FW_CTL);
+#endif
+
+ /* No interrupt event indicated */
+ if (!(oicr & PFINT_OICR_INTEVENT_M)) {
+ PMD_DRV_LOG(INFO, "No interrupt event");
+ goto done;
+ }
+
+#ifdef ICE_LSE_SPT
+ if (int_fw_ctl & PFINT_FW_CTL_INTEVENT_M) {
+ PMD_DRV_LOG(INFO, "FW_CTL: link state change event");
+ ice_handle_aq_msg(dev);
+ }
+#else
+ if (oicr & PFINT_OICR_LINK_STAT_CHANGE_M) {
+ PMD_DRV_LOG(INFO, "OICR: link state change event");
+ ice_link_update(dev, 0);
+ }
+#endif
+
+ if (oicr & PFINT_OICR_MAL_DETECT_M) {
+ PMD_DRV_LOG(WARNING, "OICR: MDD event");
+ reg = ICE_READ_REG(hw, GL_MDET_TX_PQM);
+ if (reg & GL_MDET_TX_PQM_VALID_M) {
+ pf_num = (reg & GL_MDET_TX_PQM_PF_NUM_M) >>
+ GL_MDET_TX_PQM_PF_NUM_S;
+ event = (reg & GL_MDET_TX_PQM_MAL_TYPE_M) >>
+ GL_MDET_TX_PQM_MAL_TYPE_S;
+ queue = (reg & GL_MDET_TX_PQM_QNUM_M) >>
+ GL_MDET_TX_PQM_QNUM_S;
+
+ PMD_DRV_LOG(WARNING, "Malicious Driver Detection event "
+ "%d by PQM on TX queue %d PF# %d",
+ event, queue, pf_num);
+ }
+
+ reg = ICE_READ_REG(hw, GL_MDET_TX_TCLAN);
+ if (reg & GL_MDET_TX_TCLAN_VALID_M) {
+ pf_num = (reg & GL_MDET_TX_TCLAN_PF_NUM_M) >>
+ GL_MDET_TX_TCLAN_PF_NUM_S;
+ event = (reg & GL_MDET_TX_TCLAN_MAL_TYPE_M) >>
+ GL_MDET_TX_TCLAN_MAL_TYPE_S;
+ queue = (reg & GL_MDET_TX_TCLAN_QNUM_M) >>
+ GL_MDET_TX_TCLAN_QNUM_S;
+
+ PMD_DRV_LOG(WARNING, "Malicious Driver Detection event "
+ "%d by TCLAN on TX queue %d PF# %d",
+ event, queue, pf_num);
+ }
+ }
+done:
+ /* Enable interrupt */
+ ice_pf_enable_irq0(hw);
+ rte_intr_enable(dev->intr_handle);
+}
+
/* Initialize SW parameters of PF */
static int
ice_pf_sw_init(struct rte_eth_dev *dev)
@@ -487,6 +671,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
ice_dev_init(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev;
+ struct rte_intr_handle *intr_handle;
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
int ret;
@@ -495,6 +680,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,

ice_set_default_ptype_table(dev);
pci_dev = RTE_DEV_TO_PCI(dev->device);
+ intr_handle = &pci_dev->intr_handle;

rte_eth_copy_pci_info(dev, pci_dev);
pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -541,6 +727,15 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
goto err_pf_setup;
}

+ /* register callback func to eal lib */
+ rte_intr_callback_register(intr_handle,
+ ice_interrupt_handler, dev);
+
+ ice_pf_enable_irq0(hw);
+
+ /* enable uio intr after callback register */
+ rte_intr_enable(intr_handle);
+
return 0;

err_pf_setup:
@@ -587,6 +782,8 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
{
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+ struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;

if (rte_eal_process_type() == RTE_PROC_SECONDARY)
return 0;
@@ -600,6 +797,13 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
rte_free(dev->data->mac_addrs);
dev->data->mac_addrs = NULL;

+ /* disable uio intr before callback unregister */
+ rte_intr_disable(intr_handle);
+
+ /* unregister callback func from eal lib */
+ rte_intr_callback_unregister(intr_handle,
+ ice_interrupt_handler, dev);
+
ice_release_vsi(pf->main_vsi);
ice_sched_cleanup_all(hw);
rte_free(hw->port_info);
@@ -765,6 +969,9 @@ static int ice_init_rss(struct ice_pf *pf)
if (ret != ICE_SUCCESS)
PMD_DRV_LOG(WARNING, "Fail to set phy mask");

+ /* Call the get_link_info AQ command to enable/disable LSE */
+ ice_link_update(dev, 0);
+
pf->adapter_stopped = false;

return 0;
@@ -785,6 +992,8 @@ static int ice_init_rss(struct ice_pf *pf)
{
struct rte_eth_dev_data *data = dev->data;
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+ struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
uint16_t i;

/* avoid stopping again */
@@ -805,6 +1014,13 @@ static int ice_init_rss(struct ice_pf *pf)
/* Clear all queues and release mbufs */
ice_clear_queues(dev);

+ /* Clean datapath event and queue/vec mapping */
+ rte_intr_efd_disable(intr_handle);
+ if (intr_handle->intr_vec) {
+ rte_free(intr_handle->intr_vec);
+ intr_handle->intr_vec = NULL;
+ }
+
pf->adapter_stopped = true;
}

@@ -940,3 +1156,122 @@ static int ice_init_rss(struct ice_pf *pf)
dev_info->default_rxportconf.ring_size = 1024;
dev_info->default_txportconf.ring_size = 1024;
}
+
+static inline int
+ice_atomic_read_link_status(struct rte_eth_dev *dev,
+ struct rte_eth_link *link)
+{
+ struct rte_eth_link *dst = link;
+ struct rte_eth_link *src = &dev->data->dev_link;
+
+ if (rte_atomic64_cmpset((uint64_t *)dst, *(uint64_t *)dst,
+ *(uint64_t *)src) == 0)
+ return -1;
+
+ return 0;
+}
+
+static inline int
+ice_atomic_write_link_status(struct rte_eth_dev *dev,
+ struct rte_eth_link *link)
+{
+ struct rte_eth_link *dst = &dev->data->dev_link;
+ struct rte_eth_link *src = link;
+
+ if (rte_atomic64_cmpset((uint64_t *)dst, *(uint64_t *)dst,
+ *(uint64_t *)src) == 0)
+ return -1;
+
+ return 0;
+}
+
+static int
+ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
+{
+#define CHECK_INTERVAL 100 /* 100ms */
+#define MAX_REPEAT_TIME 10 /* 1s (10 * 100ms) in total */
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_link_status link_status;
+ struct rte_eth_link link, old;
+ int status;
+ unsigned int rep_cnt = MAX_REPEAT_TIME;
+ bool enable_lse = dev->data->dev_conf.intr_conf.lsc ? true : false;
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return -E_RTE_SECONDARY;
+
+ memset(&link, 0, sizeof(link));
+ memset(&old, 0, sizeof(old));
+ memset(&link_status, 0, sizeof(link_status));
+ ice_atomic_read_link_status(dev, &old);
+
+ do {
+ /* Get link status information from hardware */
+ status = ice_aq_get_link_info(hw->port_info, enable_lse,
+ &link_status, NULL);
+ if (status != ICE_SUCCESS) {
+ link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ PMD_DRV_LOG(ERR, "Failed to get link info");
+ goto out;
+ }
+
+ link.link_status = link_status.link_info & ICE_AQ_LINK_UP;
+ if (!wait_to_complete || link.link_status)
+ break;
+
+ rte_delay_ms(CHECK_INTERVAL);
+ } while (--rep_cnt);
+
+ if (!link.link_status)
+ goto out;
+
+ /* Full-duplex operation at all supported speeds */
+ link.link_duplex = ETH_LINK_FULL_DUPLEX;
+
+ /* Parse the link status */
+ switch (link_status.link_speed) {
+ case ICE_AQ_LINK_SPEED_10MB:
+ link.link_speed = ETH_SPEED_NUM_10M;
+ break;
+ case ICE_AQ_LINK_SPEED_100MB:
+ link.link_speed = ETH_SPEED_NUM_100M;
+ break;
+ case ICE_AQ_LINK_SPEED_1000MB:
+ link.link_speed = ETH_SPEED_NUM_1G;
+ break;
+ case ICE_AQ_LINK_SPEED_2500MB:
+ link.link_speed = ETH_SPEED_NUM_2_5G;
+ break;
+ case ICE_AQ_LINK_SPEED_5GB:
+ link.link_speed = ETH_SPEED_NUM_5G;
+ break;
+ case ICE_AQ_LINK_SPEED_10GB:
+ link.link_speed = ETH_SPEED_NUM_10G;
+ break;
+ case ICE_AQ_LINK_SPEED_20GB:
+ link.link_speed = ETH_SPEED_NUM_20G;
+ break;
+ case ICE_AQ_LINK_SPEED_25GB:
+ link.link_speed = ETH_SPEED_NUM_25G;
+ break;
+ case ICE_AQ_LINK_SPEED_40GB:
+ link.link_speed = ETH_SPEED_NUM_40G;
+ break;
+ case ICE_AQ_LINK_SPEED_UNKNOWN:
+ default:
+ PMD_DRV_LOG(ERR, "Unknown link speed");
+ link.link_speed = ETH_SPEED_NUM_NONE;
+ break;
+ }
+
+ link.link_autoneg = !(dev->data->dev_conf.link_speeds &
+ ETH_LINK_SPEED_FIXED);
+
+out:
+ ice_atomic_write_link_status(dev, &link);
+ if (link.link_status == old.link_status)
+ return -1;
+
+ return 0;
+}
--
1.9.3
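A usage note before the next patch: applications reach the new ops through the generic ethdev API. A minimal sketch (illustrative only, assuming DPDK 18.11-era names; not part of the patch):

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* rte_eth_link_get() invokes the PMD's link_update ops with
     * wait_to_complete = 1, i.e. ice_link_update() above. */
    static void
    example_print_link(uint16_t port_id)
    {
            struct rte_eth_link link;

            rte_eth_link_get(port_id, &link);
            if (link.link_status)
                    printf("port %u: up at %u Mbps\n", port_id, link.link_speed);
            else
                    printf("port %u: down\n", port_id);
    }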
Wenzhuo Lu
2018-11-23 06:56:07 UTC
Add ops mtu_set.

Signed-off-by: Wenzhuo Lu <***@intel.com>
Signed-off-by: Qiming Yang <***@intel.com>
Signed-off-by: Xiaoyun Li <***@intel.com>
Signed-off-by: Jingjing Wu <***@intel.com>
---
drivers/net/ice/ice_ethdev.c | 37 +++++++++++++++++++++++++++++++++++++
1 file changed, 37 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 75246da..5beb356 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -23,6 +23,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
struct rte_eth_dev_info *dev_info);
static int ice_link_update(struct rte_eth_dev *dev,
int wait_to_complete);
+static int ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);

static const struct rte_pci_id pci_id_ice_map[] = {
{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -48,6 +49,7 @@ static int ice_link_update(struct rte_eth_dev *dev,
.dev_infos_get = ice_dev_info_get,
.dev_supported_ptypes_get = ice_dev_supported_ptypes_get,
.link_update = ice_link_update,
+ .mtu_set = ice_mtu_set,
};

static void
@@ -1275,3 +1277,38 @@ static int ice_init_rss(struct ice_pf *pf)

return 0;
}
+
+static int
+ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct rte_eth_dev_data *dev_data = pf->dev_data;
+ uint32_t frame_size = mtu + ETHER_HDR_LEN
+ + ETHER_CRC_LEN + ICE_VLAN_TAG_SIZE;
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return -E_RTE_SECONDARY;
+
+ /* check if mtu is within the allowed range */
+ if ((mtu < ETHER_MIN_MTU) || (frame_size > ICE_FRAME_SIZE_MAX))
+ return -EINVAL;
+
+ /* MTU setting is forbidden if the port is started */
+ if (dev_data->dev_started) {
+ PMD_DRV_LOG(ERR,
+ "port %d must be stopped before configuration",
+ dev_data->port_id);
+ return -EBUSY;
+ }
+
+ if (frame_size > ETHER_MAX_LEN)
+ dev_data->dev_conf.rxmode.offloads |=
+ DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
+ dev_data->dev_conf.rxmode.offloads &=
+ ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+
+ dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+
+ return 0;
+}
--
1.9.3
Varghese, Vipin
2018-11-23 09:58:09 UTC
Hi Wenzhuo,

The following is a thought, not an issue. Can you please let me know your thoughts?

<snipped>
Post by Wenzhuo Lu
+static int
+ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) {
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct rte_eth_dev_data *dev_data = pf->dev_data;
+ uint32_t frame_size = mtu + ETHER_HDR_LEN
+ + ETHER_CRC_LEN + ICE_VLAN_TAG_SIZE;
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return -E_RTE_SECONDARY;
+
+ /* check if mtu is within the allowed range */
+ if ((mtu < ETHER_MIN_MTU) || (frame_size > ICE_FRAME_SIZE_MAX))
+ return -EINVAL;
+
Should we allow setting MTU > 1500 (jumbo frame) if the device is not configured to run with jumbo frames? If not, should we check that the jumbo config is enabled for the current device?
Post by Wenzhuo Lu
+ /* MTU setting is forbidden if the port is started */
+ if (dev_data->dev_started) {
+ PMD_DRV_LOG(ERR,
+ "port %d must be stopped before configuration",
+ dev_data->port_id);
+ return -EBUSY;
+ }
+
+ if (frame_size > ETHER_MAX_LEN)
+ dev_data->dev_conf.rxmode.offloads |=
+ DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
+ dev_data->dev_conf.rxmode.offloads &=
+ ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+
+ dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+
+ return 0;
+}
--
1.9.3
Yang, Qiming
2018-11-26 03:38:32 UTC
Hi, Vipin
Not sure I understand your question.
We have no need to configure jumbo frames, because the jumbo frame offload will be enabled when MTU > 1500.
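For example, from the application side this is just an MTU call; a minimal sketch (illustrative only, not from the patch set; the 9000-byte MTU is an arbitrary value within the device's assumed maximum frame size):

    #include <rte_ethdev.h>

    /* Setting an MTU above 1500 on a stopped port makes ice_mtu_set()
     * enable DEV_RX_OFFLOAD_JUMBO_FRAME and raise max_rx_pkt_len. */
    static int
    example_enable_jumbo(uint16_t port_id)
    {
            return rte_eth_dev_set_mtu(port_id, 9000);
    }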

Qiming
Varghese, Vipin
2018-11-26 03:58:45 UTC
Hi Qiming
Post by Yang, Qiming
-----Original Message-----
From: Yang, Qiming
Sent: Monday, November 26, 2018 9:09 AM
Subject: RE: [dpdk-dev] [PATCH 07/19] net/ice: support MTU setting
Hi, Vipin
Not sure I understand your question.
We have no need to configure jumbo frames, because the jumbo frame offload will
be enabled when MTU > 1500.
Apologies if the question is not clear. Let me try to explain what I am trying to ask below.
Post by Yang, Qiming
Qiming
-----Original Message-----
From: Varghese, Vipin
Sent: Friday, November 23, 2018 5:58 PM
Subject: RE: [dpdk-dev] [PATCH 07/19] net/ice: support MTU setting
Hi Wenzhuo,
The following is a thought, not an issue. Can you please let me know your thoughts?
<snipped>
Post by Wenzhuo Lu
+static int
+ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) {
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct rte_eth_dev_data *dev_data = pf->dev_data;
+ uint32_t frame_size = mtu + ETHER_HDR_LEN
+ + ETHER_CRC_LEN + ICE_VLAN_TAG_SIZE;
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return -E_RTE_SECONDARY;
+
+ /* check if mtu is within the allowed range */
+ if ((mtu < ETHER_MIN_MTU) || (frame_size > ICE_FRAME_SIZE_MAX))
+ return -EINVAL;
+
Should we allow setting MTU > 1500 (jumbo frame) if the device is not configured
to run with jumbo frames? If not, should we check that the jumbo config is
enabled for the current device?
1. Is JUMBO enabled by default for this device?
2. There is an RX offload configured with the value 'DEV_RX_OFFLOAD_JUMBO_FRAME'. Should this be enabled to support JUMBO frames?
3. There is a per-RX-queue offload flag 'DEV_RX_OFFLOAD_JUMBO_FRAME' too. If port_conf sets 'DEV_RX_OFFLOAD_JUMBO_FRAME' but it is disabled for a specific RX queue, what is the JUMBO setting for the device?

Hence, in 'mtu_set', do we need to check whether the device is actually configured for JUMBO processing or not?
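To make the combination concrete, here is a hypothetical check (flag and field names from the ethdev API; this is not code from the series, and the additive per-queue semantics are an assumption about the offload API):

    #include <rte_ethdev.h>

    /* Hypothetical: per-queue RX offloads add on top of the port-level
     * ones, so jumbo would be effective on a queue if either level set
     * DEV_RX_OFFLOAD_JUMBO_FRAME. */
    static inline int
    example_rxq_jumbo_enabled(const struct rte_eth_dev *dev,
                              uint64_t rxq_offloads)
    {
            uint64_t port_offloads = dev->data->dev_conf.rxmode.offloads;

            return !!((port_offloads | rxq_offloads) &
                      DEV_RX_OFFLOAD_JUMBO_FRAME);
    }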
Post by Yang, Qiming
Post by Wenzhuo Lu
+ /* MTU setting is forbidden if the port is started */
+ if (dev_data->dev_started) {
+ PMD_DRV_LOG(ERR,
+ "port %d must be stopped before configuration",
+ dev_data->port_id);
+ return -EBUSY;
+ }
+
+ if (frame_size > ETHER_MAX_LEN)
+ dev_data->dev_conf.rxmode.offloads |=
+ DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
+ dev_data->dev_conf.rxmode.offloads &=
+ ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+
+ dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+
+ return 0;
+}
--
1.9.3
Wenzhuo Lu
2018-11-23 06:56:08 UTC
Add below ops:
mac_addr_set
mac_addr_add
mac_addr_remove

Signed-off-by: Wenzhuo Lu <***@intel.com>
Signed-off-by: Qiming Yang <***@intel.com>
Signed-off-by: Xiaoyun Li <***@intel.com>
Signed-off-by: Jingjing Wu <***@intel.com>
---
drivers/net/ice/ice_ethdev.c | 242 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 242 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 5beb356..afcbad1 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -24,6 +24,13 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
static int ice_link_update(struct rte_eth_dev *dev,
int wait_to_complete);
static int ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
+static int ice_macaddr_set(struct rte_eth_dev *dev,
+ struct ether_addr *mac_addr);
+static int ice_macaddr_add(struct rte_eth_dev *dev,
+ struct ether_addr *mac_addr,
+ __rte_unused uint32_t index,
+ uint32_t pool);
+static void ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index);

static const struct rte_pci_id pci_id_ice_map[] = {
{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -50,6 +57,9 @@ static int ice_link_update(struct rte_eth_dev *dev,
.dev_supported_ptypes_get = ice_dev_supported_ptypes_get,
.link_update = ice_link_update,
.mtu_set = ice_mtu_set,
+ .mac_addr_set = ice_macaddr_set,
+ .mac_addr_add = ice_macaddr_add,
+ .mac_addr_remove = ice_macaddr_remove,
};

static void
@@ -335,6 +345,130 @@ static int ice_link_update(struct rte_eth_dev *dev,
return 0;
}

+/* Find out specific MAC filter */
+static struct ice_mac_filter *
+ice_find_mac_filter(struct ice_vsi *vsi, struct ether_addr *macaddr)
+{
+ struct ice_mac_filter *f;
+
+ TAILQ_FOREACH(f, &vsi->mac_list, next) {
+ if (is_same_ether_addr(macaddr, &f->mac_info.mac_addr))
+ return f;
+ }
+
+ return NULL;
+}
+
+static int
+ice_add_mac_filter(struct ice_vsi *vsi, struct ether_addr *mac_addr)
+{
+ struct ice_fltr_list_entry *m_list_itr = NULL;
+ struct ice_mac_filter *f;
+ struct LIST_HEAD_TYPE list_head;
+ struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+ int ret = 0;
+
+ /* If it's added and configured, return */
+ f = ice_find_mac_filter(vsi, mac_addr);
+ if (f) {
+ PMD_DRV_LOG(INFO, "This MAC filter already exists.");
+ return 0;
+ }
+
+ INIT_LIST_HEAD(&list_head);
+
+ m_list_itr = (struct ice_fltr_list_entry *)
+ ice_malloc(hw, sizeof(*m_list_itr));
+ if (!m_list_itr) {
+ ret = -ENOMEM;
+ goto DONE;
+ }
+ ice_memcpy(m_list_itr->fltr_info.l_data.mac.mac_addr,
+ mac_addr, ETH_ALEN, ICE_NONDMA_TO_NONDMA);
+ m_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
+ m_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+ m_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_MAC;
+ m_list_itr->fltr_info.flag = ICE_FLTR_TX;
+ m_list_itr->fltr_info.vsi_handle = vsi->idx;
+
+ LIST_ADD(&m_list_itr->list_entry, &list_head);
+
+ /* Add the mac */
+ ret = ice_add_mac(hw, &list_head);
+ if (ret != ICE_SUCCESS) {
+ PMD_DRV_LOG(ERR, "Failed to add MAC filter");
+ ret = -EINVAL;
+ goto DONE;
+ }
+ /* Add the mac addr into mac list */
+ f = rte_zmalloc("mac_filter", sizeof(*f), 0);
+ if (!f) {
+ PMD_DRV_LOG(ERR, "failed to allocate memory");
+ ret = -ENOMEM;
+ goto DONE;
+ }
+ rte_memcpy(&f->mac_info.mac_addr, mac_addr, ETH_ADDR_LEN);
+ TAILQ_INSERT_TAIL(&vsi->mac_list, f, next);
+ vsi->mac_num++;
+
+ ret = 0;
+
+DONE:
+ rte_free(m_list_itr);
+ return ret;
+}
+
+static int
+ice_remove_mac_filter(struct ice_vsi *vsi, struct ether_addr *mac_addr)
+{
+ struct ice_fltr_list_entry *m_list_itr = NULL;
+ struct ice_mac_filter *f;
+ struct LIST_HEAD_TYPE list_head;
+ struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+ int ret = 0;
+
+ /* Can't find it, return an error */
+ f = ice_find_mac_filter(vsi, mac_addr);
+ if (!f)
+ return -EINVAL;
+
+ INIT_LIST_HEAD(&list_head);
+
+ m_list_itr = (struct ice_fltr_list_entry *)
+ ice_malloc(hw, sizeof(*m_list_itr));
+ if (!m_list_itr) {
+ ret = -ENOMEM;
+ goto DONE;
+ }
+ ice_memcpy(m_list_itr->fltr_info.l_data.mac.mac_addr,
+ mac_addr, ETH_ALEN, ICE_NONDMA_TO_NONDMA);
+ m_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
+ m_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+ m_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_MAC;
+ m_list_itr->fltr_info.flag = ICE_FLTR_TX;
+ m_list_itr->fltr_info.vsi_handle = vsi->idx;
+
+ LIST_ADD(&m_list_itr->list_entry, &list_head);
+
+ /* remove the mac filter */
+ ret = ice_remove_mac(hw, &list_head);
+ if (ret != ICE_SUCCESS) {
+ PMD_DRV_LOG(ERR, "Failed to remove MAC filter");
+ ret = -EINVAL;
+ goto DONE;
+ }
+
+ /* Remove the mac addr from mac list */
+ TAILQ_REMOVE(&vsi->mac_list, f, next);
+ rte_free(f);
+ vsi->mac_num--;
+
+ ret = 0;
+DONE:
+ rte_free(m_list_itr);
+ return ret;
+}
+
/* Enable IRQ0 */
static void
ice_pf_enable_irq0(struct ice_hw *hw)
@@ -543,6 +677,9 @@ static int ice_link_update(struct rte_eth_dev *dev,
struct ice_vsi *vsi = NULL;
struct ice_vsi_ctx vsi_ctx;
int ret;
+ struct ether_addr broadcast = {
+ .addr_bytes = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff} };
+ struct ether_addr mac_addr;
uint16_t max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 };
uint8_t tc_bitmap = 0x1;

@@ -628,6 +765,21 @@ static int ice_link_update(struct rte_eth_dev *dev,
pf->vsis_allocated = vsi_ctx.vsis_allocd;
pf->vsis_unallocated = vsi_ctx.vsis_unallocated;

+ /* MAC configuration */
+ rte_memcpy(pf->dev_addr.addr_bytes,
+ hw->port_info->mac.perm_addr,
+ ETH_ADDR_LEN);
+
+ rte_memcpy(&mac_addr, &pf->dev_addr, ETHER_ADDR_LEN);
+ ret = ice_add_mac_filter(vsi, &mac_addr);
+ if (ret != ICE_SUCCESS)
+ PMD_INIT_LOG(ERR, "Failed to add dflt MAC filter");
+
+ rte_memcpy(&mac_addr, &broadcast, ETHER_ADDR_LEN);
+ ret = ice_add_mac_filter(vsi, &mac_addr);
+ if (ret != ICE_SUCCESS)
+ PMD_INIT_LOG(ERR, "Failed to add MAC filter");
+
/* At the beginning, only TC0. */
+ /* What we need here is the maximum number of the TX queues.
+ * Currently vsi->nb_qps holds it.
@@ -1312,3 +1464,93 @@ static int ice_init_rss(struct ice_pf *pf)

return 0;
}
+
+static int ice_macaddr_set(struct rte_eth_dev *dev,
+ struct ether_addr *mac_addr)
+{
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_vsi *vsi = pf->main_vsi;
+ struct ice_mac_filter *f;
+ uint8_t flags = 0;
+ int ret;
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return -E_RTE_SECONDARY;
+
+ if (!is_valid_assigned_ether_addr(mac_addr)) {
+ PMD_DRV_LOG(ERR, "Tried to set invalid MAC address.");
+ return -EINVAL;
+ }
+
+ TAILQ_FOREACH(f, &vsi->mac_list, next) {
+ if (is_same_ether_addr(&pf->dev_addr, &f->mac_info.mac_addr))
+ break;
+ }
+
+ if (!f) {
+ PMD_DRV_LOG(ERR, "Failed to find filter for default mac");
+ return -EIO;
+ }
+
+ ret = ice_remove_mac_filter(vsi, &f->mac_info.mac_addr);
+ if (ret != ICE_SUCCESS) {
+ PMD_DRV_LOG(ERR, "Failed to delete mac filter");
+ return -EIO;
+ }
+ ret = ice_add_mac_filter(vsi, mac_addr);
+ if (ret != ICE_SUCCESS) {
+ PMD_DRV_LOG(ERR, "Failed to add mac filter");
+ return -EIO;
+ }
+ memcpy(&pf->dev_addr, mac_addr, ETH_ADDR_LEN);
+
+ flags = ICE_AQC_MAN_MAC_UPDATE_LAA_WOL;
+ ice_aq_manage_mac_write(hw, mac_addr->addr_bytes, flags, NULL);
+
+ return 0;
+}
+
+/* Add a MAC address, and update filters */
+static int
+ice_macaddr_add(struct rte_eth_dev *dev,
+ struct ether_addr *mac_addr,
+ __rte_unused uint32_t index,
+ __rte_unused uint32_t pool)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_vsi *vsi = pf->main_vsi;
+ int ret;
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return -E_RTE_SECONDARY;
+
+ ret = ice_add_mac_filter(vsi, mac_addr);
+ if (ret != ICE_SUCCESS) {
+ PMD_DRV_LOG(ERR, "Failed to add MAC filter");
+ return -EINVAL;
+ }
+
+ return ICE_SUCCESS;
+}
+
+/* Remove a MAC address, and update filters */
+static void
+ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_vsi *vsi = pf->main_vsi;
+ struct rte_eth_dev_data *data = dev->data;
+ struct ether_addr *macaddr;
+ int ret;
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return;
+
+ macaddr = &data->mac_addrs[index];
+ ret = ice_remove_mac_filter(vsi, macaddr);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "Failed to remove MAC filter");
+ return;
+ }
+}
--
1.9.3
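As with the other ops, these are exercised through the generic ethdev API; a minimal application-side sketch (illustrative only; the locally administered address below is an arbitrary example):

    #include <rte_ethdev.h>
    #include <rte_ether.h>

    /* Adds a secondary unicast MAC on the port; this reaches
     * ice_macaddr_add() and thus ice_add_mac_filter() above.
     * Pool 0 is the default pool. */
    static int
    example_add_mac(uint16_t port_id)
    {
            struct ether_addr addr = {
                    .addr_bytes = {0x02, 0x00, 0x00, 0x00, 0x00, 0x01}
            };

            return rte_eth_dev_mac_addr_add(port_id, &addr, 0);
    }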
Wenzhuo Lu
2018-11-23 06:56:09 UTC
Add below ops:
ice_vlan_filter_set
ice_vlan_offload_set
ice_vlan_tpid_set
ice_vlan_pvid_set

Signed-off-by: Wenzhuo Lu <***@intel.com>
Signed-off-by: Qiming Yang <***@intel.com>
Signed-off-by: Xiaoyun Li <***@intel.com>
Signed-off-by: Jingjing Wu <***@intel.com>
---
drivers/net/ice/ice_ethdev.c | 602 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 602 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index afcbad1..c6267fa 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -24,6 +24,13 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
static int ice_link_update(struct rte_eth_dev *dev,
int wait_to_complete);
static int ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
+static int ice_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+static int ice_vlan_tpid_set(struct rte_eth_dev *dev,
+ enum rte_vlan_type vlan_type,
+ uint16_t tpid);
+static int ice_vlan_filter_set(struct rte_eth_dev *dev,
+ uint16_t vlan_id,
+ int on);
static int ice_macaddr_set(struct rte_eth_dev *dev,
struct ether_addr *mac_addr);
static int ice_macaddr_add(struct rte_eth_dev *dev,
@@ -31,6 +38,8 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
__rte_unused uint32_t index,
uint32_t pool);
static void ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index);
+static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
+ uint16_t pvid, int on);

static const struct rte_pci_id pci_id_ice_map[] = {
{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -60,6 +69,10 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
.mac_addr_set = ice_macaddr_set,
.mac_addr_add = ice_macaddr_add,
.mac_addr_remove = ice_macaddr_remove,
+ .vlan_filter_set = ice_vlan_filter_set,
+ .vlan_offload_set = ice_vlan_offload_set,
+ .vlan_tpid_set = ice_vlan_tpid_set,
+ .vlan_pvid_set = ice_vlan_pvid_set,
};

static void
@@ -469,6 +482,297 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
return ret;
}

+/* Find out specific VLAN filter */
+static struct ice_vlan_filter *
+ice_find_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
+{
+ struct ice_vlan_filter *f;
+
+ TAILQ_FOREACH(f, &vsi->vlan_list, next) {
+ if (vlan_id == f->vlan_info.vlan_id)
+ return f;
+ }
+
+ return NULL;
+}
+
+static int
+ice_add_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
+{
+ struct ice_fltr_list_entry *v_list_itr = NULL;
+ struct ice_vlan_filter *f;
+ struct LIST_HEAD_TYPE list_head;
+ struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+ int ret = 0;
+
+ if (!vsi || vlan_id > ETHER_MAX_VLAN_ID)
+ return -EINVAL;
+
+ /* If it's added and configured, return. */
+ f = ice_find_vlan_filter(vsi, vlan_id);
+ if (f) {
+ PMD_DRV_LOG(INFO, "This VLAN filter already exists.");
+ return 0;
+ }
+
+ if (!vsi->vlan_anti_spoof_on && !vsi->vlan_filter_on)
+ return 0;
+
+ INIT_LIST_HEAD(&list_head);
+
+ v_list_itr = (struct ice_fltr_list_entry *)
+ ice_malloc(hw, sizeof(*v_list_itr));
+ if (!v_list_itr) {
+ ret = -ENOMEM;
+ goto DONE;
+ }
+ v_list_itr->fltr_info.l_data.vlan.vlan_id = vlan_id;
+ v_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
+ v_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+ v_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_VLAN;
+ v_list_itr->fltr_info.flag = ICE_FLTR_TX;
+ v_list_itr->fltr_info.vsi_handle = vsi->idx;
+
+ LIST_ADD(&v_list_itr->list_entry, &list_head);
+
+ /* Add the vlan */
+ ret = ice_add_vlan(hw, &list_head);
+ if (ret != ICE_SUCCESS) {
+ PMD_DRV_LOG(ERR, "Failed to add VLAN filter");
+ ret = -EINVAL;
+ goto DONE;
+ }
+
+ /* Add vlan into vlan list */
+ f = rte_zmalloc("vlan_filter", sizeof(*f), 0);
+ if (!f) {
+ PMD_DRV_LOG(ERR, "failed to allocate memory");
+ ret = -ENOMEM;
+ goto DONE;
+ }
+ f->vlan_info.vlan_id = vlan_id;
+ TAILQ_INSERT_TAIL(&vsi->vlan_list, f, next);
+ vsi->vlan_num++;
+
+ ret = 0;
+
+DONE:
+ rte_free(v_list_itr);
+ return ret;
+}
+
+static int
+ice_remove_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
+{
+ struct ice_fltr_list_entry *v_list_itr = NULL;
+ struct ice_vlan_filter *f;
+ struct LIST_HEAD_TYPE list_head;
+ struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+ int ret = 0;
+
+ /**
+ * Vlan 0 is the generic filter for untagged packets
+ * and can't be removed.
+ */
+ if (!vsi || vlan_id == 0 || vlan_id > ETHER_MAX_VLAN_ID)
+ return -EINVAL;
+
+ /* Can't find it, return an error */
+ f = ice_find_vlan_filter(vsi, vlan_id);
+ if (!f)
+ return -EINVAL;
+
+ INIT_LIST_HEAD(&list_head);
+
+ v_list_itr = (struct ice_fltr_list_entry *)
+ ice_malloc(hw, sizeof(*v_list_itr));
+ if (!v_list_itr) {
+ ret = -ENOMEM;
+ goto DONE;
+ }
+
+ v_list_itr->fltr_info.l_data.vlan.vlan_id = vlan_id;
+ v_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
+ v_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+ v_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_VLAN;
+ v_list_itr->fltr_info.flag = ICE_FLTR_TX;
+ v_list_itr->fltr_info.vsi_handle = vsi->idx;
+
+ LIST_ADD(&v_list_itr->list_entry, &list_head);
+
+ /* remove the vlan filter */
+ ret = ice_remove_vlan(hw, &list_head);
+ if (ret != ICE_SUCCESS) {
+ PMD_DRV_LOG(ERR, "Failed to remove VLAN filter");
+ ret = -EINVAL;
+ goto DONE;
+ }
+
+ /* Remove the vlan id from vlan list */
+ TAILQ_REMOVE(&vsi->vlan_list, f, next);
+ rte_free(f);
+ vsi->vlan_num--;
+
+ ret = 0;
+DONE:
+ rte_free(v_list_itr);
+ return ret;
+}
+
+static int
+ice_remove_all_mac_vlan_filters(struct ice_vsi *vsi)
+{
+ struct ice_mac_filter *m_f;
+ struct ice_vlan_filter *v_f;
+ int ret = 0;
+
+ if (!vsi || !vsi->mac_num)
+ return -EINVAL;
+
+ TAILQ_FOREACH(m_f, &vsi->mac_list, next) {
+ ret = ice_remove_mac_filter(vsi, &m_f->mac_info.mac_addr);
+ if (ret != ICE_SUCCESS) {
+ ret = -EINVAL;
+ goto DONE;
+ }
+ }
+
+ if (vsi->vlan_num == 0)
+ return 0;
+
+ TAILQ_FOREACH(v_f, &vsi->vlan_list, next) {
+ ret = ice_remove_vlan_filter(vsi, v_f->vlan_info.vlan_id);
+ if (ret != ICE_SUCCESS) {
+ ret = -EINVAL;
+ goto DONE;
+ }
+ }
+
+DONE:
+ return ret;
+}
+
+static int
+ice_vsi_config_qinq_insertion(struct ice_vsi *vsi, bool on)
+{
+ struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+ struct ice_vsi_ctx ctxt;
+ uint8_t qinq_flags;
+ int ret = 0;
+
+ /* Check if it has been already on or off */
+ if (vsi->info.valid_sections &
+ rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID)) {
+ if (on) {
+ if ((vsi->info.outer_tag_flags &
+ ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST) ==
+ ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST)
+ return 0; /* already on */
+ } else {
+ if (!(vsi->info.outer_tag_flags &
+ ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST))
+ return 0; /* already off */
+ }
+ }
+
+ if (on)
+ qinq_flags = ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST;
+ else
+ qinq_flags = 0;
+ /* clear global insertion and use per packet insertion */
+ vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_INSERT);
+ vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST);
+ vsi->info.outer_tag_flags |= qinq_flags;
+ /* use default vlan type 0x8100 */
+ vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_TYPE_M);
+ vsi->info.outer_tag_flags |= ICE_DFLT_OUTER_TAG_TYPE <<
+ ICE_AQ_VSI_OUTER_TAG_TYPE_S;
+ (void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+ ctxt.info.valid_sections =
+ rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
+ ctxt.vsi_num = vsi->vsi_id;
+ ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+ if (ret) {
+ PMD_DRV_LOG(INFO,
+ "Update VSI failed to %s qinq stripping",
+ on ? "enable" : "disable");
+ return -EINVAL;
+ }
+
+ vsi->info.valid_sections |=
+ rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
+
+ return ret;
+}
+
+static int
+ice_vsi_config_qinq_stripping(struct ice_vsi *vsi, bool on)
+{
+ struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+ struct ice_vsi_ctx ctxt;
+ uint8_t qinq_flags;
+ int ret = 0;
+
+ /* Check if it has been already on or off */
+ if (vsi->info.valid_sections &
+ rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID)) {
+ if (on) {
+ if ((vsi->info.outer_tag_flags &
+ ICE_AQ_VSI_OUTER_TAG_MODE_M) ==
+ ICE_AQ_VSI_OUTER_TAG_COPY)
+ return 0; /* already on */
+ } else {
+ if ((vsi->info.outer_tag_flags &
+ ICE_AQ_VSI_OUTER_TAG_MODE_M) ==
+ ICE_AQ_VSI_OUTER_TAG_NOTHING)
+ return 0; /* already off */
+ }
+ }
+
+ if (on)
+ qinq_flags = ICE_AQ_VSI_OUTER_TAG_COPY;
+ else
+ qinq_flags = ICE_AQ_VSI_OUTER_TAG_NOTHING;
+ vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_MODE_M);
+ vsi->info.outer_tag_flags |= qinq_flags;
+ /* use default vlan type 0x8100 */
+ vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_TYPE_M);
+ vsi->info.outer_tag_flags |= ICE_DFLT_OUTER_TAG_TYPE <<
+ ICE_AQ_VSI_OUTER_TAG_TYPE_S;
+ (void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+ ctxt.info.valid_sections =
+ rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
+ ctxt.vsi_num = vsi->vsi_id;
+ ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+ if (ret) {
+ PMD_DRV_LOG(INFO,
+ "Update VSI failed to %s qinq stripping",
+ on ? "enable" : "disable");
+ return -EINVAL;
+ }
+
+ vsi->info.valid_sections |=
+ rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
+
+ return ret;
+}
+
+static int
+ice_vsi_config_double_vlan(struct ice_vsi *vsi, int on)
+{
+ int ret;
+
+ ret = ice_vsi_config_qinq_stripping(vsi, on);
+ if (ret)
+ PMD_DRV_LOG(ERR, "Fail to set qinq stripping - %d", ret);
+
+ ret = ice_vsi_config_qinq_insertion(vsi, on);
+ if (ret)
+ PMD_DRV_LOG(ERR, "Fail to set qinq insertion - %d", ret);
+
+ return ret;
+}
+
/* Enable IRQ0 */
static void
ice_pf_enable_irq0(struct ice_hw *hw)
@@ -828,6 +1132,7 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
struct rte_intr_handle *intr_handle;
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_vsi *vsi;
int ret;

dev->dev_ops = &ice_eth_dev_ops;
@@ -881,6 +1186,11 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
goto err_pf_setup;
}

+ vsi = pf->main_vsi;
+
+ /* Disable double vlan by default */
+ ice_vsi_config_double_vlan(vsi, FALSE);
+
/* register callback func to eal lib */
rte_intr_callback_register(intr_handle,
ice_interrupt_handler, dev);
@@ -916,6 +1226,8 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,

hw = ICE_VSI_TO_HW(vsi);

+ ice_remove_all_mac_vlan_filters(vsi);
+
memset(&vsi_ctx, 0, sizeof(vsi_ctx));

vsi_ctx.vsi_num = vsi->vsi_id;
@@ -1554,3 +1866,293 @@ static int ice_macaddr_set(struct rte_eth_dev *dev,
return;
}
}
+
+static int
+ice_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_vsi *vsi = pf->main_vsi;
+ int ret;
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return -E_RTE_SECONDARY;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (on) {
+ ret = ice_add_vlan_filter(vsi, vlan_id);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "Failed to add vlan filter");
+ return -EINVAL;
+ }
+ } else {
+ ret = ice_remove_vlan_filter(vsi, vlan_id);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "Failed to remove vlan filter");
+ return -EINVAL;
+ }
+ }
+
+ return 0;
+}
+
+/* Configure vlan filter on or off */
+static int
+ice_vsi_config_vlan_filter(struct ice_vsi *vsi, bool on)
+{
+ struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+ struct ice_vsi_ctx ctxt;
+ uint8_t sec_flags, sw_flags2;
+ int ret = 0;
+
+ sec_flags = ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA <<
+ ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S;
+ sw_flags2 = ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA;
+
+ if (on) {
+ vsi->info.sec_flags |= sec_flags;
+ vsi->info.sw_flags2 |= sw_flags2;
+ } else {
+ vsi->info.sec_flags &= ~sec_flags;
+ vsi->info.sw_flags2 &= ~sw_flags2;
+ }
+ vsi->info.sw_id = hw->port_info->sw_id;
+ (void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+ ctxt.info.valid_sections =
+ rte_cpu_to_le_16(ICE_AQ_VSI_PROP_SW_VALID |
+ ICE_AQ_VSI_PROP_SECURITY_VALID);
+ ctxt.vsi_num = vsi->vsi_id;
+
+ ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+ if (ret) {
+ PMD_DRV_LOG(INFO, "Update VSI failed to %s vlan rx pruning",
+ on ? "enable" : "disable");
+ ret = -EINVAL;
+ } else {
+ vsi->info.valid_sections |=
+ rte_cpu_to_le_16(ICE_AQ_VSI_PROP_SW_VALID |
+ ICE_AQ_VSI_PROP_SECURITY_VALID);
+ }
+
+ return ret;
+}
+
+static int
+ice_vsi_config_vlan_stripping(struct ice_vsi *vsi, bool on)
+{
+ struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+ struct ice_vsi_ctx ctxt;
+ uint8_t vlan_flags;
+ int ret = 0;
+
+ /* Check whether VLAN stripping is already on or off */
+ if (vsi->info.valid_sections &
+ rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID)) {
+ if (on) {
+ if ((vsi->info.vlan_flags &
+ ICE_AQ_VSI_VLAN_EMOD_M) ==
+ ICE_AQ_VSI_VLAN_EMOD_STR_BOTH)
+ return 0; /* already on */
+ } else {
+ if ((vsi->info.vlan_flags &
+ ICE_AQ_VSI_VLAN_EMOD_M) ==
+ ICE_AQ_VSI_VLAN_EMOD_NOTHING)
+ return 0; /* already off */
+ }
+ }
+
+ if (on)
+ vlan_flags = ICE_AQ_VSI_VLAN_EMOD_STR_BOTH;
+ else
+ vlan_flags = ICE_AQ_VSI_VLAN_EMOD_NOTHING;
+ vsi->info.vlan_flags &= ~(ICE_AQ_VSI_VLAN_EMOD_M);
+ vsi->info.vlan_flags |= vlan_flags;
+ (void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+ ctxt.info.valid_sections =
+ rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
+ ctxt.vsi_num = vsi->vsi_id;
+ ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+ if (ret) {
+ PMD_DRV_LOG(INFO, "Update VSI failed to %s vlan stripping",
+ on ? "enable" : "disable");
+ return -EINVAL;
+ }
+
+ vsi->info.valid_sections |=
+ rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
+
+ return ret;
+}
+
+static int
+ice_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_vsi *vsi = pf->main_vsi;
+ struct rte_eth_rxmode *rxmode;
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return -E_RTE_SECONDARY;
+
+ rxmode = &dev->data->dev_conf.rxmode;
+ if (mask & ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ ice_vsi_config_vlan_filter(vsi, TRUE);
+ else
+ ice_vsi_config_vlan_filter(vsi, FALSE);
+ }
+
+ if (mask & ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ ice_vsi_config_vlan_stripping(vsi, TRUE);
+ else
+ ice_vsi_config_vlan_stripping(vsi, FALSE);
+ }
+
+ if (mask & ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ ice_vsi_config_double_vlan(vsi, TRUE);
+ else
+ ice_vsi_config_double_vlan(vsi, FALSE);
+ }
+
+ return 0;
+}
+
+static int
+ice_vlan_tpid_set(struct rte_eth_dev *dev,
+ enum rte_vlan_type vlan_type,
+ uint16_t tpid)
+{
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ uint64_t reg_r = 0, reg_w = 0;
+ uint16_t reg_id = 0;
+ int ret = 0;
+ int qinq = dev->data->dev_conf.rxmode.offloads &
+ DEV_RX_OFFLOAD_VLAN_EXTEND;
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return -E_RTE_SECONDARY;
+
+ switch (vlan_type) {
+ case ETH_VLAN_TYPE_OUTER:
+ if (qinq)
+ reg_id = 3;
+ else
+ reg_id = 5;
+ break;
+ case ETH_VLAN_TYPE_INNER:
+ if (qinq) {
+ reg_id = 5;
+ } else {
+ PMD_DRV_LOG(ERR,
+ "Unsupported vlan type in single vlan.");
+ return -EINVAL;
+ }
+ break;
+ default:
+ PMD_DRV_LOG(ERR, "Unsupported vlan type %d", vlan_type);
+ return -EINVAL;
+ }
+ reg_r = ICE_READ_REG(hw, GL_SWT_L2TAGCTRL(reg_id));
+ PMD_DRV_LOG(DEBUG, "Debug read from ICE GL_SWT_L2TAGCTRL[%d]: "
+ "0x%08"PRIx64"", reg_id, reg_r);
+
+ reg_w = reg_r & (~(GL_SWT_L2TAGCTRL_ETHERTYPE_M));
+ reg_w |= ((uint64_t)tpid << GL_SWT_L2TAGCTRL_ETHERTYPE_S);
+ if (reg_r == reg_w) {
+ PMD_DRV_LOG(DEBUG, "No need to write");
+ return 0;
+ }
+
+ ICE_WRITE_REG(hw, GL_SWT_L2TAGCTRL(reg_id), reg_w);
+ PMD_DRV_LOG(DEBUG, "Debug write 0x%08"PRIx64" to "
+ "ICE GL_SWT_L2TAGCTRL[%d]", reg_w, reg_id);
+
+ return ret;
+}
+
+static int
+ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
+{
+ struct ice_hw *hw;
+ struct ice_vsi_ctx ctxt;
+ uint8_t vlan_flags = 0;
+ int ret;
+
+ if (!vsi || !info) {
+ PMD_DRV_LOG(ERR, "invalid parameters");
+ return -EINVAL;
+ }
+
+ if (info->on) {
+ vsi->info.pvid = info->config.pvid;
+ /**
+ * If PVID insertion is enabled, only tagged packets are
+ * allowed to be sent out.
+ */
+ vlan_flags = ICE_AQ_VSI_PVLAN_INSERT_PVID |
+ ICE_AQ_VSI_VLAN_MODE_UNTAGGED;
+ } else {
+ vsi->info.pvid = 0;
+ if (info->config.reject.tagged == 0)
+ vlan_flags |= ICE_AQ_VSI_VLAN_MODE_TAGGED;
+
+ if (info->config.reject.untagged == 0)
+ vlan_flags |= ICE_AQ_VSI_VLAN_MODE_UNTAGGED;
+ }
+ vsi->info.vlan_flags &= ~(ICE_AQ_VSI_PVLAN_INSERT_PVID |
+ ICE_AQ_VSI_VLAN_MODE_M);
+ vsi->info.vlan_flags |= vlan_flags;
+ memset(&ctxt, 0, sizeof(ctxt));
+ rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+ ctxt.info.valid_sections =
+ rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
+ ctxt.vsi_num = vsi->vsi_id;
+
+ hw = ICE_VSI_TO_HW(vsi);
+ ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+ if (ret != ICE_SUCCESS) {
+ PMD_DRV_LOG(ERR,
+ "update VSI for VLAN insert failed, err %d",
+ ret);
+ return -EINVAL;
+ }
+
+ vsi->info.valid_sections |=
+ rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
+
+ return ret;
+}
+
+static int
+ice_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t pvid, int on)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_vsi *vsi = pf->main_vsi;
+ struct rte_eth_dev_data *data = pf->dev_data;
+ struct ice_vsi_vlan_pvid_info info;
+ int ret;
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return -E_RTE_SECONDARY;
+
+ memset(&info, 0, sizeof(info));
+ info.on = on;
+ if (info.on) {
+ info.config.pvid = pvid;
+ } else {
+ info.config.reject.tagged =
+ data->dev_conf.txmode.hw_vlan_reject_tagged;
+ info.config.reject.untagged =
+ data->dev_conf.txmode.hw_vlan_reject_untagged;
+ }
+
+ ret = ice_vsi_vlan_pvid_set(vsi, &info);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "Failed to set pvid.");
+ return -EINVAL;
+ }
+
+ return 0;
+}
--
1.9.3
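For context, a minimal sketch of how an application could exercise the VLAN ops
added above through the generic ethdev API; the port id and the VLAN/PVID value
100 are illustrative assumptions, not part of this patch:

#include <rte_ethdev.h>

static int
setup_vlan(uint16_t port_id)
{
	int ret;

	/* enable VLAN filtering and stripping; this reaches
	 * ice_vlan_offload_set() through the .vlan_offload_set op
	 */
	ret = rte_eth_dev_set_vlan_offload(port_id,
			ETH_VLAN_FILTER_OFFLOAD | ETH_VLAN_STRIP_OFFLOAD);
	if (ret < 0)
		return ret;

	/* accept VLAN 100; reaches ice_vlan_filter_set() */
	ret = rte_eth_dev_vlan_filter(port_id, 100, 1);
	if (ret < 0)
		return ret;

	/* insert PVID 100 on Tx; reaches ice_vlan_pvid_set() */
	return rte_eth_dev_set_vlan_pvid(port_id, 100, 1);
}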
Wenzhuo Lu
2018-11-23 06:56:10 UTC
Permalink
Add the following ops:
reta_update
reta_query
rss_hash_update
rss_hash_conf_get

Signed-off-by: Wenzhuo Lu <***@intel.com>
Signed-off-by: Qiming Yang <***@intel.com>
Signed-off-by: Xiaoyun Li <***@intel.com>
Signed-off-by: Jingjing Wu <***@intel.com>
---
drivers/net/ice/ice_ethdev.c | 248 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 248 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index c6267fa..31c77e4 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -28,6 +28,16 @@ static int ice_link_update(struct rte_eth_dev *dev,
static int ice_vlan_tpid_set(struct rte_eth_dev *dev,
enum rte_vlan_type vlan_type,
uint16_t tpid);
+static int ice_rss_reta_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size);
+static int ice_rss_reta_query(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size);
+static int ice_rss_hash_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf);
+static int ice_rss_hash_conf_get(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf);
static int ice_vlan_filter_set(struct rte_eth_dev *dev,
uint16_t vlan_id,
int on);
@@ -72,6 +82,10 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
.vlan_filter_set = ice_vlan_filter_set,
.vlan_offload_set = ice_vlan_offload_set,
.vlan_tpid_set = ice_vlan_tpid_set,
+ .reta_update = ice_rss_reta_update,
+ .reta_query = ice_rss_reta_query,
+ .rss_hash_update = ice_rss_hash_update,
+ .rss_hash_conf_get = ice_rss_hash_conf_get,
.vlan_pvid_set = ice_vlan_pvid_set,
};

@@ -2073,6 +2087,240 @@ static int ice_macaddr_set(struct rte_eth_dev *dev,
}

static int
+ice_get_rss_lut(struct ice_vsi *vsi, uint8_t *lut, uint16_t lut_size)
+{
+ struct ice_pf *pf = ICE_VSI_TO_PF(vsi);
+ struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+ int ret;
+
+ if (!lut)
+ return -EINVAL;
+
+ if (pf->flags & ICE_FLAG_RSS_AQ_CAPABLE) {
+ ret = ice_aq_get_rss_lut(hw, vsi->vsi_id, TRUE,
+ lut, lut_size);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "Failed to get RSS lookup table");
+ return -EINVAL;
+ }
+ } else {
+ uint64_t *lut_dw = (uint64_t *)lut;
+ uint16_t i, lut_size_dw = lut_size / 4;
+
+ for (i = 0; i < lut_size_dw; i++)
+ lut_dw[i] = ICE_READ_REG(hw, PFQF_HLUT(i));
+ }
+
+ return 0;
+}
+
+static int
+ice_set_rss_lut(struct ice_vsi *vsi, uint8_t *lut, uint16_t lut_size)
+{
+ struct ice_pf *pf = ICE_VSI_TO_PF(vsi);
+ struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+ int ret;
+
+ if (!vsi || !lut)
+ return -EINVAL;
+
+ if (pf->flags & ICE_FLAG_RSS_AQ_CAPABLE) {
+ ret = ice_aq_set_rss_lut(hw, vsi->vsi_id, TRUE,
+ lut, lut_size);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "Failed to set RSS lookup table");
+ return -EINVAL;
+ }
+ } else {
+ uint64_t *lut_dw = (uint64_t *)lut;
+ uint16_t i, lut_size_dw = lut_size / 4;
+
+ for (i = 0; i < lut_size_dw; i++)
+ ICE_WRITE_REG(hw, PFQF_HLUT(i), lut_dw[i]);
+
+ ice_flush(hw);
+ }
+
+ return 0;
+}
+
+static int
+ice_rss_reta_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ uint16_t i, lut_size = hw->func_caps.common_cap.rss_table_size;
+ uint16_t idx, shift;
+ uint8_t *lut;
+ int ret;
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return -E_RTE_SECONDARY;
+
+ if (reta_size != lut_size ||
+ reta_size > ETH_RSS_RETA_SIZE_512) {
+ PMD_DRV_LOG(ERR,
+ "The size of hash lookup table configured (%d)"
+ "doesn't match the number hardware can "
+ "supported (%d)",
+ reta_size, lut_size);
+ return -EINVAL;
+ }
+
+ lut = rte_zmalloc("ice_rss_lut", reta_size, 0);
+ if (!lut) {
+ PMD_DRV_LOG(ERR, "No memory can be allocated");
+ return -ENOMEM;
+ }
+ ret = ice_get_rss_lut(pf->main_vsi, lut, reta_size);
+ if (ret)
+ goto out;
+
+ for (i = 0; i < reta_size; i++) {
+ idx = i / RTE_RETA_GROUP_SIZE;
+ shift = i % RTE_RETA_GROUP_SIZE;
+ if (reta_conf[idx].mask & (1ULL << shift))
+ lut[i] = reta_conf[idx].reta[shift];
+ }
+ ret = ice_set_rss_lut(pf->main_vsi, lut, reta_size);
+
+out:
+ rte_free(lut);
+
+ return ret;
+}
+
+static int
+ice_rss_reta_query(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ uint16_t i, lut_size = hw->func_caps.common_cap.rss_table_size;
+ uint16_t idx, shift;
+ uint8_t *lut;
+ int ret;
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return -E_RTE_SECONDARY;
+
+ if (reta_size != lut_size ||
+ reta_size > ETH_RSS_RETA_SIZE_512) {
+ PMD_DRV_LOG(ERR,
+ "The size of hash lookup table configured (%d)"
+ "doesn't match the number hardware can "
+ "supported (%d)",
+ reta_size, lut_size);
+ return -EINVAL;
+ }
+
+ lut = rte_zmalloc("ice_rss_lut", reta_size, 0);
+ if (!lut) {
+ PMD_DRV_LOG(ERR, "No memory can be allocated");
+ return -ENOMEM;
+ }
+
+ ret = ice_get_rss_lut(pf->main_vsi, lut, reta_size);
+ if (ret)
+ goto out;
+
+ for (i = 0; i < reta_size; i++) {
+ idx = i / RTE_RETA_GROUP_SIZE;
+ shift = i % RTE_RETA_GROUP_SIZE;
+ if (reta_conf[idx].mask & (1ULL << shift))
+ reta_conf[idx].reta[shift] = lut[i];
+ }
+
+out:
+ rte_free(lut);
+
+ return ret;
+}
+
+static int
+ice_set_rss_key(struct ice_vsi *vsi, uint8_t *key, uint8_t key_len)
+{
+ struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+ int ret = 0;
+
+ if (!key || key_len == 0) {
+ PMD_DRV_LOG(DEBUG, "No key to be configured");
+ return 0;
+ } else if (key_len != (VSIQF_HKEY_MAX_INDEX + 1) *
+ sizeof(uint32_t)) {
+ PMD_DRV_LOG(ERR, "Invalid key length %u", key_len);
+ return -EINVAL;
+ }
+
+ struct ice_aqc_get_set_rss_keys *key_dw =
+ (struct ice_aqc_get_set_rss_keys *)key;
+
+ ret = ice_aq_set_rss_key(hw, vsi->vsi_id, key_dw);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "Failed to configure RSS key via AQ");
+ ret = -EINVAL;
+ }
+
+ return ret;
+}
+
+static int
+ice_get_rss_key(struct ice_vsi *vsi, uint8_t *key, uint8_t *key_len)
+{
+ struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+ int ret;
+
+ if (!key || !key_len)
+ return -EINVAL;
+
+ ret = ice_aq_get_rss_key(
+ hw, vsi->vsi_id,
+ (struct ice_aqc_get_set_rss_keys *)key);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "Failed to get RSS key via AQ");
+ return -EINVAL;
+ }
+ *key_len = (VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
+
+ return 0;
+}
+
+static int
+ice_rss_hash_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ enum ice_status status = ICE_SUCCESS;
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_vsi *vsi = pf->main_vsi;
+
+ /* set hash key */
+ status = ice_set_rss_key(vsi, rss_conf->rss_key, rss_conf->rss_key_len);
+ if (status)
+ return status;
+
+ /* TODO: hash enable config, ice_add_rss_cfg */
+ return 0;
+}
+
+static int
+ice_rss_hash_conf_get(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_vsi *vsi = pf->main_vsi;
+
+ ice_get_rss_key(vsi, rss_conf->rss_key,
+ &rss_conf->rss_key_len);
+
+ /* TODO: default to 0 as hash-function config is not supported yet */
+ rss_conf->rss_hf = 0;
+ return 0;
+}
+
+static int
ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
{
struct ice_hw *hw;
--
1.9.3
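As a usage illustration, a hedged sketch of driving the new RETA op from an
application; the 512-entry table mirrors the ETH_RSS_RETA_SIZE_512 bound
checked above, and nb_queues is an assumption:

#include <string.h>
#include <rte_ethdev.h>

static int
spread_reta(uint16_t port_id, uint16_t nb_queues)
{
	struct rte_eth_rss_reta_entry64
		reta_conf[ETH_RSS_RETA_SIZE_512 / RTE_RETA_GROUP_SIZE];
	uint16_t i;

	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < ETH_RSS_RETA_SIZE_512; i++) {
		/* select the entry and distribute queues round-robin */
		reta_conf[i / RTE_RETA_GROUP_SIZE].mask |=
			1ULL << (i % RTE_RETA_GROUP_SIZE);
		reta_conf[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] =
			i % nb_queues;
	}

	/* reaches ice_rss_reta_update() through the .reta_update op */
	return rte_eth_dev_rss_reta_update(port_id, reta_conf,
					   ETH_RSS_RETA_SIZE_512);
}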
Wenzhuo Lu
2018-11-23 06:56:11 UTC
Permalink
Add the following ops:
rx_queue_intr_enable
rx_queue_intr_disable

Signed-off-by: Wenzhuo Lu <***@intel.com>
Signed-off-by: Qiming Yang <***@intel.com>
Signed-off-by: Xiaoyun Li <***@intel.com>
Signed-off-by: Jingjing Wu <***@intel.com>
---
drivers/net/ice/ice_ethdev.c | 236 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 236 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 31c77e4..8d41516 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -48,6 +48,10 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
__rte_unused uint32_t index,
uint32_t pool);
static void ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index);
+static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
+ uint16_t queue_id);
+static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
+ uint16_t queue_id);
static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
uint16_t pvid, int on);

@@ -86,6 +90,8 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
.reta_query = ice_rss_reta_query,
.rss_hash_update = ice_rss_hash_update,
.rss_hash_conf_get = ice_rss_hash_conf_get,
+ .rx_queue_intr_enable = ice_rx_queue_intr_enable,
+ .rx_queue_intr_disable = ice_rx_queue_intr_disable,
.vlan_pvid_set = ice_vlan_pvid_set,
};

@@ -1401,6 +1407,186 @@ static int ice_init_rss(struct ice_pf *pf)
return 0;
}

+static void
+__vsi_queues_bind_intr(struct ice_vsi *vsi, uint16_t msix_vect,
+ int base_queue, int nb_queue)
+{
+ struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+ uint32_t val, val_tx;
+ int i;
+
+ for (i = 0; i < nb_queue; i++) {
+ /* do the actual bind */
+ val = (msix_vect & QINT_RQCTL_MSIX_INDX_M) |
+ (0 << QINT_RQCTL_ITR_INDX_S) | QINT_RQCTL_CAUSE_ENA_M;
+ val_tx = (msix_vect & QINT_TQCTL_MSIX_INDX_M) |
+ (0 << QINT_TQCTL_ITR_INDX_S) | QINT_TQCTL_CAUSE_ENA_M;
+
+ PMD_DRV_LOG(INFO, "queue %d is binding to vect %d",
+ base_queue + i, msix_vect);
+ /* set ITR0 value */
+ ICE_WRITE_REG(hw, GLINT_ITR(0, msix_vect), 0x10);
+ ICE_WRITE_REG(hw, QINT_RQCTL(base_queue + i), val);
+ ICE_WRITE_REG(hw, QINT_TQCTL(base_queue + i), val_tx);
+ }
+}
+
+static void
+ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
+{
+ struct rte_eth_dev *dev = vsi->adapter->eth_dev;
+ struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+ struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+ uint16_t msix_vect = vsi->msix_intr;
+ uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd);
+ uint16_t queue_idx = 0;
+ int record = 0;
+ int i;
+
+ /* clear Rx/Tx queue interrupt */
+ for (i = 0; i < vsi->nb_used_qps; i++) {
+ ICE_WRITE_REG(hw, QINT_TQCTL(vsi->base_queue + i), 0);
+ ICE_WRITE_REG(hw, QINT_RQCTL(vsi->base_queue + i), 0);
+ }
+
+ /* PF bind interrupt */
+ if (rte_intr_dp_is_en(intr_handle)) {
+ queue_idx = 0;
+ record = 1;
+ }
+
+ for (i = 0; i < vsi->nb_used_qps; i++) {
+ if (nb_msix <= 1) {
+ if (!rte_intr_allow_others(intr_handle))
+ msix_vect = ICE_MISC_VEC_ID;
+
+ /* uio: map all queues to one msix_vect */
+ __vsi_queues_bind_intr(vsi, msix_vect,
+ vsi->base_queue + i,
+ vsi->nb_used_qps - i);
+
+ for (; !!record && i < vsi->nb_used_qps; i++)
+ intr_handle->intr_vec[queue_idx + i] =
+ msix_vect;
+ break;
+ }
+
+ /* vfio 1:1 queue/msix_vect mapping */
+ __vsi_queues_bind_intr(vsi, msix_vect,
+ vsi->base_queue + i, 1);
+
+ if (!!record)
+ intr_handle->intr_vec[queue_idx + i] = msix_vect;
+
+ msix_vect++;
+ nb_msix--;
+ }
+}
+
+static void
+ice_vsi_enable_queues_intr(struct ice_vsi *vsi)
+{
+ struct rte_eth_dev *dev = vsi->adapter->eth_dev;
+ struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+ struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+ uint16_t msix_intr, i;
+
+ if (rte_intr_allow_others(intr_handle))
+ for (i = 0; i < vsi->nb_used_qps; i++) {
+ msix_intr = vsi->msix_intr + i;
+ ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr),
+ GLINT_DYN_CTL_INTENA_M |
+ GLINT_DYN_CTL_CLEARPBA_M |
+ GLINT_DYN_CTL_ITR_INDX_M |
+ GLINT_DYN_CTL_WB_ON_ITR_M);
+ }
+ else
+ ICE_WRITE_REG(hw, GLINT_DYN_CTL(0),
+ GLINT_DYN_CTL_INTENA_M |
+ GLINT_DYN_CTL_CLEARPBA_M |
+ GLINT_DYN_CTL_ITR_INDX_M |
+ GLINT_DYN_CTL_WB_ON_ITR_M);
+}
+
+static void
+ice_vsi_disable_queues_intr(struct ice_vsi *vsi)
+{
+ struct rte_eth_dev *dev = vsi->adapter->eth_dev;
+ struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+ struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+ uint16_t msix_intr, i;
+
+ /* disable interrupts and clear all the existing config */
+ for (i = 0; i < vsi->nb_qps; i++) {
+ ICE_WRITE_REG(hw, QINT_TQCTL(vsi->base_queue + i), 0);
+ ICE_WRITE_REG(hw, QINT_RQCTL(vsi->base_queue + i), 0);
+ rte_wmb();
+ }
+
+ if (rte_intr_allow_others(intr_handle))
+ /* vfio-pci */
+ for (i = 0; i < vsi->nb_msix; i++) {
+ msix_intr = vsi->msix_intr + i;
+ ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr),
+ GLINT_DYN_CTL_WB_ON_ITR_M);
+ }
+ else
+ /* igb_uio */
+ ICE_WRITE_REG(hw, GLINT_DYN_CTL(0), GLINT_DYN_CTL_WB_ON_ITR_M);
+}
+
+static int
+ice_rxq_intr_setup(struct rte_eth_dev *dev)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+ struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct ice_vsi *vsi = pf->main_vsi;
+ uint32_t intr_vector = 0;
+
+ rte_intr_disable(intr_handle);
+
+ /* check and configure queue intr-vector mapping */
+ if ((rte_intr_cap_multiple(intr_handle) ||
+ !RTE_ETH_DEV_SRIOV(dev).active) &&
+ dev->data->dev_conf.intr_conf.rxq != 0) {
+ intr_vector = dev->data->nb_rx_queues;
+ if (intr_vector > ICE_MAX_INTR_QUEUE_NUM) {
+ PMD_DRV_LOG(ERR, "At most %d intr queues supported",
+ ICE_MAX_INTR_QUEUE_NUM);
+ return -ENOTSUP;
+ }
+ if (rte_intr_efd_enable(intr_handle, intr_vector))
+ return -1;
+ }
+
+ if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
+ intr_handle->intr_vec =
+ rte_zmalloc("intr_vec", dev->data->nb_rx_queues * sizeof(int),
+ 0);
+ if (!intr_handle->intr_vec) {
+ PMD_DRV_LOG(ERR,
+ "Failed to allocate %d rx_queues intr_vec",
+ dev->data->nb_rx_queues);
+ return -ENOMEM;
+ }
+ }
+
+ /* Map queues with MSIX interrupt */
+ vsi->nb_used_qps = dev->data->nb_rx_queues;
+ ice_vsi_queues_bind_intr(vsi);
+
+ /* Enable interrupts for all the queues */
+ ice_vsi_enable_queues_intr(vsi);
+
+ rte_intr_enable(intr_handle);
+
+ return 0;
+}
+
static int
ice_dev_start(struct rte_eth_dev *dev)
{
@@ -1438,6 +1624,10 @@ static int ice_init_rss(struct ice_pf *pf)
goto rx_err;
}

+ /* enable Rx interrupts and map Rx queues to interrupt vectors */
+ if (ice_rxq_intr_setup(dev))
+ return -EIO;
+
ret = ice_aq_set_event_mask(hw, hw->port_info->lport,
((u16)(ICE_AQ_LINK_EVENT_LINK_FAULT |
ICE_AQ_LINK_EVENT_PHY_TEMP_ALARM |
@@ -1472,6 +1662,7 @@ static int ice_init_rss(struct ice_pf *pf)
{
struct rte_eth_dev_data *data = dev->data;
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_vsi *main_vsi = pf->main_vsi;
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
uint16_t i;
@@ -1491,6 +1682,9 @@ static int ice_init_rss(struct ice_pf *pf)
for (i = 0; i < data->nb_tx_queues; i++)
ice_tx_queue_stop(dev, i);

+ /* disable all queue interrupts */
+ ice_vsi_disable_queues_intr(main_vsi);
+
/* Clear all queues and release mbufs */
ice_clear_queues(dev);

@@ -2320,6 +2514,48 @@ static int ice_macaddr_set(struct rte_eth_dev *dev,
return 0;
}

+static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
+ uint16_t queue_id)
+{
+ struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+ struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ uint32_t val;
+ uint16_t msix_intr;
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return -E_RTE_SECONDARY;
+
+ msix_intr = intr_handle->intr_vec[queue_id];
+
+ val = GLINT_DYN_CTL_INTENA_M | GLINT_DYN_CTL_CLEARPBA_M |
+ GLINT_DYN_CTL_ITR_INDX_M;
+ val &= ~GLINT_DYN_CTL_WB_ON_ITR_M;
+
+ ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), val);
+ rte_intr_enable(&pci_dev->intr_handle);
+
+ return 0;
+}
+
+static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
+ uint16_t queue_id)
+{
+ struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+ struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ uint16_t msix_intr;
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return -E_RTE_SECONDARY;
+
+ msix_intr = intr_handle->intr_vec[queue_id];
+
+ ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), GLINT_DYN_CTL_WB_ON_ITR_M);
+
+ return 0;
+}
+
static int
ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
{
--
1.9.3
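For reference, a minimal sketch of one interrupt-driven receive step, under the
assumption that dev_conf.intr_conf.rxq was set before configuration so that
ice_rxq_intr_setup() ran at start; port and queue ids are illustrative:

#include <rte_ethdev.h>
#include <rte_interrupts.h>

static void
wait_for_rx(uint16_t port_id, uint16_t queue_id)
{
	struct rte_epoll_event event;

	/* one-time: register the queue's vector with the per-thread
	 * epoll instance
	 */
	rte_eth_dev_rx_intr_ctl_q(port_id, queue_id, RTE_EPOLL_PER_THREAD,
				  RTE_INTR_EVENT_ADD, NULL);

	/* re-arm the vector (.rx_queue_intr_enable) and sleep on it */
	rte_eth_dev_rx_intr_enable(port_id, queue_id);
	rte_epoll_wait(RTE_EPOLL_PER_THREAD, &event, 1, -1);

	/* mask it again (.rx_queue_intr_disable) before polling */
	rte_eth_dev_rx_intr_disable(port_id, queue_id);
}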
Wenzhuo Lu
2018-11-23 06:56:12 UTC
Permalink
Add the fw_version_get op.

Signed-off-by: Wenzhuo Lu <***@intel.com>
Signed-off-by: Qiming Yang <***@intel.com>
Signed-off-by: Xiaoyun Li <***@intel.com>
Signed-off-by: Jingjing Wu <***@intel.com>
---
drivers/net/ice/ice_ethdev.c | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 8d41516..2d5e73d 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -52,6 +52,8 @@ static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
uint16_t queue_id);
static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
uint16_t queue_id);
+static int ice_fw_version_get(struct rte_eth_dev *dev, char *fw_version,
+ size_t fw_size);
static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
uint16_t pvid, int on);

@@ -92,6 +94,7 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
.rss_hash_conf_get = ice_rss_hash_conf_get,
.rx_queue_intr_enable = ice_rx_queue_intr_enable,
.rx_queue_intr_disable = ice_rx_queue_intr_disable,
+ .fw_version_get = ice_fw_version_get,
.vlan_pvid_set = ice_vlan_pvid_set,
};

@@ -2557,6 +2560,24 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
}

static int
+ice_fw_version_get(struct rte_eth_dev *dev, char *fw_version, size_t fw_size)
+{
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ int ret;
+
+ ret = snprintf(fw_version, fw_size, "%d.%d.%05d %d.%d",
+ hw->fw_maj_ver, hw->fw_min_ver, hw->fw_build,
+ hw->api_maj_ver, hw->api_min_ver);
+
+ /* add the size of '\0' */
+ ret += 1;
+ if (fw_size < (u32)ret)
+ return ret;
+ else
+ return 0;
+}
+
+static int
ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
{
struct ice_hw *hw;
--
1.9.3
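A small usage sketch of the new op via the generic API; the 64-byte buffer
size is an arbitrary assumption:

#include <stdio.h>
#include <rte_ethdev.h>

static void
print_fw_version(uint16_t port_id)
{
	char fw_version[64];
	int ret;

	ret = rte_eth_dev_fw_version_get(port_id, fw_version,
					 sizeof(fw_version));
	if (ret == 0)
		printf("port %u firmware: %s\n", port_id, fw_version);
	else if (ret > 0)
		/* as implemented above, a positive return is the
		 * required buffer size including the trailing '\0'
		 */
		printf("need a %d-byte buffer\n", ret);
}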
Wenzhuo Lu
2018-11-23 06:56:13 UTC
Permalink
Add the following ops:
get_eeprom_length
get_eeprom

Signed-off-by: Wei Zhao <***@intel.com>
Signed-off-by: Wenzhuo Lu <***@intel.com>
---
drivers/net/ice/ice_ethdev.c | 45 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 45 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 2d5e73d..e54088e 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -56,6 +56,9 @@ static int ice_fw_version_get(struct rte_eth_dev *dev, char *fw_version,
size_t fw_size);
static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
uint16_t pvid, int on);
+static int ice_get_eeprom_length(struct rte_eth_dev *dev);
+static int ice_get_eeprom(struct rte_eth_dev *dev,
+ struct rte_dev_eeprom_info *eeprom);

static const struct rte_pci_id pci_id_ice_map[] = {
{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -96,6 +99,8 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
.rx_queue_intr_disable = ice_rx_queue_intr_disable,
.fw_version_get = ice_fw_version_get,
.vlan_pvid_set = ice_vlan_pvid_set,
+ .get_eeprom_length = ice_get_eeprom_length,
+ .get_eeprom = ice_get_eeprom,
};

static void
@@ -2661,3 +2666,43 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,

return 0;
}
+
+static int
+ice_get_eeprom_length(struct rte_eth_dev *dev)
+{
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+ /* Convert word count to byte count */
+ return hw->nvm.sr_words << 1;
+}
+
+static int
+ice_get_eeprom(struct rte_eth_dev *dev,
+ struct rte_dev_eeprom_info *eeprom)
+{
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ uint16_t *data = eeprom->data;
+ uint16_t offset, length, i;
+ enum ice_status ret_code = ICE_SUCCESS;
+
+ offset = eeprom->offset >> 1;
+ length = eeprom->length >> 1;
+
+ if (offset > hw->nvm.sr_words ||
+ offset + length > hw->nvm.sr_words) {
+ PMD_DRV_LOG(ERR, "Requested EEPROM bytes out of range.");
+ return -EINVAL;
+ }
+
+ eeprom->magic = hw->vendor_id | (hw->device_id << 16);
+
+ for (i = 0; i < length; i++) {
+ ret_code = ice_read_sr_word(hw, offset + i, &data[i]);
+ if (ret_code != ICE_SUCCESS) {
+ PMD_DRV_LOG(ERR, "EEPROM read failed.");
+ return -EIO;
+ }
+ }
+
+ return 0;
+}
--
1.9.3
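A hedged sketch of reading the NVM contents from an application through these
ops; error handling is minimal and the port id is an assumption:

#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <rte_ethdev.h>

static int
dump_eeprom(uint16_t port_id)
{
	struct rte_dev_eeprom_info info;
	int len, ret;

	/* word count << 1, as returned by ice_get_eeprom_length() */
	len = rte_eth_dev_get_eeprom_length(port_id);
	if (len <= 0)
		return len;

	memset(&info, 0, sizeof(info));
	info.offset = 0;
	info.length = len;
	info.data = malloc(len);
	if (!info.data)
		return -ENOMEM;

	ret = rte_eth_dev_get_eeprom(port_id, &info);
	/* on success, info.data holds the shadow RAM words and
	 * info.magic is vendor_id | (device_id << 16)
	 */
	free(info.data);
	return ret;
}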
Wenzhuo Lu
2018-11-23 06:56:14 UTC
Permalink
Add the following ops:
stats_get
stats_reset
xstats_get
xstats_get_names
xstats_reset

Signed-off-by: Wenzhuo Lu <***@intel.com>
Signed-off-by: Jia Guo <***@intel.com>
---
drivers/net/ice/ice_ethdev.c | 574 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 574 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index e54088e..3500d5b 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -59,6 +59,14 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
static int ice_get_eeprom_length(struct rte_eth_dev *dev);
static int ice_get_eeprom(struct rte_eth_dev *dev,
struct rte_dev_eeprom_info *eeprom);
+static int ice_stats_get(struct rte_eth_dev *dev,
+ struct rte_eth_stats *stats);
+static void ice_stats_reset(struct rte_eth_dev *dev);
+static int ice_xstats_get(struct rte_eth_dev *dev,
+ struct rte_eth_xstat *xstats, unsigned int n);
+static int ice_xstats_get_names(struct rte_eth_dev *dev,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int limit);

static const struct rte_pci_id pci_id_ice_map[] = {
{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -101,8 +109,100 @@ static int ice_get_eeprom(struct rte_eth_dev *dev,
.vlan_pvid_set = ice_vlan_pvid_set,
.get_eeprom_length = ice_get_eeprom_length,
.get_eeprom = ice_get_eeprom,
+ .stats_get = ice_stats_get,
+ .stats_reset = ice_stats_reset,
+ .xstats_get = ice_xstats_get,
+ .xstats_get_names = ice_xstats_get_names,
+ .xstats_reset = ice_stats_reset,
};

+/* store statistics names and its offset in stats structure */
+struct ice_xstats_name_off {
+ char name[RTE_ETH_XSTATS_NAME_SIZE];
+ unsigned int offset;
+};
+
+static const struct ice_xstats_name_off ice_stats_strings[] = {
+ {"rx_unicast_packets", offsetof(struct ice_eth_stats, rx_unicast)},
+ {"rx_multicast_packets", offsetof(struct ice_eth_stats, rx_multicast)},
+ {"rx_broadcast_packets", offsetof(struct ice_eth_stats, rx_broadcast)},
+ {"rx_dropped", offsetof(struct ice_eth_stats, rx_discards)},
+ {"rx_unknown_protocol_packets", offsetof(struct ice_eth_stats,
+ rx_unknown_protocol)},
+ {"tx_unicast_packets", offsetof(struct ice_eth_stats, tx_unicast)},
+ {"tx_multicast_packets", offsetof(struct ice_eth_stats, tx_multicast)},
+ {"tx_broadcast_packets", offsetof(struct ice_eth_stats, tx_broadcast)},
+ {"tx_dropped", offsetof(struct ice_eth_stats, tx_discards)},
+};
+
+#define ICE_NB_ETH_XSTATS (sizeof(ice_stats_strings) / \
+ sizeof(ice_stats_strings[0]))
+
+static const struct ice_xstats_name_off ice_hw_port_strings[] = {
+ {"tx_link_down_dropped", offsetof(struct ice_hw_port_stats,
+ tx_dropped_link_down)},
+ {"rx_crc_errors", offsetof(struct ice_hw_port_stats, crc_errors)},
+ {"rx_illegal_byte_errors", offsetof(struct ice_hw_port_stats,
+ illegal_bytes)},
+ {"rx_error_bytes", offsetof(struct ice_hw_port_stats, error_bytes)},
+ {"mac_local_errors", offsetof(struct ice_hw_port_stats,
+ mac_local_faults)},
+ {"mac_remote_errors", offsetof(struct ice_hw_port_stats,
+ mac_remote_faults)},
+ {"rx_len_errors", offsetof(struct ice_hw_port_stats,
+ rx_len_errors)},
+ {"tx_xon_packets", offsetof(struct ice_hw_port_stats, link_xon_tx)},
+ {"rx_xon_packets", offsetof(struct ice_hw_port_stats, link_xon_rx)},
+ {"tx_xoff_packets", offsetof(struct ice_hw_port_stats, link_xoff_tx)},
+ {"rx_xoff_packets", offsetof(struct ice_hw_port_stats, link_xoff_rx)},
+ {"rx_size_64_packets", offsetof(struct ice_hw_port_stats, rx_size_64)},
+ {"rx_size_65_to_127_packets", offsetof(struct ice_hw_port_stats,
+ rx_size_127)},
+ {"rx_size_128_to_255_packets", offsetof(struct ice_hw_port_stats,
+ rx_size_255)},
+ {"rx_size_256_to_511_packets", offsetof(struct ice_hw_port_stats,
+ rx_size_511)},
+ {"rx_size_512_to_1023_packets", offsetof(struct ice_hw_port_stats,
+ rx_size_1023)},
+ {"rx_size_1024_to_1522_packets", offsetof(struct ice_hw_port_stats,
+ rx_size_1522)},
+ {"rx_size_1523_to_max_packets", offsetof(struct ice_hw_port_stats,
+ rx_size_big)},
+ {"rx_undersized_errors", offsetof(struct ice_hw_port_stats,
+ rx_undersize)},
+ {"rx_oversize_errors", offsetof(struct ice_hw_port_stats,
+ rx_oversize)},
+ {"rx_mac_short_pkt_dropped", offsetof(struct ice_hw_port_stats,
+ mac_short_pkt_dropped)},
+ {"rx_fragmented_errors", offsetof(struct ice_hw_port_stats,
+ rx_fragments)},
+ {"rx_jabber_errors", offsetof(struct ice_hw_port_stats, rx_jabber)},
+ {"tx_size_64_packets", offsetof(struct ice_hw_port_stats, tx_size_64)},
+ {"tx_size_65_to_127_packets", offsetof(struct ice_hw_port_stats,
+ tx_size_127)},
+ {"tx_size_128_to_255_packets", offsetof(struct ice_hw_port_stats,
+ tx_size_255)},
+ {"tx_size_256_to_511_packets", offsetof(struct ice_hw_port_stats,
+ tx_size_511)},
+ {"tx_size_512_to_1023_packets", offsetof(struct ice_hw_port_stats,
+ tx_size_1023)},
+ {"tx_size_1024_to_1522_packets", offsetof(struct ice_hw_port_stats,
+ tx_size_1522)},
+ {"tx_size_1523_to_max_packets", offsetof(struct ice_hw_port_stats,
+ tx_size_big)},
+ {"tx_low_power_idle_status", offsetof(struct ice_hw_port_stats,
+ tx_lpi_status)},
+ {"rx_low_power_idle_status", offsetof(struct ice_hw_port_stats,
+ rx_lpi_status)},
+ {"tx_low_power_idle_count", offsetof(struct ice_hw_port_stats,
+ tx_lpi_count)},
+ {"rx_low_power_idle_count", offsetof(struct ice_hw_port_stats,
+ rx_lpi_count)},
+};
+
+#define ICE_NB_HW_PORT_XSTATS (sizeof(ice_hw_port_strings) / \
+ sizeof(ice_hw_port_strings[0]))
+
static void
ice_init_controlq_parameter(struct ice_hw *hw)
{
@@ -2706,3 +2806,477 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,

return 0;
}
+
+static void
+ice_stat_update_32(struct ice_hw *hw,
+ uint32_t reg,
+ bool offset_loaded,
+ uint64_t *offset,
+ uint64_t *stat)
+{
+ uint64_t new_data;
+
+ new_data = (uint64_t)ICE_READ_REG(hw, reg);
+ if (!offset_loaded)
+ *offset = new_data;
+
+ if (new_data >= *offset)
+ *stat = (uint64_t)(new_data - *offset);
+ else
+ *stat = (uint64_t)((new_data +
+ ((uint64_t)1 << ICE_32_BIT_WIDTH))
+ - *offset);
+}
+
+static void
+ice_stat_update_40(struct ice_hw *hw,
+ uint32_t hireg,
+ uint32_t loreg,
+ bool offset_loaded,
+ uint64_t *offset,
+ uint64_t *stat)
+{
+ uint64_t new_data;
+
+ new_data = (uint64_t)ICE_READ_REG(hw, loreg);
+ new_data |= (uint64_t)(ICE_READ_REG(hw, hireg) & ICE_8_BIT_MASK) <<
+ ICE_32_BIT_WIDTH;
+
+ if (!offset_loaded)
+ *offset = new_data;
+
+ if (new_data >= *offset)
+ *stat = new_data - *offset;
+ else
+ *stat = (uint64_t)((new_data +
+ ((uint64_t)1 << ICE_40_BIT_WIDTH)) -
+ *offset);
+
+ *stat &= ICE_40_BIT_MASK;
+}
+
+/* Get all the statistics of a VSI */
+static void
+ice_update_vsi_stats(struct ice_vsi *vsi)
+{
+ struct ice_eth_stats *oes = &vsi->eth_stats_offset;
+ struct ice_eth_stats *nes = &vsi->eth_stats;
+ struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+ int idx = rte_le_to_cpu_16(vsi->vsi_id);
+
+ ice_stat_update_40(hw, GLV_GORCH(idx), GLV_GORCL(idx),
+ vsi->offset_loaded, &oes->rx_bytes,
+ &nes->rx_bytes);
+ ice_stat_update_40(hw, GLV_UPRCH(idx), GLV_UPRCL(idx),
+ vsi->offset_loaded, &oes->rx_unicast,
+ &nes->rx_unicast);
+ ice_stat_update_40(hw, GLV_MPRCH(idx), GLV_MPRCL(idx),
+ vsi->offset_loaded, &oes->rx_multicast,
+ &nes->rx_multicast);
+ ice_stat_update_40(hw, GLV_BPRCH(idx), GLV_BPRCL(idx),
+ vsi->offset_loaded, &oes->rx_broadcast,
+ &nes->rx_broadcast);
+ /* exclude CRC bytes */
+ nes->rx_bytes -= (nes->rx_unicast + nes->rx_multicast +
+ nes->rx_broadcast) * ETHER_CRC_LEN;
+
+ ice_stat_update_32(hw, GLV_RDPC(idx), vsi->offset_loaded,
+ &oes->rx_discards, &nes->rx_discards);
+ /* GLV_REPC not supported */
+ /* GLV_RMPC not supported */
+ ice_stat_update_32(hw, GLSWID_RUPP(idx), vsi->offset_loaded,
+ &oes->rx_unknown_protocol,
+ &nes->rx_unknown_protocol);
+ ice_stat_update_40(hw, GLV_GOTCH(idx), GLV_GOTCL(idx),
+ vsi->offset_loaded, &oes->tx_bytes,
+ &nes->tx_bytes);
+ ice_stat_update_40(hw, GLV_UPTCH(idx), GLV_UPTCL(idx),
+ vsi->offset_loaded, &oes->tx_unicast,
+ &nes->tx_unicast);
+ ice_stat_update_40(hw, GLV_MPTCH(idx), GLV_MPTCL(idx),
+ vsi->offset_loaded, &oes->tx_multicast,
+ &nes->tx_multicast);
+ ice_stat_update_40(hw, GLV_BPTCH(idx), GLV_BPTCL(idx),
+ vsi->offset_loaded, &oes->tx_broadcast,
+ &nes->tx_broadcast);
+ /* GLV_TDPC not supported */
+ ice_stat_update_32(hw, GLV_TEPC(idx), vsi->offset_loaded,
+ &oes->tx_errors, &nes->tx_errors);
+ vsi->offset_loaded = true;
+
+ PMD_DRV_LOG(DEBUG, "************** VSI[%u] stats start **************",
+ vsi->vsi_id);
+ PMD_DRV_LOG(DEBUG, "rx_bytes: %"PRIu64"", nes->rx_bytes);
+ PMD_DRV_LOG(DEBUG, "rx_unicast: %"PRIu64"", nes->rx_unicast);
+ PMD_DRV_LOG(DEBUG, "rx_multicast: %"PRIu64"", nes->rx_multicast);
+ PMD_DRV_LOG(DEBUG, "rx_broadcast: %"PRIu64"", nes->rx_broadcast);
+ PMD_DRV_LOG(DEBUG, "rx_discards: %"PRIu64"", nes->rx_discards);
+ PMD_DRV_LOG(DEBUG, "rx_unknown_protocol: %"PRIu64"",
+ nes->rx_unknown_protocol);
+ PMD_DRV_LOG(DEBUG, "tx_bytes: %"PRIu64"", nes->tx_bytes);
+ PMD_DRV_LOG(DEBUG, "tx_unicast: %"PRIu64"", nes->tx_unicast);
+ PMD_DRV_LOG(DEBUG, "tx_multicast: %"PRIu64"", nes->tx_multicast);
+ PMD_DRV_LOG(DEBUG, "tx_broadcast: %"PRIu64"", nes->tx_broadcast);
+ PMD_DRV_LOG(DEBUG, "tx_discards: %"PRIu64"", nes->tx_discards);
+ PMD_DRV_LOG(DEBUG, "tx_errors: %"PRIu64"", nes->tx_errors);
+ PMD_DRV_LOG(DEBUG, "************** VSI[%u] stats end ****************",
+ vsi->vsi_id);
+}
+
+static void
+ice_read_stats_registers(struct ice_pf *pf, struct ice_hw *hw)
+{
+ struct ice_hw_port_stats *ns = &pf->stats; /* new stats */
+ struct ice_hw_port_stats *os = &pf->stats_offset; /* old stats */
+
+ /* Get statistics of struct ice_eth_stats */
+ ice_stat_update_40(hw, GLPRT_GORCH(hw->port_info->lport),
+ GLPRT_GORCL(hw->port_info->lport),
+ pf->offset_loaded, &os->eth.rx_bytes,
+ &ns->eth.rx_bytes);
+ ice_stat_update_40(hw, GLPRT_UPRCH(hw->port_info->lport),
+ GLPRT_UPRCL(hw->port_info->lport),
+ pf->offset_loaded, &os->eth.rx_unicast,
+ &ns->eth.rx_unicast);
+ ice_stat_update_40(hw, GLPRT_MPRCH(hw->port_info->lport),
+ GLPRT_MPRCL(hw->port_info->lport),
+ pf->offset_loaded, &os->eth.rx_multicast,
+ &ns->eth.rx_multicast);
+ ice_stat_update_40(hw, GLPRT_BPRCH(hw->port_info->lport),
+ GLPRT_BPRCL(hw->port_info->lport),
+ pf->offset_loaded, &os->eth.rx_broadcast,
+ &ns->eth.rx_broadcast);
+ ice_stat_update_32(hw, PRTRPB_RDPC,
+ pf->offset_loaded, &os->eth.rx_discards,
+ &ns->eth.rx_discards);
+
+ /* Workaround: CRC size should not be included in byte statistics,
+ * so subtract ETHER_CRC_LEN from the byte counter for each rx packet.
+ */
+ ns->eth.rx_bytes -= (ns->eth.rx_unicast + ns->eth.rx_multicast +
+ ns->eth.rx_broadcast) * ETHER_CRC_LEN;
+
+ /* GLPRT_REPC not supported */
+ /* GLPRT_RMPC not supported */
+ ice_stat_update_32(hw, GLSWID_RUPP(hw->port_info->lport),
+ pf->offset_loaded,
+ &os->eth.rx_unknown_protocol,
+ &ns->eth.rx_unknown_protocol);
+ ice_stat_update_40(hw, GLPRT_GOTCH(hw->port_info->lport),
+ GLPRT_GOTCL(hw->port_info->lport),
+ pf->offset_loaded, &os->eth.tx_bytes,
+ &ns->eth.tx_bytes);
+ ice_stat_update_40(hw, GLPRT_UPTCH(hw->port_info->lport),
+ GLPRT_UPTCL(hw->port_info->lport),
+ pf->offset_loaded, &os->eth.tx_unicast,
+ &ns->eth.tx_unicast);
+ ice_stat_update_40(hw, GLPRT_MPTCH(hw->port_info->lport),
+ GLPRT_MPTCL(hw->port_info->lport),
+ pf->offset_loaded, &os->eth.tx_multicast,
+ &ns->eth.tx_multicast);
+ ice_stat_update_40(hw, GLPRT_BPTCH(hw->port_info->lport),
+ GLPRT_BPTCL(hw->port_info->lport),
+ pf->offset_loaded, &os->eth.tx_broadcast,
+ &ns->eth.tx_broadcast);
+ ns->eth.tx_bytes -= (ns->eth.tx_unicast + ns->eth.tx_multicast +
+ ns->eth.tx_broadcast) * ETHER_CRC_LEN;
+
+ /* GLPRT_TEPC not supported */
+
+ /* additional port specific stats */
+ ice_stat_update_32(hw, GLPRT_TDOLD(hw->port_info->lport),
+ pf->offset_loaded, &os->tx_dropped_link_down,
+ &ns->tx_dropped_link_down);
+ ice_stat_update_32(hw, GLPRT_CRCERRS(hw->port_info->lport),
+ pf->offset_loaded, &os->crc_errors,
+ &ns->crc_errors);
+ ice_stat_update_32(hw, GLPRT_ILLERRC(hw->port_info->lport),
+ pf->offset_loaded, &os->illegal_bytes,
+ &ns->illegal_bytes);
+ /* GLPRT_ERRBC not supported */
+ ice_stat_update_32(hw, GLPRT_MLFC(hw->port_info->lport),
+ pf->offset_loaded, &os->mac_local_faults,
+ &ns->mac_local_faults);
+ ice_stat_update_32(hw, GLPRT_MRFC(hw->port_info->lport),
+ pf->offset_loaded, &os->mac_remote_faults,
+ &ns->mac_remote_faults);
+
+ ice_stat_update_32(hw, GLPRT_RLEC(hw->port_info->lport),
+ pf->offset_loaded, &os->rx_len_errors,
+ &ns->rx_len_errors);
+
+ ice_stat_update_32(hw, GLPRT_LXONRXC(hw->port_info->lport),
+ pf->offset_loaded, &os->link_xon_rx,
+ &ns->link_xon_rx);
+ ice_stat_update_32(hw, GLPRT_LXOFFRXC(hw->port_info->lport),
+ pf->offset_loaded, &os->link_xoff_rx,
+ &ns->link_xoff_rx);
+ ice_stat_update_32(hw, GLPRT_LXONTXC(hw->port_info->lport),
+ pf->offset_loaded, &os->link_xon_tx,
+ &ns->link_xon_tx);
+ ice_stat_update_32(hw, GLPRT_LXOFFTXC(hw->port_info->lport),
+ pf->offset_loaded, &os->link_xoff_tx,
+ &ns->link_xoff_tx);
+ ice_stat_update_40(hw, GLPRT_PRC64H(hw->port_info->lport),
+ GLPRT_PRC64L(hw->port_info->lport),
+ pf->offset_loaded, &os->rx_size_64,
+ &ns->rx_size_64);
+ ice_stat_update_40(hw, GLPRT_PRC127H(hw->port_info->lport),
+ GLPRT_PRC127L(hw->port_info->lport),
+ pf->offset_loaded, &os->rx_size_127,
+ &ns->rx_size_127);
+ ice_stat_update_40(hw, GLPRT_PRC255H(hw->port_info->lport),
+ GLPRT_PRC255L(hw->port_info->lport),
+ pf->offset_loaded, &os->rx_size_255,
+ &ns->rx_size_255);
+ ice_stat_update_40(hw, GLPRT_PRC511H(hw->port_info->lport),
+ GLPRT_PRC511L(hw->port_info->lport),
+ pf->offset_loaded, &os->rx_size_511,
+ &ns->rx_size_511);
+ ice_stat_update_40(hw, GLPRT_PRC1023H(hw->port_info->lport),
+ GLPRT_PRC1023L(hw->port_info->lport),
+ pf->offset_loaded, &os->rx_size_1023,
+ &ns->rx_size_1023);
+ ice_stat_update_40(hw, GLPRT_PRC1522H(hw->port_info->lport),
+ GLPRT_PRC1522L(hw->port_info->lport),
+ pf->offset_loaded, &os->rx_size_1522,
+ &ns->rx_size_1522);
+ ice_stat_update_40(hw, GLPRT_PRC9522H(hw->port_info->lport),
+ GLPRT_PRC9522L(hw->port_info->lport),
+ pf->offset_loaded, &os->rx_size_big,
+ &ns->rx_size_big);
+ ice_stat_update_32(hw, GLPRT_RUC(hw->port_info->lport),
+ pf->offset_loaded, &os->rx_undersize,
+ &ns->rx_undersize);
+ ice_stat_update_32(hw, GLPRT_RFC(hw->port_info->lport),
+ pf->offset_loaded, &os->rx_fragments,
+ &ns->rx_fragments);
+ ice_stat_update_32(hw, GLPRT_ROC(hw->port_info->lport),
+ pf->offset_loaded, &os->rx_oversize,
+ &ns->rx_oversize);
+ ice_stat_update_32(hw, GLPRT_RJC(hw->port_info->lport),
+ pf->offset_loaded, &os->rx_jabber,
+ &ns->rx_jabber);
+ ice_stat_update_40(hw, GLPRT_PTC64H(hw->port_info->lport),
+ GLPRT_PTC64L(hw->port_info->lport),
+ pf->offset_loaded, &os->tx_size_64,
+ &ns->tx_size_64);
+ ice_stat_update_40(hw, GLPRT_PTC127H(hw->port_info->lport),
+ GLPRT_PTC127L(hw->port_info->lport),
+ pf->offset_loaded, &os->tx_size_127,
+ &ns->tx_size_127);
+ ice_stat_update_40(hw, GLPRT_PTC255H(hw->port_info->lport),
+ GLPRT_PTC255L(hw->port_info->lport),
+ pf->offset_loaded, &os->tx_size_255,
+ &ns->tx_size_255);
+ ice_stat_update_40(hw, GLPRT_PTC511H(hw->port_info->lport),
+ GLPRT_PTC511L(hw->port_info->lport),
+ pf->offset_loaded, &os->tx_size_511,
+ &ns->tx_size_511);
+ ice_stat_update_40(hw, GLPRT_PTC1023H(hw->port_info->lport),
+ GLPRT_PTC1023L(hw->port_info->lport),
+ pf->offset_loaded, &os->tx_size_1023,
+ &ns->tx_size_1023);
+ ice_stat_update_40(hw, GLPRT_PTC1522H(hw->port_info->lport),
+ GLPRT_PTC1522L(hw->port_info->lport),
+ pf->offset_loaded, &os->tx_size_1522,
+ &ns->tx_size_1522);
+ ice_stat_update_40(hw, GLPRT_PTC9522H(hw->port_info->lport),
+ GLPRT_PTC9522L(hw->port_info->lport),
+ pf->offset_loaded, &os->tx_size_big,
+ &ns->tx_size_big);
+
+ /* GLPRT_MSPDC not supported */
+ /* GLPRT_XEC not supported */
+
+ pf->offset_loaded = true;
+
+ if (pf->main_vsi)
+ ice_update_vsi_stats(pf->main_vsi);
+}
+
+/* Get all statistics of a port */
+static int
+ice_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_hw_port_stats *ns = &pf->stats; /* new stats */
+
+ /* call read registers - updates values, now write them to struct */
+ ice_read_stats_registers(pf, hw);
+
+ stats->ipackets = ns->eth.rx_unicast +
+ ns->eth.rx_multicast +
+ ns->eth.rx_broadcast -
+ ns->eth.rx_discards -
+ pf->main_vsi->eth_stats.rx_discards;
+ stats->opackets = ns->eth.tx_unicast +
+ ns->eth.tx_multicast +
+ ns->eth.tx_broadcast;
+ stats->ibytes = ns->eth.rx_bytes;
+ stats->obytes = ns->eth.tx_bytes;
+ stats->oerrors = ns->eth.tx_errors +
+ pf->main_vsi->eth_stats.tx_errors;
+
+ /* Rx Errors */
+ stats->imissed = ns->eth.rx_discards +
+ pf->main_vsi->eth_stats.rx_discards;
+ stats->ierrors = ns->crc_errors +
+ ns->rx_undersize +
+ ns->rx_oversize + ns->rx_fragments + ns->rx_jabber;
+
+ PMD_DRV_LOG(DEBUG, "*************** PF stats start *****************");
+ PMD_DRV_LOG(DEBUG, "rx_bytes: %"PRIu64"", ns->eth.rx_bytes);
+ PMD_DRV_LOG(DEBUG, "rx_unicast: %"PRIu64"", ns->eth.rx_unicast);
+ PMD_DRV_LOG(DEBUG, "rx_multicast:%"PRIu64"", ns->eth.rx_multicast);
+ PMD_DRV_LOG(DEBUG, "rx_broadcast:%"PRIu64"", ns->eth.rx_broadcast);
+ PMD_DRV_LOG(DEBUG, "rx_discards:%"PRIu64"", ns->eth.rx_discards);
+ PMD_DRV_LOG(DEBUG, "vsi rx_discards:%"PRIu64"",
+ pf->main_vsi->eth_stats.rx_discards);
+ PMD_DRV_LOG(DEBUG, "rx_unknown_protocol: %"PRIu64"",
+ ns->eth.rx_unknown_protocol);
+ PMD_DRV_LOG(DEBUG, "tx_bytes: %"PRIu64"", ns->eth.tx_bytes);
+ PMD_DRV_LOG(DEBUG, "tx_unicast: %"PRIu64"", ns->eth.tx_unicast);
+ PMD_DRV_LOG(DEBUG, "tx_multicast:%"PRIu64"", ns->eth.tx_multicast);
+ PMD_DRV_LOG(DEBUG, "tx_broadcast:%"PRIu64"", ns->eth.tx_broadcast);
+ PMD_DRV_LOG(DEBUG, "tx_discards:%"PRIu64"", ns->eth.tx_discards);
+ PMD_DRV_LOG(DEBUG, "vsi tx_discards:%"PRIu64"",
+ pf->main_vsi->eth_stats.tx_discards);
+ PMD_DRV_LOG(DEBUG, "tx_errors: %"PRIu64"", ns->eth.tx_errors);
+
+ PMD_DRV_LOG(DEBUG, "tx_dropped_link_down: %"PRIu64"",
+ ns->tx_dropped_link_down);
+ PMD_DRV_LOG(DEBUG, "crc_errors: %"PRIu64"", ns->crc_errors);
+ PMD_DRV_LOG(DEBUG, "illegal_bytes: %"PRIu64"",
+ ns->illegal_bytes);
+ PMD_DRV_LOG(DEBUG, "error_bytes: %"PRIu64"", ns->error_bytes);
+ PMD_DRV_LOG(DEBUG, "mac_local_faults: %"PRIu64"",
+ ns->mac_local_faults);
+ PMD_DRV_LOG(DEBUG, "mac_remote_faults: %"PRIu64"",
+ ns->mac_remote_faults);
+ PMD_DRV_LOG(DEBUG, "link_xon_rx: %"PRIu64"", ns->link_xon_rx);
+ PMD_DRV_LOG(DEBUG, "link_xoff_rx: %"PRIu64"", ns->link_xoff_rx);
+ PMD_DRV_LOG(DEBUG, "link_xon_tx: %"PRIu64"", ns->link_xon_tx);
+ PMD_DRV_LOG(DEBUG, "link_xoff_tx: %"PRIu64"", ns->link_xoff_tx);
+ PMD_DRV_LOG(DEBUG, "rx_size_64: %"PRIu64"", ns->rx_size_64);
+ PMD_DRV_LOG(DEBUG, "rx_size_127: %"PRIu64"", ns->rx_size_127);
+ PMD_DRV_LOG(DEBUG, "rx_size_255: %"PRIu64"", ns->rx_size_255);
+ PMD_DRV_LOG(DEBUG, "rx_size_511: %"PRIu64"", ns->rx_size_511);
+ PMD_DRV_LOG(DEBUG, "rx_size_1023: %"PRIu64"", ns->rx_size_1023);
+ PMD_DRV_LOG(DEBUG, "rx_size_1522: %"PRIu64"", ns->rx_size_1522);
+ PMD_DRV_LOG(DEBUG, "rx_size_big: %"PRIu64"", ns->rx_size_big);
+ PMD_DRV_LOG(DEBUG, "rx_undersize: %"PRIu64"", ns->rx_undersize);
+ PMD_DRV_LOG(DEBUG, "rx_fragments: %"PRIu64"", ns->rx_fragments);
+ PMD_DRV_LOG(DEBUG, "rx_oversize: %"PRIu64"", ns->rx_oversize);
+ PMD_DRV_LOG(DEBUG, "rx_jabber: %"PRIu64"", ns->rx_jabber);
+ PMD_DRV_LOG(DEBUG, "tx_size_64: %"PRIu64"", ns->tx_size_64);
+ PMD_DRV_LOG(DEBUG, "tx_size_127: %"PRIu64"", ns->tx_size_127);
+ PMD_DRV_LOG(DEBUG, "tx_size_255: %"PRIu64"", ns->tx_size_255);
+ PMD_DRV_LOG(DEBUG, "tx_size_511: %"PRIu64"", ns->tx_size_511);
+ PMD_DRV_LOG(DEBUG, "tx_size_1023: %"PRIu64"", ns->tx_size_1023);
+ PMD_DRV_LOG(DEBUG, "tx_size_1522: %"PRIu64"", ns->tx_size_1522);
+ PMD_DRV_LOG(DEBUG, "tx_size_big: %"PRIu64"", ns->tx_size_big);
+ PMD_DRV_LOG(DEBUG, "rx_len_errors: %"PRIu64"", ns->rx_len_errors);
+ PMD_DRV_LOG(DEBUG, "************* PF stats end ****************");
+ return 0;
+}
+
+/* Reset the statistics */
+static void
+ice_stats_reset(struct rte_eth_dev *dev)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+ /* Mark PF and VSI stats to update the offset, aka "reset" */
+ pf->offset_loaded = false;
+ if (pf->main_vsi)
+ pf->main_vsi->offset_loaded = false;
+
+ /* read the stats, reading current register values into offset */
+ ice_read_stats_registers(pf, hw);
+}
+
+static uint32_t
+ice_xstats_calc_num(void)
+{
+ uint32_t num;
+
+ num = ICE_NB_ETH_XSTATS + ICE_NB_HW_PORT_XSTATS;
+
+ return num;
+}
+
+static int
+ice_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
+ unsigned int n)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ unsigned int i;
+ unsigned int count;
+ struct ice_hw_port_stats *hw_stats = &pf->stats;
+
+ count = ice_xstats_calc_num();
+ if (n < count)
+ return count;
+
+ ice_read_stats_registers(pf, hw);
+
+ if (!xstats)
+ return 0;
+
+ count = 0;
+
+ /* Get stats from ice_eth_stats struct */
+ for (i = 0; i < ICE_NB_ETH_XSTATS; i++) {
+ xstats[count].value =
+ *(uint64_t *)((char *)&hw_stats->eth +
+ ice_stats_strings[i].offset);
+ xstats[count].id = count;
+ count++;
+ }
+
+ /* Get individual stats from the ice_hw_port_stats struct */
+ for (i = 0; i < ICE_NB_HW_PORT_XSTATS; i++) {
+ xstats[count].value =
+ *(uint64_t *)((char *)hw_stats +
+ ice_hw_port_strings[i].offset);
+ xstats[count].id = count;
+ count++;
+ }
+
+ return count;
+}
+
+static int ice_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+ struct rte_eth_xstat_name *xstats_names,
+ __rte_unused unsigned int limit)
+{
+ unsigned int count = 0;
+ unsigned int i;
+
+ if (!xstats_names)
+ return ice_xstats_calc_num();
+
+ /* Note: limit checked in rte_eth_xstats_names() */
+
+ /* Get stats from ice_eth_stats struct */
+ for (i = 0; i < ICE_NB_ETH_XSTATS; i++) {
+ snprintf(xstats_names[count].name,
+ sizeof(xstats_names[count].name),
+ "%s", ice_stats_strings[i].name);
+ count++;
+ }
+
+ /* Get individual stats from the ice_hw_port_stats struct */
+ for (i = 0; i < ICE_NB_HW_PORT_XSTATS; i++) {
+ snprintf(xstats_names[count].name,
+ sizeof(xstats_names[count].name),
+ "%s", ice_hw_port_strings[i].name);
+ count++;
+ }
+
+ return count;
+}
--
1.9.3
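For context, a sketch of the usual two-call xstats pattern an application would
use against these ops; the allocation sizes come from the first call:

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>

static void
dump_xstats(uint16_t port_id)
{
	struct rte_eth_xstat_name *names;
	struct rte_eth_xstat *values;
	int n, i;

	/* a NULL array asks for the required count, mirroring
	 * ice_xstats_get_names() above
	 */
	n = rte_eth_xstats_get_names(port_id, NULL, 0);
	if (n <= 0)
		return;

	names = calloc(n, sizeof(*names));
	values = calloc(n, sizeof(*values));
	if (names && values &&
	    rte_eth_xstats_get_names(port_id, names, n) == n &&
	    rte_eth_xstats_get(port_id, values, n) == n) {
		for (i = 0; i < n; i++)
			printf("%s: %" PRIu64 "\n",
			       names[i].name, values[i].value);
	}
	free(names);
	free(values);
}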
Wenzhuo Lu
2018-11-23 06:56:15 UTC
Permalink
Add the following ops:
rxq_info_get
txq_info_get
rx_queue_count

Signed-off-by: Wenzhuo Lu <***@intel.com>
Signed-off-by: Qiming Yang <***@intel.com>
Signed-off-by: Xiaoyun Li <***@intel.com>
Signed-off-by: Jingjing Wu <***@intel.com>
---
drivers/net/ice/ice_ethdev.c | 3 ++
drivers/net/ice/ice_lan_rxtx.c | 66 ++++++++++++++++++++++++++++++++++++++++++
drivers/net/ice/ice_rxtx.h | 5 ++++
3 files changed, 74 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 3500d5b..6485577 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -107,8 +107,11 @@ static int ice_xstats_get_names(struct rte_eth_dev *dev,
.rx_queue_intr_disable = ice_rx_queue_intr_disable,
.fw_version_get = ice_fw_version_get,
.vlan_pvid_set = ice_vlan_pvid_set,
+ .rxq_info_get = ice_rxq_info_get,
+ .txq_info_get = ice_txq_info_get,
.get_eeprom_length = ice_get_eeprom_length,
.get_eeprom = ice_get_eeprom,
+ .rx_queue_count = ice_rx_queue_count,
.stats_get = ice_stats_get,
.stats_reset = ice_stats_reset,
.xstats_get = ice_xstats_get,
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index a9e8c03..d4b7277 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -945,6 +945,72 @@
}

void
+ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo)
+{
+ struct ice_rx_queue *rxq;
+
+ rxq = dev->data->rx_queues[queue_id];
+
+ qinfo->mp = rxq->mp;
+ qinfo->scattered_rx = dev->data->scattered_rx;
+ qinfo->nb_desc = rxq->nb_rx_desc;
+
+ qinfo->conf.rx_free_thresh = rxq->rx_free_thresh;
+ qinfo->conf.rx_drop_en = rxq->drop_en;
+ qinfo->conf.rx_deferred_start = rxq->rx_deferred_start;
+}
+
+void
+ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo)
+{
+ struct ice_tx_queue *txq;
+
+ txq = dev->data->tx_queues[queue_id];
+
+ qinfo->nb_desc = txq->nb_tx_desc;
+
+ qinfo->conf.tx_thresh.pthresh = txq->pthresh;
+ qinfo->conf.tx_thresh.hthresh = txq->hthresh;
+ qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+
+ qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
+ qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
+ qinfo->conf.offloads = txq->offloads;
+ qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
+}
+
+uint32_t
+ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+#define ICE_RXQ_SCAN_INTERVAL 4
+ volatile union ice_rx_desc *rxdp;
+ struct ice_rx_queue *rxq;
+ uint16_t desc = 0;
+
+ rxq = dev->data->rx_queues[rx_queue_id];
+ rxdp = &rxq->rx_ring[rxq->rx_tail];
+ while ((desc < rxq->nb_rx_desc) &&
+ ((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+ ICE_RXD_QW1_STATUS_M) >> ICE_RXD_QW1_STATUS_S) &
+ (1 << ICE_RX_DESC_STATUS_DD_S)) {
+ /**
+ * Check the DD bit of only every 4th descriptor to avoid
+ * polling too frequently and degrading performance.
+ */
+ desc += ICE_RXQ_SCAN_INTERVAL;
+ rxdp += ICE_RXQ_SCAN_INTERVAL;
+ if (rxq->rx_tail + desc >= rxq->nb_rx_desc)
+ rxdp = &(rxq->rx_ring[rxq->rx_tail +
+ desc - rxq->nb_rx_desc]);
+ }
+
+ return desc;
+}
+
+void
ice_clear_queues(struct rte_eth_dev *dev)
{
uint16_t i;
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 871646f..bad2b89 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -134,6 +134,11 @@ int ice_tx_queue_setup(struct rte_eth_dev *dev,
void ice_tx_queue_release(void *txq);
void ice_clear_queues(struct rte_eth_dev *dev);
void ice_free_queues(struct rte_eth_dev *dev);
+uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
void ice_set_default_ptype_table(struct rte_eth_dev *dev);
const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
+void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo);
+void ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo);
#endif /* _ICE_RXTX_H_ */
--
1.9.3
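A brief usage sketch of the queue introspection ops; the ids are illustrative:

#include <stdio.h>
#include <rte_ethdev.h>

static void
show_rxq(uint16_t port_id, uint16_t queue_id)
{
	struct rte_eth_rxq_info qinfo;
	int used;

	if (rte_eth_rx_queue_info_get(port_id, queue_id, &qinfo) == 0)
		printf("rxq %u: %u descriptors, drop_en %u\n",
		       queue_id, qinfo.nb_desc, qinfo.conf.rx_drop_en);

	/* counted in steps of ICE_RXQ_SCAN_INTERVAL (4), as above */
	used = rte_eth_rx_queue_count(port_id, queue_id);
	if (used >= 0)
		printf("rxq %u: %d descriptors done\n", queue_id, used);
}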
Wenzhuo Lu
2018-11-23 06:56:16 UTC
Permalink
Signed-off-by: Wenzhuo Lu <***@intel.com>
Signed-off-by: Qiming Yang <***@intel.com>
Signed-off-by: Xiaoyun Li <***@intel.com>
Signed-off-by: Jingjing Wu <***@intel.com>
---
drivers/net/ice/ice_ethdev.c | 14 +
drivers/net/ice/ice_lan_rxtx.c | 568 ++++++++++++++++++++++++++++++++++++++++-
drivers/net/ice/ice_rxtx.h | 8 +
3 files changed, 588 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 6485577..21a251f 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1267,7 +1267,19 @@ struct ice_xstats_name_off {
int ret;

dev->dev_ops = &ice_eth_dev_ops;
+ dev->rx_pkt_burst = ice_recv_pkts;
+ dev->tx_pkt_burst = ice_xmit_pkts;
+ dev->tx_pkt_prepare = ice_prep_pkts;

+ /* For secondary processes, we don't initialise any further as primary
+ * has already done this work. Only check we don't need a different
+ * RX function.
+ */
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
+ ice_set_rx_function(dev);
+ ice_set_tx_function(dev);
+ return 0;
+ }
ice_set_default_ptype_table(dev);
pci_dev = RTE_DEV_TO_PCI(dev->device);
intr_handle = &pci_dev->intr_handle;
@@ -1735,6 +1747,8 @@ static int ice_init_rss(struct ice_pf *pf)
goto rx_err;
}

+ ice_set_rx_function(dev);
+
/* enable Rx interrput and mapping Rx queue to interrupt vector */
if (ice_rxq_intr_setup(dev))
return -EIO;
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index d4b7277..71ce048 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -908,8 +908,81 @@
rte_free(q);
}

+/* Translate the rx descriptor status to pkt flags */
+static inline uint64_t
+ice_rxd_status_to_pkt_flags(uint64_t qword)
+{
+ uint64_t flags;
+
+ /* Check if RSS_HASH */
+ flags = (((qword >> ICE_RX_DESC_STATUS_FLTSTAT_S) &
+ ICE_RX_DESC_FLTSTAT_RSS_HASH) ==
+ ICE_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0;
+
+ return flags;
+}
+
+/* Rx L3/L4 checksum */
+static inline uint64_t
+ice_rxd_error_to_pkt_flags(uint64_t qword)
+{
+ uint64_t flags = 0;
+ uint64_t error_bits = (qword >> ICE_RXD_QW1_ERROR_S);
+
+ if (likely((error_bits & ICE_RX_ERR_BITS) == 0)) {
+ flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+ return flags;
+ }
+
+ if (unlikely(error_bits & (1 << ICE_RX_DESC_ERROR_IPE_S)))
+ flags |= PKT_RX_IP_CKSUM_BAD;
+ else
+ flags |= PKT_RX_IP_CKSUM_GOOD;
+
+ if (unlikely(error_bits & (1 << ICE_RX_DESC_ERROR_L4E_S)))
+ flags |= PKT_RX_L4_CKSUM_BAD;
+ else
+ flags |= PKT_RX_L4_CKSUM_GOOD;
+
+ if (unlikely(error_bits & (1 << ICE_RX_DESC_ERROR_EIPE_S)))
+ flags |= PKT_RX_EIP_CKSUM_BAD;
+
+ return flags;
+}
+
+static inline void
+ice_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union ice_rx_desc *rxdp)
+{
+ if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+ (1 << ICE_RX_DESC_STATUS_L2TAG1P_S)) {
+ mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+ mb->vlan_tci =
+ rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1);
+ PMD_RX_LOG(DEBUG, "Descriptor l2tag1: %u",
+ rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1));
+ } else {
+ mb->vlan_tci = 0;
+ }
+
+#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+ if (rte_le_to_cpu_16(rxdp->wb.qword2.ext_status) &
+ (1 << ICE_RX_DESC_EXT_STATUS_L2TAG2P_S)) {
+ mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
+ PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+ mb->vlan_tci_outer = mb->vlan_tci;
+ mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_2);
+ PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
+ rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_1),
+ rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_2));
+ } else {
+ mb->vlan_tci_outer = 0;
+ }
+#endif
+ PMD_RX_LOG(DEBUG, "Mbuf vlan_tci: %u, vlan_tci_outer: %u",
+ mb->vlan_tci, mb->vlan_tci_outer);
+}
const uint32_t *
-ice_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+ice_dev_supported_ptypes_get(struct rte_eth_dev *dev)
{
static const uint32_t ptypes[] = {
/* refers to ice_get_default_pkt_type() */
@@ -941,7 +1014,9 @@
RTE_PTYPE_UNKNOWN
};

- return ptypes;
+ if (dev->rx_pkt_burst == ice_recv_pkts)
+ return ptypes;
+ return NULL;
}

void
@@ -1052,6 +1127,495 @@
dev->data->nb_tx_queues = 0;
}

+uint16_t
+ice_recv_pkts(void *rx_queue,
+ struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ struct ice_rx_queue *rxq = rx_queue;
+ volatile union ice_rx_desc *rx_ring = rxq->rx_ring;
+ volatile union ice_rx_desc *rxdp;
+ union ice_rx_desc rxd;
+ struct ice_rx_entry *sw_ring = rxq->sw_ring;
+ struct ice_rx_entry *rxe;
+ struct rte_mbuf *nmb; /* new allocated mbuf */
+ struct rte_mbuf *rxm; /* pointer to store old mbuf in SW ring */
+ uint16_t rx_id = rxq->rx_tail;
+ uint16_t nb_rx = 0;
+ uint16_t nb_hold = 0;
+ uint16_t rx_packet_len;
+ uint32_t rx_status;
+ uint64_t qword1;
+ uint64_t dma_addr;
+ uint64_t pkt_flags = 0;
+ uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+ struct rte_eth_dev *dev;
+
+ while (nb_rx < nb_pkts) {
+ rxdp = &rx_ring[rx_id];
+ qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+ rx_status = (qword1 & ICE_RXD_QW1_STATUS_M) >>
+ ICE_RXD_QW1_STATUS_S;
+
+ /* Check the DD bit first */
+ if (!(rx_status & (1 << ICE_RX_DESC_STATUS_DD_S)))
+ break;
+
+ /* allocate mbuf */
+ nmb = rte_mbuf_raw_alloc(rxq->mp);
+ if (unlikely(!nmb)) {
+ dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
+ dev->data->rx_mbuf_alloc_failed++;
+ break;
+ }
+ rxd = *rxdp; /* copy the descriptor in the ring to a temp variable */
+
+ nb_hold++;
+ rxe = &sw_ring[rx_id]; /* get corresponding mbuf in SW ring */
+ rx_id++;
+ if (unlikely(rx_id == rxq->nb_rx_desc))
+ rx_id = 0;
+ rxm = rxe->mbuf;
+ rxe->mbuf = nmb;
+ dma_addr =
+ rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+
+ /**
+ * fill the read format of the descriptor with the physical
+ * address of the newly allocated mbuf: nmb
+ */
+ rxdp->read.hdr_addr = 0;
+ rxdp->read.pkt_addr = dma_addr;
+
+ /* calculate rx_packet_len of the received pkt */
+ rx_packet_len = ((qword1 & ICE_RXD_QW1_LEN_PBUF_M) >>
+ ICE_RXD_QW1_LEN_PBUF_S) - rxq->crc_len;
+
+ /* fill old mbuf with received descriptor: rxd */
+ rxm->data_off = RTE_PKTMBUF_HEADROOM;
+ rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM));
+ rxm->nb_segs = 1;
+ rxm->next = NULL;
+ rxm->pkt_len = rx_packet_len;
+ rxm->data_len = rx_packet_len;
+ rxm->port = rxq->port_id;
+ ice_rxd_to_vlan_tci(rxm, rxdp);
+ rxm->packet_type = ptype_tbl[(uint8_t)((qword1 &
+ ICE_RXD_QW1_PTYPE_M) >>
+ ICE_RXD_QW1_PTYPE_S)];
+ pkt_flags = ice_rxd_status_to_pkt_flags(qword1);
+ pkt_flags |= ice_rxd_error_to_pkt_flags(qword1);
+ if (pkt_flags & PKT_RX_RSS_HASH)
+ rxm->hash.rss =
+ rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
+ rxm->ol_flags |= pkt_flags;
+ /* copy old mbuf to rx_pkts */
+ rx_pkts[nb_rx++] = rxm;
+ }
+ rxq->rx_tail = rx_id;
+ /**
+ * If the number of free RX descriptors is greater than the RX free
+ * threshold of the queue, advance the receive tail register.
+ * Update that register with the value of the last processed RX
+ * descriptor minus 1.
+ */
+ nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+ if (nb_hold > rxq->rx_free_thresh) {
+ rx_id = (uint16_t)(rx_id == 0 ?
+ (rxq->nb_rx_desc - 1) : (rx_id - 1));
+ /* write TAIL register */
+ ICE_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+ nb_hold = 0;
+ }
+ rxq->nb_rx_hold = nb_hold;
+
+ /* return the number of packets received in this burst */
+ return nb_rx;
+}
+
+static inline void
+ice_txd_enable_checksum(uint64_t ol_flags,
+ uint32_t *td_cmd,
+ uint32_t *td_offset,
+ union ice_tx_offload tx_offload)
+{
+ /* L2 length must be set. */
+ *td_offset |= (tx_offload.l2_len >> 1) <<
+ ICE_TX_DESC_LEN_MACLEN_S;
+
+ /* Enable L3 checksum offloads */
+ if (ol_flags & PKT_TX_IP_CKSUM) {
+ *td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4_CSUM;
+ *td_offset |= (tx_offload.l3_len >> 2) <<
+ ICE_TX_DESC_LEN_IPLEN_S;
+ } else if (ol_flags & PKT_TX_IPV4) {
+ *td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4;
+ *td_offset |= (tx_offload.l3_len >> 2) <<
+ ICE_TX_DESC_LEN_IPLEN_S;
+ } else if (ol_flags & PKT_TX_IPV6) {
+ *td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV6;
+ *td_offset |= (tx_offload.l3_len >> 2) <<
+ ICE_TX_DESC_LEN_IPLEN_S;
+ }
+
+ if (ol_flags & PKT_TX_TCP_SEG) {
+ *td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_TCP;
+ *td_offset |= (tx_offload.l4_len >> 2) <<
+ ICE_TX_DESC_LEN_L4_LEN_S;
+ return;
+ }
+
+ /* Enable L4 checksum offloads */
+ switch (ol_flags & PKT_TX_L4_MASK) {
+ case PKT_TX_TCP_CKSUM:
+ *td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_TCP;
+ *td_offset |= (sizeof(struct tcp_hdr) >> 2) <<
+ ICE_TX_DESC_LEN_L4_LEN_S;
+ break;
+ case PKT_TX_SCTP_CKSUM:
+ *td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_SCTP;
+ *td_offset |= (sizeof(struct sctp_hdr) >> 2) <<
+ ICE_TX_DESC_LEN_L4_LEN_S;
+ break;
+ case PKT_TX_UDP_CKSUM:
+ *td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_UDP;
+ *td_offset |= (sizeof(struct udp_hdr) >> 2) <<
+ ICE_TX_DESC_LEN_L4_LEN_S;
+ break;
+ default:
+ break;
+ }
+}
+
+static inline int
+ice_xmit_cleanup(struct ice_tx_queue *txq)
+{
+ struct ice_tx_entry *sw_ring = txq->sw_ring;
+ volatile struct ice_tx_desc *txd = txq->tx_ring;
+ uint16_t last_desc_cleaned = txq->last_desc_cleaned;
+ uint16_t nb_tx_desc = txq->nb_tx_desc;
+ uint16_t desc_to_clean_to;
+ uint16_t nb_tx_to_clean;
+
+ /* Determine the last descriptor needing to be cleaned */
+ desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->tx_rs_thresh);
+ if (desc_to_clean_to >= nb_tx_desc)
+ desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
+
+ /* Check to make sure the last descriptor to clean is done */
+ desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
+ if (!(txd[desc_to_clean_to].cmd_type_offset_bsz &
+ rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))) {
+ PMD_TX_FREE_LOG(DEBUG, "TX descriptor %4u is not done "
+ "(port=%d queue=%d) value=0x%lx\n",
+ desc_to_clean_to,
+ txq->port_id, txq->queue_id,
+ txd[desc_to_clean_to].cmd_type_offset_bsz);
+ /* Failed to clean any descriptors */
+ return -1;
+ }
+
+ /* Figure out how many descriptors will be cleaned */
+ if (last_desc_cleaned > desc_to_clean_to)
+ nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
+ desc_to_clean_to);
+ else
+ nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
+ last_desc_cleaned);
+
+ /* The last descriptor to clean is done, so that means all the
+ * descriptors from the last descriptor that was cleaned
+ * up to the last descriptor with the RS bit set
+ * are done. Only reset the threshold descriptor.
+ */
+ txd[desc_to_clean_to].cmd_type_offset_bsz = 0;
+
+ /* Update the txq to reflect the last descriptor that was cleaned */
+ txq->last_desc_cleaned = desc_to_clean_to;
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + nb_tx_to_clean);
+
+ return 0;
+}
+
+/* Check if the context descriptor is needed for TX offloading */
+static inline uint16_t
+ice_calc_context_desc(uint64_t flags)
+{
+ static const uint64_t mask = PKT_TX_TCP_SEG | PKT_TX_QINQ_PKT;
+
+ return (flags & mask) ? 1 : 0;
+}
+
+/* set ice TSO context descriptor */
+static inline uint64_t
+ice_set_tso_ctx(struct rte_mbuf *mbuf, union ice_tx_offload tx_offload)
+{
+ uint64_t ctx_desc = 0;
+ uint32_t cd_cmd, hdr_len, cd_tso_len;
+
+ if (!tx_offload.l4_len) {
+ PMD_TX_LOG(DEBUG, "L4 length set to 0");
+ return ctx_desc;
+ }
+
+ /**
+ * for a non-tunneled packet, the outer_l2_len and
+ * outer_l3_len must be 0.
+ */
+ hdr_len = tx_offload.outer_l2_len +
+ tx_offload.outer_l3_len +
+ tx_offload.l2_len +
+ tx_offload.l3_len +
+ tx_offload.l4_len;
+
+ cd_cmd = ICE_TX_CTX_DESC_TSO;
+ cd_tso_len = mbuf->pkt_len - hdr_len;
+ ctx_desc |= ((uint64_t)cd_cmd << ICE_TXD_CTX_QW1_CMD_S) |
+ ((uint64_t)cd_tso_len << ICE_TXD_CTX_QW1_TSO_LEN_S) |
+ ((uint64_t)mbuf->tso_segsz << ICE_TXD_CTX_QW1_MSS_S);
+
+ return ctx_desc;
+}
+
+uint16_t
+ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ struct ice_tx_queue *txq;
+ volatile struct ice_tx_desc *tx_ring;
+ volatile struct ice_tx_desc *txd;
+ struct ice_tx_entry *sw_ring;
+ struct ice_tx_entry *txe, *txn;
+ struct rte_mbuf *tx_pkt;
+ struct rte_mbuf *m_seg;
+ uint16_t tx_id;
+ uint16_t nb_tx;
+ uint16_t nb_used;
+ uint16_t nb_ctx;
+ uint32_t td_cmd = 0;
+ uint32_t td_offset = 0;
+ uint32_t td_tag = 0;
+ uint16_t tx_last;
+ uint64_t buf_dma_addr;
+ uint64_t ol_flags;
+ union ice_tx_offload tx_offload = {0};
+
+ txq = tx_queue;
+ sw_ring = txq->sw_ring;
+ tx_ring = txq->tx_ring;
+ tx_id = txq->tx_tail;
+ txe = &sw_ring[tx_id];
+
+ /* Check if the descriptor ring needs to be cleaned. */
+ if (txq->nb_tx_free < txq->tx_free_thresh)
+ ice_xmit_cleanup(txq);
+
+ for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+ tx_pkt = *tx_pkts++;
+
+ td_cmd = 0;
+ ol_flags = tx_pkt->ol_flags;
+ tx_offload.l2_len = tx_pkt->l2_len;
+ tx_offload.l3_len = tx_pkt->l3_len;
+ tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
+ tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
+ tx_offload.l4_len = tx_pkt->l4_len;
+ tx_offload.tso_segsz = tx_pkt->tso_segsz;
+ /* Calculate the number of context descriptors needed. */
+ nb_ctx = ice_calc_context_desc(ol_flags);
+
+ /* The number of descriptors that must be allocated for
+ * a packet equals the number of segments of that packet,
+ * plus one context descriptor if one is needed.
+ */
+ nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);
+ tx_last = (uint16_t)(tx_id + nb_used - 1);
+
+ /* Circular ring */
+ if (tx_last >= txq->nb_tx_desc)
+ tx_last = (uint16_t)(tx_last - txq->nb_tx_desc);
+
+ if (nb_used > txq->nb_tx_free) {
+ if (ice_xmit_cleanup(txq) != 0) {
+ if (nb_tx == 0)
+ return 0;
+ goto end_of_tx;
+ }
+ if (unlikely(nb_used > txq->tx_rs_thresh)) {
+ while (nb_used > txq->nb_tx_free) {
+ if (ice_xmit_cleanup(txq) != 0) {
+ if (nb_tx == 0)
+ return 0;
+ goto end_of_tx;
+ }
+ }
+ }
+ }
+
+ /* Descriptor based VLAN insertion */
+ if (ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
+ td_cmd |= ICE_TX_DESC_CMD_IL2TAG1;
+ td_tag = tx_pkt->vlan_tci;
+ }
+
+ /* Enable checksum offloading */
+ if (ol_flags & ICE_TX_CKSUM_OFFLOAD_MASK) {
+ ice_txd_enable_checksum(ol_flags, &td_cmd,
+ &td_offset, tx_offload);
+ }
+
+ if (nb_ctx) {
+ /* Setup TX context descriptor if required */
+ volatile struct ice_tx_ctx_desc *ctx_txd =
+ (volatile struct ice_tx_ctx_desc *)
+ &tx_ring[tx_id];
+ uint16_t cd_l2tag2 = 0;
+ uint64_t cd_type_cmd_tso_mss = ICE_TX_DESC_DTYPE_CTX;
+
+ txn = &sw_ring[txe->next_id];
+ RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);
+ if (txe->mbuf) {
+ rte_pktmbuf_free_seg(txe->mbuf);
+ txe->mbuf = NULL;
+ }
+
+ if (ol_flags & PKT_TX_TCP_SEG)
+ cd_type_cmd_tso_mss |=
+ ice_set_tso_ctx(tx_pkt, tx_offload);
+
+ /* TX context descriptor based double VLAN insert */
+ if (ol_flags & PKT_TX_QINQ_PKT) {
+ cd_l2tag2 = tx_pkt->vlan_tci_outer;
+ cd_type_cmd_tso_mss |=
+ ((uint64_t)ICE_TX_CTX_DESC_IL2TAG2 <<
+ ICE_TXD_CTX_QW1_CMD_S);
+ }
+ ctx_txd->l2tag2 = rte_cpu_to_le_16(cd_l2tag2);
+ ctx_txd->qw1 =
+ rte_cpu_to_le_64(cd_type_cmd_tso_mss);
+
+ txe->last_id = tx_last;
+ tx_id = txe->next_id;
+ txe = txn;
+ }
+ m_seg = tx_pkt;
+
+ do {
+ txd = &tx_ring[tx_id];
+ txn = &sw_ring[txe->next_id];
+
+ if (txe->mbuf)
+ rte_pktmbuf_free_seg(txe->mbuf);
+ txe->mbuf = m_seg;
+
+ /* Setup TX Descriptor */
+ buf_dma_addr = rte_mbuf_data_iova(m_seg);
+ txd->buf_addr = rte_cpu_to_le_64(buf_dma_addr);
+ txd->cmd_type_offset_bsz =
+ rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DATA |
+ ((uint64_t)td_cmd << ICE_TXD_QW1_CMD_S) |
+ ((uint64_t)td_offset << ICE_TXD_QW1_OFFSET_S) |
+ ((uint64_t)m_seg->data_len <<
+ ICE_TXD_QW1_TX_BUF_SZ_S) |
+ ((uint64_t)td_tag << ICE_TXD_QW1_L2TAG1_S));
+
+ txe->last_id = tx_last;
+ tx_id = txe->next_id;
+ txe = txn;
+ m_seg = m_seg->next;
+ } while (m_seg);
+
+ /* set the End of Packet (EOP) bit on the last descriptor */
+ td_cmd |= ICE_TX_DESC_CMD_EOP;
+ txq->nb_tx_used = (uint16_t)(txq->nb_tx_used + nb_used);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_used);
+
+ /* set RS bit on the last descriptor of one packet */
+ if (txq->nb_tx_used >= txq->tx_rs_thresh) {
+ PMD_TX_FREE_LOG(DEBUG,
+ "Setting RS bit on TXD id="
+ "%4u (port=%d queue=%d)",
+ tx_last, txq->port_id, txq->queue_id);
+
+ td_cmd |= ICE_TX_DESC_CMD_RS;
+
+ /* Update txq RS bit counters */
+ txq->nb_tx_used = 0;
+ }
+ txd->cmd_type_offset_bsz |=
+ rte_cpu_to_le_64(((uint64_t)td_cmd) <<
+ ICE_TXD_QW1_CMD_S);
+ }
+end_of_tx:
+ rte_wmb();
+
+ /* update Tail register */
+ ICE_PCI_REG_WRITE(txq->qtx_tail, tx_id);
+ txq->tx_tail = tx_id;
+
+ return nb_tx;
+}
+
+void __attribute__((cold))
+ice_set_rx_function(struct rte_eth_dev *dev)
+{
+ dev->rx_pkt_burst = ice_recv_pkts;
+}
+
+/*********************************************************************
+ *
+ * TX prep functions
+ *
+ **********************************************************************/
+/* The default values of TSO MSS */
+#define ICE_MIN_TSO_MSS 64
+#define ICE_MAX_TSO_MSS 9728
+#define ICE_MAX_TSO_FRAME_SIZE 262144
+uint16_t
+ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts)
+{
+ int i, ret;
+ uint64_t ol_flags;
+ struct rte_mbuf *m;
+
+ for (i = 0; i < nb_pkts; i++) {
+ m = tx_pkts[i];
+ ol_flags = m->ol_flags;
+
+ if (ol_flags & PKT_TX_TCP_SEG &&
+ (m->tso_segsz < ICE_MIN_TSO_MSS ||
+ m->tso_segsz > ICE_MAX_TSO_MSS ||
+ m->pkt_len > ICE_MAX_TSO_FRAME_SIZE)) {
+ /**
+ * An MSS outside the supported range is considered malicious.
+ */
+ rte_errno = EINVAL; /* rte_errno takes positive errno values */
+ return i;
+ }
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+ ret = rte_validate_tx_offload(m);
+ if (ret != 0) {
+ rte_errno = -ret;
+ return i;
+ }
+#endif
+ ret = rte_net_intel_cksum_prepare(m);
+ if (ret != 0) {
+ rte_errno = -ret;
+ return i;
+ }
+ }
+ return i;
+}
+
+void __attribute__((cold))
+ice_set_tx_function(struct rte_eth_dev *dev)
+{
+ dev->tx_pkt_burst = ice_xmit_pkts;
+ dev->tx_pkt_prepare = ice_prep_pkts;
+}
+
/* For each value it means, datasheet of hardware can tell more details
*
* @note: fix ice_dev_supported_ptypes_get() if any change here.
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index bad2b89..e0218b3 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -134,6 +134,14 @@ int ice_tx_queue_setup(struct rte_eth_dev *dev,
void ice_tx_queue_release(void *txq);
void ice_clear_queues(struct rte_eth_dev *dev);
void ice_free_queues(struct rte_eth_dev *dev);
+uint16_t ice_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts);
+uint16_t ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts);
+void ice_set_rx_function(struct rte_eth_dev *dev);
+uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts);
+void ice_set_tx_function(struct rte_eth_dev *dev);
uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
void ice_set_default_ptype_table(struct rte_eth_dev *dev);
const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
--
1.9.3
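
A usage note on the Tx prepare path above: ice_prep_pkts() is registered as
dev->tx_pkt_prepare, so an application reaches it through
rte_eth_tx_prepare() before rte_eth_tx_burst(). A minimal sketch of that
call pattern (port_id, queue_id, pkts and nb_pkts are illustrative names,
not taken from the patch):

	uint16_t nb_prep, nb_sent;

	/* validate offload requests (TSO MSS bounds, checksum flags) first */
	nb_prep = rte_eth_tx_prepare(port_id, queue_id, pkts, nb_pkts);
	if (nb_prep < nb_pkts)
		printf("mbuf %u rejected: %s\n", nb_prep,
		       rte_strerror(rte_errno));

	/* transmit only the packets that passed preparation */
	nb_sent = rte_eth_tx_burst(port_id, queue_id, pkts, nb_prep);
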
Wenzhuo Lu
2018-11-23 06:56:17 UTC
Permalink
Add the scattered and bulk-allocation RX functions.
Add the simple TX function.

Signed-off-by: Wenzhuo Lu <***@intel.com>
Signed-off-by: Qiming Yang <***@intel.com>
Signed-off-by: Xiaoyun Li <***@intel.com>
Signed-off-by: Jingjing Wu <***@intel.com>
---
drivers/net/ice/ice_lan_rxtx.c | 660 ++++++++++++++++++++++++++++++++++++++++-
1 file changed, 658 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index 71ce048..07ab677 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -981,6 +981,431 @@
PMD_RX_LOG(DEBUG, "Mbuf vlan_tci: %u, vlan_tci_outer: %u",
mb->vlan_tci, mb->vlan_tci_outer);
}
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+#define ICE_LOOK_AHEAD 8
+#if (ICE_LOOK_AHEAD != 8)
+#error "PMD ICE: ICE_LOOK_AHEAD must be 8\n"
+#endif
+static inline int
+ice_rx_scan_hw_ring(struct ice_rx_queue *rxq)
+{
+ volatile union ice_rx_desc *rxdp;
+ struct ice_rx_entry *rxep;
+ struct rte_mbuf *mb;
+ uint16_t pkt_len;
+ uint64_t qword1;
+ uint32_t rx_status;
+ int32_t s[ICE_LOOK_AHEAD], nb_dd;
+ int32_t i, j, nb_rx = 0;
+ uint64_t pkt_flags = 0;
+ uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+
+ rxdp = &rxq->rx_ring[rxq->rx_tail];
+ rxep = &rxq->sw_ring[rxq->rx_tail];
+
+ qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+ rx_status = (qword1 & ICE_RXD_QW1_STATUS_M) >> ICE_RXD_QW1_STATUS_S;
+
+ /* Make sure there is at least 1 packet to receive */
+ if (!(rx_status & (1 << ICE_RX_DESC_STATUS_DD_S)))
+ return 0;
+
+ /**
+ * Scan LOOK_AHEAD descriptors at a time to determine which
+ * descriptors reference packets that are ready to be received.
+ */
+ for (i = 0; i < ICE_RX_MAX_BURST; i += ICE_LOOK_AHEAD,
+ rxdp += ICE_LOOK_AHEAD, rxep += ICE_LOOK_AHEAD) {
+ /* Read desc statuses backwards to avoid race condition */
+ for (j = ICE_LOOK_AHEAD - 1; j >= 0; j--) {
+ qword1 = rte_le_to_cpu_64(
+ rxdp[j].wb.qword1.status_error_len);
+ s[j] = (qword1 & ICE_RXD_QW1_STATUS_M) >>
+ ICE_RXD_QW1_STATUS_S;
+ }
+
+ rte_smp_rmb();
+
+ /* Compute how many status bits were set */
+ for (j = 0, nb_dd = 0; j < ICE_LOOK_AHEAD; j++)
+ nb_dd += s[j] & (1 << ICE_RX_DESC_STATUS_DD_S);
+
+ nb_rx += nb_dd;
+
+ /* Translate descriptor info to mbuf parameters */
+ for (j = 0; j < nb_dd; j++) {
+ mb = rxep[j].mbuf;
+ qword1 = rte_le_to_cpu_64(
+ rxdp[j].wb.qword1.status_error_len);
+ pkt_len = ((qword1 & ICE_RXD_QW1_LEN_PBUF_M) >>
+ ICE_RXD_QW1_LEN_PBUF_S) - rxq->crc_len;
+ mb->data_len = pkt_len;
+ mb->pkt_len = pkt_len;
+ mb->ol_flags = 0;
+ pkt_flags = ice_rxd_status_to_pkt_flags(qword1);
+ pkt_flags |= ice_rxd_error_to_pkt_flags(qword1);
+ if (pkt_flags & PKT_RX_RSS_HASH)
+ mb->hash.rss =
+ rte_le_to_cpu_32(
+ rxdp[j].wb.qword0.hi_dword.rss);
+ mb->packet_type = ptype_tbl[(uint8_t)(
+ (qword1 &
+ ICE_RXD_QW1_PTYPE_M) >>
+ ICE_RXD_QW1_PTYPE_S)];
+ ice_rxd_to_vlan_tci(mb, &rxdp[j]);
+
+ mb->ol_flags |= pkt_flags;
+ }
+
+ for (j = 0; j < ICE_LOOK_AHEAD; j++)
+ rxq->rx_stage[i + j] = rxep[j].mbuf;
+
+ if (nb_dd != ICE_LOOK_AHEAD)
+ break;
+ }
+
+ /* Clear software ring entries */
+ for (i = 0; i < nb_rx; i++)
+ rxq->sw_ring[rxq->rx_tail + i].mbuf = NULL;
+
+ PMD_RX_LOG(DEBUG, "ice_rx_scan_hw_ring: "
+ "port_id=%u, queue_id=%u, nb_rx=%d",
+ rxq->port_id, rxq->queue_id, nb_rx);
+
+ return nb_rx;
+}
+
+static inline uint16_t
+ice_rx_fill_from_stage(struct ice_rx_queue *rxq,
+ struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ uint16_t i;
+ struct rte_mbuf **stage = &rxq->rx_stage[rxq->rx_next_avail];
+
+ nb_pkts = (uint16_t)RTE_MIN(nb_pkts, rxq->rx_nb_avail);
+
+ for (i = 0; i < nb_pkts; i++)
+ rx_pkts[i] = stage[i];
+
+ rxq->rx_nb_avail = (uint16_t)(rxq->rx_nb_avail - nb_pkts);
+ rxq->rx_next_avail = (uint16_t)(rxq->rx_next_avail + nb_pkts);
+
+ return nb_pkts;
+}
+
+static inline int
+ice_rx_alloc_bufs(struct ice_rx_queue *rxq)
+{
+ volatile union ice_rx_desc *rxdp;
+ struct ice_rx_entry *rxep;
+ struct rte_mbuf *mb;
+ uint16_t alloc_idx, i;
+ uint64_t dma_addr;
+ int diag;
+
+ /* Allocate buffers in bulk */
+ alloc_idx = (uint16_t)(rxq->rx_free_trigger -
+ (rxq->rx_free_thresh - 1));
+ rxep = &rxq->sw_ring[alloc_idx];
+ diag = rte_mempool_get_bulk(rxq->mp, (void *)rxep,
+ rxq->rx_free_thresh);
+ if (unlikely(diag != 0)) {
+ PMD_RX_LOG(ERR, "Failed to get mbufs in bulk");
+ return -ENOMEM;
+ }
+
+ rxdp = &rxq->rx_ring[alloc_idx];
+ for (i = 0; i < rxq->rx_free_thresh; i++) {
+ if (likely(i < (rxq->rx_free_thresh - 1)))
+ /* Prefetch next mbuf */
+ rte_prefetch0(rxep[i + 1].mbuf);
+
+ mb = rxep[i].mbuf;
+ rte_mbuf_refcnt_set(mb, 1);
+ mb->next = NULL;
+ mb->data_off = RTE_PKTMBUF_HEADROOM;
+ mb->nb_segs = 1;
+ mb->port = rxq->port_id;
+ dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mb));
+ rxdp[i].read.hdr_addr = 0;
+ rxdp[i].read.pkt_addr = dma_addr;
+ }
+
+ /* Update the Rx tail register */
+ rte_wmb();
+ ICE_PCI_REG_WRITE(rxq->qrx_tail, rxq->rx_free_trigger);
+
+ rxq->rx_free_trigger =
+ (uint16_t)(rxq->rx_free_trigger + rxq->rx_free_thresh);
+ if (rxq->rx_free_trigger >= rxq->nb_rx_desc)
+ rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+
+ return 0;
+}
+
+static inline uint16_t
+rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+ struct ice_rx_queue *rxq = (struct ice_rx_queue *)rx_queue;
+ uint16_t nb_rx = 0;
+ struct rte_eth_dev *dev;
+
+ if (!nb_pkts)
+ return 0;
+
+ if (rxq->rx_nb_avail)
+ return ice_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+ nb_rx = (uint16_t)ice_rx_scan_hw_ring(rxq);
+ rxq->rx_next_avail = 0;
+ rxq->rx_nb_avail = nb_rx;
+ rxq->rx_tail = (uint16_t)(rxq->rx_tail + nb_rx);
+
+ if (rxq->rx_tail > rxq->rx_free_trigger) {
+ if (ice_rx_alloc_bufs(rxq) != 0) {
+ uint16_t i, j;
+
+ dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
+ dev->data->rx_mbuf_alloc_failed +=
+ rxq->rx_free_thresh;
+ PMD_RX_LOG(DEBUG, "Rx mbuf alloc failed for "
+ "port_id=%u, queue_id=%u",
+ rxq->port_id, rxq->queue_id);
+ rxq->rx_nb_avail = 0;
+ rxq->rx_tail = (uint16_t)(rxq->rx_tail - nb_rx);
+ for (i = 0, j = rxq->rx_tail; i < nb_rx; i++, j++)
+ rxq->sw_ring[j].mbuf = rxq->rx_stage[i];
+
+ return 0;
+ }
+ }
+
+ if (rxq->rx_tail >= rxq->nb_rx_desc)
+ rxq->rx_tail = 0;
+
+ if (rxq->rx_nb_avail)
+ return ice_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+ return 0;
+}
+
+static uint16_t
+ice_recv_pkts_bulk_alloc(void *rx_queue,
+ struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ uint16_t nb_rx = 0;
+ uint16_t n;
+ uint16_t count;
+
+ if (unlikely(nb_pkts == 0))
+ return nb_rx;
+
+ if (likely(nb_pkts <= ICE_RX_MAX_BURST))
+ return rx_recv_pkts(rx_queue, rx_pkts, nb_pkts);
+
+ while (nb_pkts) {
+ n = RTE_MIN(nb_pkts, ICE_RX_MAX_BURST);
+ count = rx_recv_pkts(rx_queue, &rx_pkts[nb_rx], n);
+ nb_rx = (uint16_t)(nb_rx + count);
+ nb_pkts = (uint16_t)(nb_pkts - count);
+ if (count < n)
+ break;
+ }
+
+ return nb_rx;
+}
+#else
+static uint16_t
+ice_recv_pkts_bulk_alloc(void __rte_unused *rx_queue,
+ struct rte_mbuf __rte_unused **rx_pkts,
+ uint16_t __rte_unused nb_pkts)
+{
+ return 0;
+}
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+
+static uint16_t
+ice_recv_scattered_pkts(void *rx_queue,
+ struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ struct ice_rx_queue *rxq = rx_queue;
+ volatile union ice_rx_desc *rx_ring = rxq->rx_ring;
+ volatile union ice_rx_desc *rxdp;
+ union ice_rx_desc rxd;
+ struct ice_rx_entry *sw_ring = rxq->sw_ring;
+ struct ice_rx_entry *rxe;
+ struct rte_mbuf *first_seg = rxq->pkt_first_seg;
+ struct rte_mbuf *last_seg = rxq->pkt_last_seg;
+ struct rte_mbuf *nmb; /* new allocated mbuf */
+ struct rte_mbuf *rxm; /* pointer to store old mbuf in SW ring */
+ uint16_t rx_id = rxq->rx_tail;
+ uint16_t nb_rx = 0;
+ uint16_t nb_hold = 0;
+ uint16_t rx_packet_len;
+ uint32_t rx_status;
+ uint64_t qword1;
+ uint64_t dma_addr;
+ uint64_t pkt_flags = 0;
+ uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+ struct rte_eth_dev *dev;
+
+ while (nb_rx < nb_pkts) {
+ rxdp = &rx_ring[rx_id];
+ qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+ rx_status = (qword1 & ICE_RXD_QW1_STATUS_M) >>
+ ICE_RXD_QW1_STATUS_S;
+
+ /* Check the DD bit first */
+ if (!(rx_status & (1 << ICE_RX_DESC_STATUS_DD_S)))
+ break;
+
+ /* allocate mbuf */
+ nmb = rte_mbuf_raw_alloc(rxq->mp);
+ if (unlikely(!nmb)) {
+ dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
+ dev->data->rx_mbuf_alloc_failed++;
+ break;
+ }
+ rxd = *rxdp; /* copy the descriptor in the ring to a temp variable */
+
+ nb_hold++;
+ rxe = &sw_ring[rx_id]; /* get corresponding mbuf in SW ring */
+ rx_id++;
+ if (unlikely(rx_id == rxq->nb_rx_desc))
+ rx_id = 0;
+
+ /* Prefetch next mbuf */
+ rte_prefetch0(sw_ring[rx_id].mbuf);
+
+ /**
+ * When next RX descriptor is on a cache line boundary,
+ * prefetch the next 4 RX descriptors and next 8 pointers
+ * to mbufs.
+ */
+ if ((rx_id & 0x3) == 0) {
+ rte_prefetch0(&rx_ring[rx_id]);
+ rte_prefetch0(&sw_ring[rx_id]);
+ }
+
+ rxm = rxe->mbuf;
+ rxe->mbuf = nmb;
+ dma_addr =
+ rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+
+ /* Set data buffer address and data length of the mbuf */
+ rxdp->read.hdr_addr = 0;
+ rxdp->read.pkt_addr = dma_addr;
+ rx_packet_len = (qword1 & ICE_RXD_QW1_LEN_PBUF_M) >>
+ ICE_RXD_QW1_LEN_PBUF_S;
+ rxm->data_len = rx_packet_len;
+ rxm->data_off = RTE_PKTMBUF_HEADROOM;
+ ice_rxd_to_vlan_tci(rxm, rxdp);
+ rxm->packet_type = ptype_tbl[(uint8_t)((qword1 &
+ ICE_RXD_QW1_PTYPE_M) >>
+ ICE_RXD_QW1_PTYPE_S)];
+
+ /**
+ * If this is the first buffer of the received packet, set the
+ * pointer to the first mbuf of the packet and initialize its
+ * context. Otherwise, update the total length and the number
+ * of segments of the current scattered packet, and update the
+ * pointer to the last mbuf of the current packet.
+ */
+ if (!first_seg) {
+ first_seg = rxm;
+ first_seg->nb_segs = 1;
+ first_seg->pkt_len = rx_packet_len;
+ } else {
+ first_seg->pkt_len =
+ (uint16_t)(first_seg->pkt_len +
+ rx_packet_len);
+ first_seg->nb_segs++;
+ last_seg->next = rxm;
+ }
+
+ /**
+ * If this is not the last buffer of the received packet,
+ * update the pointer to the last mbuf of the current scattered
+ * packet and continue to parse the RX ring.
+ */
+ if (!(rx_status & (1 << ICE_RX_DESC_STATUS_EOF_S))) {
+ last_seg = rxm;
+ continue;
+ }
+
+ /**
+ * This is the last buffer of the received packet. If the CRC
+ * is not stripped by the hardware:
+ * - Subtract the CRC length from the total packet length.
+ * - If the last buffer only contains the whole CRC or a part
+ * of it, free the mbuf associated to the last buffer. If part
+ * of the CRC is also contained in the previous mbuf, subtract
+ * the length of that CRC part from the data length of the
+ * previous mbuf.
+ */
+ rxm->next = NULL;
+ if (unlikely(rxq->crc_len > 0)) {
+ first_seg->pkt_len -= ETHER_CRC_LEN;
+ if (rx_packet_len <= ETHER_CRC_LEN) {
+ rte_pktmbuf_free_seg(rxm);
+ first_seg->nb_segs--;
+ last_seg->data_len =
+ (uint16_t)(last_seg->data_len -
+ (ETHER_CRC_LEN - rx_packet_len));
+ last_seg->next = NULL;
+ } else
+ rxm->data_len = (uint16_t)(rx_packet_len -
+ ETHER_CRC_LEN);
+ }
+
+ first_seg->port = rxq->port_id;
+ first_seg->ol_flags = 0;
+
+ pkt_flags = ice_rxd_status_to_pkt_flags(qword1);
+ pkt_flags |= ice_rxd_error_to_pkt_flags(qword1);
+ if (pkt_flags & PKT_RX_RSS_HASH)
+ first_seg->hash.rss =
+ rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
+
+ first_seg->ol_flags |= pkt_flags;
+ /* Prefetch data of first segment, if configured to do so. */
+ rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr,
+ first_seg->data_off));
+ rx_pkts[nb_rx++] = first_seg;
+ first_seg = NULL;
+ }
+
+ /* Record index of the next RX descriptor to probe. */
+ rxq->rx_tail = rx_id;
+ rxq->pkt_first_seg = first_seg;
+ rxq->pkt_last_seg = last_seg;
+
+ /**
+ * If the number of free RX descriptors is greater than the RX free
+ * threshold of the queue, advance the Receive Descriptor Tail (RDT)
+ * register. Update the RDT with the value of the last processed RX
+ * descriptor minus 1, to guarantee that the RDT register is never
+ * equal to the RDH register, which creates a "full" ring situation
+ * from the hardware point of view.
+ */
+ nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+ if (nb_hold > rxq->rx_free_thresh) {
+ rx_id = (uint16_t)(rx_id == 0 ?
+ (rxq->nb_rx_desc - 1) : (rx_id - 1));
+ /* write TAIL register */
+ ICE_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+ nb_hold = 0;
+ }
+ rxq->nb_rx_hold = nb_hold;
+
+ /* return the number of packets received in this burst */
+ return nb_rx;
+}
+
const uint32_t *
ice_dev_supported_ptypes_get(struct rte_eth_dev *dev)
{
@@ -1014,7 +1439,11 @@
RTE_PTYPE_UNKNOWN
};

- if (dev->rx_pkt_burst == ice_recv_pkts)
+ if (dev->rx_pkt_burst == ice_recv_pkts ||
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+ dev->rx_pkt_burst == ice_recv_pkts_bulk_alloc ||
+#endif
+ dev->rx_pkt_burst == ice_recv_scattered_pkts)
return ptypes;
return NULL;
}
@@ -1337,6 +1766,20 @@
return 0;
}

+/* Construct the TX descriptor cmd/type/offset/bsz word */
+static inline uint64_t
+ice_build_ctob(uint32_t td_cmd,
+ uint32_t td_offset,
+ uint16_t size,
+ uint32_t td_tag)
+{
+ return rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DATA |
+ ((uint64_t)td_cmd << ICE_TXD_QW1_CMD_S) |
+ ((uint64_t)td_offset << ICE_TXD_QW1_OFFSET_S) |
+ ((uint64_t)size << ICE_TXD_QW1_TX_BUF_SZ_S) |
+ ((uint64_t)td_tag << ICE_TXD_QW1_L2TAG1_S));
+}
+
/* Check if the context descriptor is needed for TX offloading */
static inline uint16_t
ice_calc_context_desc(uint64_t flags)
@@ -1555,10 +1998,213 @@
return nb_tx;
}

+static inline int __attribute__((always_inline))
+ice_tx_free_bufs(struct ice_tx_queue *txq)
+{
+ struct ice_tx_entry *txep;
+ uint16_t i;
+
+ if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+ rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
+ rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
+ return 0;
+
+ txep = &txq->sw_ring[txq->tx_next_dd - (txq->tx_rs_thresh - 1)];
+
+ for (i = 0; i < txq->tx_rs_thresh; i++)
+ rte_prefetch0((txep + i)->mbuf);
+
+ if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+ for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
+ rte_mempool_put(txep->mbuf->pool, txep->mbuf);
+ txep->mbuf = NULL;
+ }
+ } else {
+ for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
+ rte_pktmbuf_free_seg(txep->mbuf);
+ txep->mbuf = NULL;
+ }
+ }
+
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
+ txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
+ if (txq->tx_next_dd >= txq->nb_tx_desc)
+ txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
+
+ return txq->tx_rs_thresh;
+}
+
+/* Populate 4 descriptors with data from 4 mbufs */
+static inline void
+tx4(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkts)
+{
+ uint64_t dma_addr;
+ uint32_t i;
+
+ for (i = 0; i < 4; i++, txdp++, pkts++) {
+ dma_addr = rte_mbuf_data_iova(*pkts);
+ txdp->buf_addr = rte_cpu_to_le_64(dma_addr);
+ txdp->cmd_type_offset_bsz =
+ ice_build_ctob((uint32_t)ICE_TD_CMD, 0,
+ (*pkts)->data_len, 0);
+ }
+}
+
+/* Populate 1 descriptor with data from 1 mbuf */
+static inline void
+tx1(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkts)
+{
+ uint64_t dma_addr;
+
+ dma_addr = rte_mbuf_data_iova(*pkts);
+ txdp->buf_addr = rte_cpu_to_le_64(dma_addr);
+ txdp->cmd_type_offset_bsz =
+ ice_build_ctob((uint32_t)ICE_TD_CMD, 0,
+ (*pkts)->data_len, 0);
+}
+
+static inline void
+ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts,
+ uint16_t nb_pkts)
+{
+ volatile struct ice_tx_desc *txdp = &txq->tx_ring[txq->tx_tail];
+ struct ice_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
+ const int N_PER_LOOP = 4;
+ const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
+ int mainpart, leftover;
+ int i, j;
+
+ /**
+ * Process most of the packets in chunks of N pkts. Any
+ * leftover packets will get processed one at a time.
+ */
+ mainpart = nb_pkts & ((uint32_t)~N_PER_LOOP_MASK);
+ leftover = nb_pkts & ((uint32_t)N_PER_LOOP_MASK);
+ for (i = 0; i < mainpart; i += N_PER_LOOP) {
+ /* Copy N mbuf pointers to the S/W ring */
+ for (j = 0; j < N_PER_LOOP; ++j)
+ (txep + i + j)->mbuf = *(pkts + i + j);
+ tx4(txdp + i, pkts + i);
+ }
+
+ if (unlikely(leftover > 0)) {
+ for (i = 0; i < leftover; ++i) {
+ (txep + mainpart + i)->mbuf = *(pkts + mainpart + i);
+ tx1(txdp + mainpart + i, pkts + mainpart + i);
+ }
+ }
+}
+
+static inline uint16_t
+tx_xmit_pkts(struct ice_tx_queue *txq,
+ struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts)
+{
+ volatile struct ice_tx_desc *txr = txq->tx_ring;
+ uint16_t n = 0;
+
+ /**
+ * Begin scanning the H/W ring for done descriptors when the number
+ * of available descriptors drops below tx_free_thresh. For each done
+ * descriptor, free the associated buffer.
+ */
+ if (txq->nb_tx_free < txq->tx_free_thresh)
+ ice_tx_free_bufs(txq);
+
+ /* Use available descriptors only */
+ nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
+ if (unlikely(!nb_pkts))
+ return 0;
+
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
+ if ((txq->tx_tail + nb_pkts) > txq->nb_tx_desc) {
+ n = (uint16_t)(txq->nb_tx_desc - txq->tx_tail);
+ ice_tx_fill_hw_ring(txq, tx_pkts, n);
+ txr[txq->tx_next_rs].cmd_type_offset_bsz |=
+ rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) <<
+ ICE_TXD_QW1_CMD_S);
+ txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
+ txq->tx_tail = 0;
+ }
+
+ /* Fill hardware descriptor ring with mbuf data */
+ ice_tx_fill_hw_ring(txq, tx_pkts + n, (uint16_t)(nb_pkts - n));
+ txq->tx_tail = (uint16_t)(txq->tx_tail + (nb_pkts - n));
+
+ /* Determine whether the RS bit needs to be set */
+ if (txq->tx_tail > txq->tx_next_rs) {
+ txr[txq->tx_next_rs].cmd_type_offset_bsz |=
+ rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) <<
+ ICE_TXD_QW1_CMD_S);
+ txq->tx_next_rs =
+ (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh);
+ if (txq->tx_next_rs >= txq->nb_tx_desc)
+ txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
+ }
+
+ if (txq->tx_tail >= txq->nb_tx_desc)
+ txq->tx_tail = 0;
+
+ /* Update the tx tail register */
+ rte_wmb();
+ ICE_PCI_REG_WRITE(txq->qtx_tail, txq->tx_tail);
+
+ return nb_pkts;
+}
+
+static uint16_t
+ice_xmit_pkts_simple(void *tx_queue,
+ struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts)
+{
+ uint16_t nb_tx = 0;
+
+ if (likely(nb_pkts <= ICE_TX_MAX_BURST))
+ return tx_xmit_pkts((struct ice_tx_queue *)tx_queue,
+ tx_pkts, nb_pkts);
+
+ while (nb_pkts) {
+ uint16_t ret, num = (uint16_t)RTE_MIN(nb_pkts,
+ ICE_TX_MAX_BURST);
+
+ ret = tx_xmit_pkts((struct ice_tx_queue *)tx_queue,
+ &tx_pkts[nb_tx], num);
+ nb_tx = (uint16_t)(nb_tx + ret);
+ nb_pkts = (uint16_t)(nb_pkts - ret);
+ if (ret < num)
+ break;
+ }
+
+ return nb_tx;
+}
+
void __attribute__((cold))
ice_set_rx_function(struct rte_eth_dev *dev)
{
- dev->rx_pkt_burst = ice_recv_pkts;
+ PMD_INIT_FUNC_TRACE();
+ struct ice_adapter *ad =
+ ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+ if (dev->data->scattered_rx) {
+ /* Set the non-LRO scattered function */
+ PMD_INIT_LOG(DEBUG,
+ "Using a Scattered function on port %d.",
+ dev->data->port_id);
+ dev->rx_pkt_burst = ice_recv_scattered_pkts;
+ } else if (ad->rx_bulk_alloc_allowed) {
+ PMD_INIT_LOG(DEBUG,
+ "Rx Burst Bulk Alloc Preconditions are "
+ "satisfied. Rx Burst Bulk Alloc function "
+ "will be used on port %d.",
+ dev->data->port_id);
+ dev->rx_pkt_burst = ice_recv_pkts_bulk_alloc;
+ } else {
+ PMD_INIT_LOG(DEBUG,
+ "Rx Burst Bulk Alloc Preconditions are not "
+ "satisfied, Normal Rx will be used on port %d.",
+ dev->data->port_id);
+ dev->rx_pkt_burst = ice_recv_pkts;
+ }
}

/*********************************************************************
@@ -1612,8 +2258,18 @@ void __attribute__((cold))
void __attribute__((cold))
ice_set_tx_function(struct rte_eth_dev *dev)
{
+ struct ice_adapter *ad =
+ ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+ if (ad->tx_simple_allowed) {
+ PMD_INIT_LOG(DEBUG, "Simple tx finally be used.");
+ dev->tx_pkt_burst = ice_xmit_pkts_simple;
+ dev->tx_pkt_prepare = NULL;
+ } else {
+ PMD_INIT_LOG(DEBUG, "Normal tx finally be used.");
dev->tx_pkt_burst = ice_xmit_pkts;
dev->tx_pkt_prepare = ice_prep_pkts;
+ }
}

/* For each value it means, datasheet of hardware can tell more details
--
1.9.3
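
A note on how the Rx path selection above is triggered:
dev->data->scattered_rx follows the port configuration, so a setup whose
frames exceed a single mbuf data buffer lands on ice_recv_scattered_pkts().
A minimal sketch, assuming the 18.11-era ethdev configuration fields
(port_id and the length are illustrative):

	struct rte_eth_conf conf = {
		.rxmode = {
			/* larger than one mbuf data room => scattered Rx */
			.max_rx_pkt_len = 9000,
			.offloads = DEV_RX_OFFLOAD_JUMBO_FRAME |
				    DEV_RX_OFFLOAD_SCATTER,
		},
	};

	/* At start, ice_set_rx_function() then picks the scattered path;
	 * otherwise the bulk-alloc path is used when its preconditions
	 * hold, with the normal path as fallback.
	 */
	rte_eth_dev_configure(port_id, 1, 1, &conf);
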
Wenzhuo Lu
2018-11-23 06:56:18 UTC
Permalink
Add the following ops:
rx_descriptor_done
rx_descriptor_status
tx_descriptor_status

Signed-off-by: Wenzhuo Lu <***@intel.com>
Signed-off-by: Qiming Yang <***@intel.com>
Signed-off-by: Xiaoyun Li <***@intel.com>
Signed-off-by: Jingjing Wu <***@intel.com>
---
drivers/net/ice/ice_ethdev.c | 3 ++
drivers/net/ice/ice_lan_rxtx.c | 84 ++++++++++++++++++++++++++++++++++++++++++
drivers/net/ice/ice_rxtx.h | 3 ++
3 files changed, 90 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 21a251f..c9dca15 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -112,6 +112,9 @@ static int ice_xstats_get_names(struct rte_eth_dev *dev,
.get_eeprom_length = ice_get_eeprom_length,
.get_eeprom = ice_get_eeprom,
.rx_queue_count = ice_rx_queue_count,
+ .rx_descriptor_done = ice_rx_descriptor_done,
+ .rx_descriptor_status = ice_rx_descriptor_status,
+ .tx_descriptor_status = ice_tx_descriptor_status,
.stats_get = ice_stats_get,
.stats_reset = ice_stats_reset,
.xstats_get = ice_xstats_get,
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index 07ab677..4e6c0ff 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -1514,6 +1514,90 @@
return desc;
}

+int
+ice_rx_descriptor_done(void *rx_queue, uint16_t offset)
+{
+ volatile union ice_rx_desc *rxdp;
+ struct ice_rx_queue *rxq = rx_queue;
+ uint16_t desc;
+ int ret;
+
+ if (unlikely(offset >= rxq->nb_rx_desc)) {
+ PMD_DRV_LOG(ERR, "Invalid RX descriptor id %u", offset);
+ return 0;
+ }
+
+ desc = rxq->rx_tail + offset;
+ if (desc >= rxq->nb_rx_desc)
+ desc -= rxq->nb_rx_desc;
+
+ rxdp = &rxq->rx_ring[desc];
+
+ ret = !!(((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+ ICE_RXD_QW1_STATUS_M) >> ICE_RXD_QW1_STATUS_S) &
+ (1 << ICE_RX_DESC_STATUS_DD_S));
+
+ return ret;
+}
+
+int
+ice_rx_descriptor_status(void *rx_queue, uint16_t offset)
+{
+ struct ice_rx_queue *rxq = rx_queue;
+ volatile uint64_t *status;
+ uint64_t mask;
+ uint32_t desc;
+
+ if (unlikely(offset >= rxq->nb_rx_desc))
+ return -EINVAL;
+
+ if (offset >= rxq->nb_rx_desc - rxq->nb_rx_hold)
+ return RTE_ETH_RX_DESC_UNAVAIL;
+
+ desc = rxq->rx_tail + offset;
+ if (desc >= rxq->nb_rx_desc)
+ desc -= rxq->nb_rx_desc;
+
+ status = &rxq->rx_ring[desc].wb.qword1.status_error_len;
+ mask = rte_cpu_to_le_64((1ULL << ICE_RX_DESC_STATUS_DD_S) <<
+ ICE_RXD_QW1_STATUS_S);
+ if (*status & mask)
+ return RTE_ETH_RX_DESC_DONE;
+
+ return RTE_ETH_RX_DESC_AVAIL;
+}
+
+int
+ice_tx_descriptor_status(void *tx_queue, uint16_t offset)
+{
+ struct ice_tx_queue *txq = tx_queue;
+ volatile uint64_t *status;
+ uint64_t mask, expect;
+ uint32_t desc;
+
+ if (unlikely(offset >= txq->nb_tx_desc))
+ return -EINVAL;
+
+ desc = txq->tx_tail + offset;
+ /* go to next desc that has the RS bit */
+ desc = ((desc + txq->tx_rs_thresh - 1) / txq->tx_rs_thresh) *
+ txq->tx_rs_thresh;
+ if (desc >= txq->nb_tx_desc) {
+ desc -= txq->nb_tx_desc;
+ if (desc >= txq->nb_tx_desc)
+ desc -= txq->nb_tx_desc;
+ }
+
+ status = &txq->tx_ring[desc].cmd_type_offset_bsz;
+ mask = rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M);
+ expect = rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE <<
+ ICE_TXD_QW1_DTYPE_S);
+ if ((*status & mask) == expect)
+ return RTE_ETH_TX_DESC_DONE;
+
+ return RTE_ETH_TX_DESC_FULL;
+}
+
void
ice_clear_queues(struct rte_eth_dev *dev)
{
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index e0218b3..12ad383 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -143,6 +143,9 @@ uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
void ice_set_tx_function(struct rte_eth_dev *dev);
uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int ice_rx_descriptor_done(void *rx_queue, uint16_t offset);
+int ice_rx_descriptor_status(void *rx_queue, uint16_t offset);
+int ice_tx_descriptor_status(void *tx_queue, uint16_t offset);
void ice_set_default_ptype_table(struct rte_eth_dev *dev);
const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
--
1.9.3
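
These callbacks back the generic rte_eth_rx_descriptor_status() and
rte_eth_tx_descriptor_status() helpers, so an application can probe ring
progress without touching packets. A minimal sketch (port_id, queue_id and
the offsets are illustrative):

	/* is a packet ready at the current Rx tail? */
	if (rte_eth_rx_descriptor_status(port_id, queue_id, 0) ==
	    RTE_ETH_RX_DESC_DONE)
		; /* at least one packet can be received */

	/* has the Tx descriptor 32 slots ahead been written back? */
	if (rte_eth_tx_descriptor_status(port_id, queue_id, 32) ==
	    RTE_ETH_TX_DESC_DONE)
		; /* that slot's mbuf has been sent and can be reclaimed */
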
Wenzhuo Lu
2018-11-23 06:56:19 UTC
Permalink
Signed-off-by: Wenzhuo Lu <***@intel.com>
---
MAINTAINERS | 1 +
doc/guides/nics/features/ice.ini | 39 ++++++++++++++++++++
doc/guides/nics/ice.rst | 78 ++++++++++++++++++++++++++++++++++++++++
3 files changed, 118 insertions(+)
create mode 100644 doc/guides/nics/features/ice.ini
create mode 100644 doc/guides/nics/ice.rst

diff --git a/MAINTAINERS b/MAINTAINERS
index 00a5e03..2e1a537 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -598,6 +598,7 @@ M: Qiming Yang <***@intel.com>
M: Wenzhuo Lu <***@intel.com>
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/ice/
+F: doc/guides/nics/features/ice*.ini

Marvell mvpp2
M: Tomasz Duszynski <***@semihalf.com>
diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
new file mode 100644
index 0000000..2be52ca
--- /dev/null
+++ b/doc/guides/nics/features/ice.ini
@@ -0,0 +1,39 @@
+;
+; Supported features of the 'ice' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Speed capabilities = Y
+Link status = Y
+Link status event = Y
+Rx interrupt = Y
+Queue start/stop = Y
+MTU update = Y
+Jumbo frame = Y
+Scattered Rx = Y
+TSO = Y
+Unicast MAC filter = Y
+Multicast MAC filter = Y
+RSS hash = Y
+RSS key update = Y
+RSS reta update = Y
+VLAN filter = Y
+CRC offload = Y
+VLAN offload = Y
+QinQ offload = Y
+L3 checksum offload = Y
+L4 checksum offload = Y
+Packet type parsing = Y
+Rx descriptor status = Y
+Tx descriptor status = Y
+Basic stats = Y
+Extended stats = Y
+FW version = Y
+Module EEPROM dump = Y
+Multiprocess aware = Y
+BSD nic_uio = Y
+Linux UIO = Y
+Linux VFIO = Y
+x86-32 = Y
+x86-64 = Y
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
new file mode 100644
index 0000000..12a12d2
--- /dev/null
+++ b/doc/guides/nics/ice.rst
@@ -0,0 +1,78 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2018 Intel Corporation.
+
+ICE Poll Mode Driver
+====================
+
+The ice PMD (librte_pmd_ice) provides poll mode driver support for
+10/25 Gbps Intel® Ethernet 810 Series Network Adapters based on
+the Intel Ethernet Controller E810.
+
+
+Prerequisites
+-------------
+
+- Identify your adapter using `Intel Support
+ <http://www.intel.com/support>`_ and get the latest NVM/FW images.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+
+- To get better performance on Intel platforms, please follow the "How to get best performance with NICs on Intel platforms"
+ section of the :ref:`Getting Started Guide for Linux <linux_gsg>`.
+
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_ICE_PMD`` (default ``y``)
+
+ Toggle compilation of the ``librte_pmd_ice`` driver.
+
+- ``CONFIG_RTE_LIBRTE_ICE_DEBUG_*`` (default ``n``)
+
+ Toggle display of generic debugging messages.
+
+- ``CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC`` (default ``y``)
+
+ Toggle bulk allocation for RX.
+
+- ``CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC`` (default ``n``)
+
+ Toggle the use of 16-byte RX descriptors; by default the RX descriptor is 32 bytes.
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
+
+
+Sample Application Notes
+------------------------
+
+VLAN filter
+~~~~~~~~~~~
+
+VLAN filtering only works when promiscuous mode is off.
+
+To start ``testpmd`` and add VLAN 10 to port 0:
+
+.. code-block:: console
+
+ ./app/testpmd -l 0-15 -n 4 -- -i
+ ...
+
+ testpmd> rx_vlan add 10 0
+
+
+Limitations or Known issues
+---------------------------
+
+N/A
--
1.9.3
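
For reference, the options documented above live in DPDK's
config/common_base; a build that wants 16-byte Rx descriptors and no bulk
allocation would set (an illustrative excerpt of that file's format):

	CONFIG_RTE_LIBRTE_ICE_PMD=y
	CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC=n
	CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC=y
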
Varghese, Vipin
2018-11-23 07:45:29 UTC
Permalink
Hi Wenzhuo

<snipped>
Post by Wenzhuo Lu
---
MAINTAINERS | 1 +
doc/guides/nics/features/ice.ini | 39 ++++++++++++++++++++
doc/guides/nics/ice.rst | 78
++++++++++++++++++++++++++++++++++++++++
3 files changed, 118 insertions(+)
create mode 100644 doc/guides/nics/features/ice.ini create mode 100644
doc/guides/nics/ice.rst
diff --git a/MAINTAINERS b/MAINTAINERS
index 00a5e03..2e1a537 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -598,6 +598,7 @@ M: Qiming Yang <***@intel.com>
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/ice/
+F: doc/guides/nics/features/ice*.ini
Marvell mvpp2
diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
new file mode 100644
index 0000000..2be52ca
--- /dev/null
+++ b/doc/guides/nics/features/ice.ini
@@ -0,0 +1,39 @@
+;
+; Supported features of the 'ice' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Speed capabilities = Y
+Link status = Y
+Link status event = Y
+Rx interrupt = Y
+Queue start/stop = Y
+MTU update = Y
+Jumbo frame = Y
+Scattered Rx = Y
+TSO = Y
+Unicast MAC filter = Y
+Multicast MAC filter = Y
+RSS hash = Y
+RSS key update = Y
+RSS reta update = Y
+VLAN filter = Y
+CRC offload = Y
+VLAN offload = Y
+QinQ offload = Y
+L3 checksum offload = Y
+L4 checksum offload = Y
+Packet type parsing = Y
+Rx descriptor status = Y
+Tx descriptor status = Y
+Basic stats = Y
+Extended stats = Y
+FW version = Y
+Module EEPROM dump = Y
+Multiprocess aware = Y
+BSD nic_uio = Y
+Linux UIO = Y
+Linux VFIO = Y
+x86-32 = Y
+x86-64 = Y
Are all the features also supported for secondary multiprocess?
Post by Wenzhuo Lu
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst new file mode
100644 index 0000000..12a12d2
--- /dev/null
+++ b/doc/guides/nics/ice.rst
@@ -0,0 +1,78 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2018 Intel Corporation.
+
+ICE Poll Mode Driver
+======================
+
+The ice PMD (librte_pmd_ice) provides poll mode driver support for
+10/25 Gbps Intel® Ethernet 810 Series Network Adapters based on the
+Intel Ethernet Controller E810.
+
+
+Prerequisites
+-------------
+
+- Identifying your adapter using `Intel Support
+ <http://www.intel.com/support>`_ and get the latest NVM/FW images.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup
the basic DPDK environment.
+
+- To get better performance on Intel platforms, please follow the "How to get
best performance with NICs on Intel platforms"
+ section of the :ref:`Getting Started Guide for Linux <linux_gsg>`.
+
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_ICE_PMD`` (default ``y``)
+
+ Toggle compilation of the ``librte_pmd_ice`` driver.
+
+- ``CONFIG_RTE_LIBRTE_ICE_DEBUG_*`` (default ``n``)
+
+ Toggle display of generic debugging messages.
+
+- ``CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC`` (default ``y``)
+
+ Toggle bulk allocation for RX.
+
+- ``CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC`` (default ``n``)
+
+ Toggle to use a 16-byte RX descriptor, by default the RX descriptor is 32 byte.
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC
+<pmd_build_and_test>` for details.
+
+
+Sample Application Notes
+------------------------
+
+Vlan filter
+~~~~~~~~~~~
+
+Vlan filter only works when Promiscuous mode is off.
+
+
+.. code-block:: console
+
+ ./app/testpmd -l 0-15 -n 4 -- -i
+ ...
+
+ testpmd> rx_vlan add 10 0
+
+
+Limitations or Known issues
+---------------------------
+
If there are features not supported in secondary, can you mention the same here?
Yang, Qiming
2018-11-26 03:42:56 UTC
Permalink
Hi, Vipin
Not all features are enabled for the secondary process. We add a secondary-process check in each op, like below,

if (rte_eal_process_type() == RTE_PROC_SECONDARY)
return -E_RTE_SECONDARY;
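
A minimal sketch of that guard pattern, using a hypothetical op name
ice_example_op (the real ops in ice_ethdev.c follow the same shape):

	static int
	ice_example_op(struct rte_eth_dev *dev __rte_unused)
	{
		/* device (re)configuration must only run in the primary */
		if (rte_eal_process_type() == RTE_PROC_SECONDARY)
			return -E_RTE_SECONDARY;

		/* ... primary-only work ... */
		return 0;
	}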

Qiming

-----Original Message-----
From: dev [mailto:dev-***@dpdk.org] On Behalf Of Varghese, Vipin
Sent: Friday, November 23, 2018 3:45 PM
To: Lu, Wenzhuo <***@intel.com>; ***@dpdk.org
Cc: Lu, Wenzhuo <***@intel.com>
Subject: Re: [dpdk-dev] [PATCH 19/19] doc: add ICE description and update release note

Hi Wenzhuo

<snipped>
Post by Wenzhuo Lu
<snipped>
Are all the features also supported for secondary multiprocess?
Post by Wenzhuo Lu
<snipped>
If there are features not supported in secondary, can you mention the same here?
Varghese, Vipin
2018-11-26 03:59:49 UTC
Permalink
Hi Qiming,

Thanks for the update. Please feel free to also document the unsupported features as limitations in the release notes.

Note: do we plan to add test cases (unit) for the same?
Post by Yang, Qiming
-----Original Message-----
From: Yang, Qiming
Sent: Monday, November 26, 2018 9:13 AM
Subject: RE: [dpdk-dev] [PATCH 19/19] doc: add ICE description and update
release note
Hi, Vipin
Not all features are enabled for the secondary process. We add a
secondary-process check in each op, like below,
if (rte_eal_process_type() == RTE_PROC_SECONDARY)
return -E_RTE_SECONDARY;
Qiming
<snipped>
--
Wenzhuo Lu
2018-11-23 06:56:01 UTC
Permalink
Current version 2018.10.30.

Signed-off-by: Wenzhuo Lu <***@intel.com>
---
MAINTAINERS | 6 +
drivers/net/ice/base/README | 22 +
drivers/net/ice/base/ice_acl.c | 4 +
drivers/net/ice/base/ice_acl.h | 7 +
drivers/net/ice/base/ice_acl_ctrl.c | 4 +
drivers/net/ice/base/ice_adminq_cmd.h | 1724 ++++++
drivers/net/ice/base/ice_alloc.h | 22 +
drivers/net/ice/base/ice_bitops.h | 233 +
drivers/net/ice/base/ice_common.c | 3332 ++++++++++
drivers/net/ice/base/ice_common.h | 159 +
drivers/net/ice/base/ice_controlq.c | 1098 ++++
drivers/net/ice/base/ice_controlq.h | 97 +
drivers/net/ice/base/ice_devids.h | 17 +
drivers/net/ice/base/ice_flex_pipe.c | 5 +
drivers/net/ice/base/ice_flex_pipe.h | 83 +
drivers/net/ice/base/ice_flex_type.h | 19 +
drivers/net/ice/base/ice_flow.c | 4 +
drivers/net/ice/base/ice_flow.h | 8 +
drivers/net/ice/base/ice_hw_autogen.h | 9815 ++++++++++++++++++++++++++++++
drivers/net/ice/base/ice_impl_guide.c | 167 +
drivers/net/ice/base/ice_lan_tx_rx.h | 2290 +++++++
drivers/net/ice/base/ice_nvm.c | 388 ++
drivers/net/ice/base/ice_osdep.h | 491 ++
drivers/net/ice/base/ice_protocol_type.h | 237 +
drivers/net/ice/base/ice_sbq_cmd.h | 93 +
drivers/net/ice/base/ice_sched.c | 1715 ++++++
drivers/net/ice/base/ice_sched.h | 68 +
drivers/net/ice/base/ice_sriov.c | 129 +
drivers/net/ice/base/ice_sriov.h | 35 +
drivers/net/ice/base/ice_status.h | 45 +
drivers/net/ice/base/ice_switch.c | 2415 ++++++++
drivers/net/ice/base/ice_switch.h | 320 +
drivers/net/ice/base/ice_type.h | 789 +++
drivers/net/ice/base/virtchnl.h | 787 +++
34 files changed, 26628 insertions(+)
create mode 100644 drivers/net/ice/base/README
create mode 100644 drivers/net/ice/base/ice_acl.c
create mode 100644 drivers/net/ice/base/ice_acl.h
create mode 100644 drivers/net/ice/base/ice_acl_ctrl.c
create mode 100644 drivers/net/ice/base/ice_adminq_cmd.h
create mode 100644 drivers/net/ice/base/ice_alloc.h
create mode 100644 drivers/net/ice/base/ice_bitops.h
create mode 100644 drivers/net/ice/base/ice_common.c
create mode 100644 drivers/net/ice/base/ice_common.h
create mode 100644 drivers/net/ice/base/ice_controlq.c
create mode 100644 drivers/net/ice/base/ice_controlq.h
create mode 100644 drivers/net/ice/base/ice_devids.h
create mode 100644 drivers/net/ice/base/ice_flex_pipe.c
create mode 100644 drivers/net/ice/base/ice_flex_pipe.h
create mode 100644 drivers/net/ice/base/ice_flex_type.h
create mode 100644 drivers/net/ice/base/ice_flow.c
create mode 100644 drivers/net/ice/base/ice_flow.h
create mode 100644 drivers/net/ice/base/ice_hw_autogen.h
create mode 100644 drivers/net/ice/base/ice_impl_guide.c
create mode 100644 drivers/net/ice/base/ice_lan_tx_rx.h
create mode 100644 drivers/net/ice/base/ice_nvm.c
create mode 100644 drivers/net/ice/base/ice_osdep.h
create mode 100644 drivers/net/ice/base/ice_protocol_type.h
create mode 100644 drivers/net/ice/base/ice_sbq_cmd.h
create mode 100644 drivers/net/ice/base/ice_sched.c
create mode 100644 drivers/net/ice/base/ice_sched.h
create mode 100644 drivers/net/ice/base/ice_sriov.c
create mode 100644 drivers/net/ice/base/ice_sriov.h
create mode 100644 drivers/net/ice/base/ice_status.h
create mode 100644 drivers/net/ice/base/ice_switch.c
create mode 100644 drivers/net/ice/base/ice_switch.h
create mode 100644 drivers/net/ice/base/ice_type.h
create mode 100644 drivers/net/ice/base/virtchnl.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 19353ac..00a5e03 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -593,6 +593,12 @@ F: drivers/net/ifc/
F: doc/guides/nics/ifc.rst
F: doc/guides/nics/features/ifc*.ini

+Intel ice
+M: Qiming Yang <***@intel.com>
+M: Wenzhuo Lu <***@intel.com>
+T: git://dpdk.org/next/dpdk-next-net-intel
+F: drivers/net/ice/
+
Marvell mvpp2
M: Tomasz Duszynski <***@semihalf.com>
M: Dmitri Epshtein <***@marvell.com>
diff --git a/drivers/net/ice/base/README b/drivers/net/ice/base/README
new file mode 100644
index 0000000..d8c7a9b
--- /dev/null
+++ b/drivers/net/ice/base/README
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+Intel® ICE driver
+==================
+
+This directory contains the source code of the FreeBSD ice driver,
+version 2018.10.30, released by the team that develops the base
+drivers for the ice NICs. The base/ directory contains the original
+source package.
+This driver is valid for the product(s) listed below:
+
+* Intel® Ethernet Network Adapters E810
+
+Updating the driver
+===================
+
+NOTE: The source code in this directory should not be modified apart from
+the following file(s):
+
+ ice_osdep.h
diff --git a/drivers/net/ice/base/ice_acl.c b/drivers/net/ice/base/ice_acl.c
new file mode 100644
index 0000000..49b22bc
--- /dev/null
+++ b/drivers/net/ice/base/ice_acl.c
@@ -0,0 +1,4 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
diff --git a/drivers/net/ice/base/ice_acl.h b/drivers/net/ice/base/ice_acl.h
new file mode 100644
index 0000000..730b593
--- /dev/null
+++ b/drivers/net/ice/base/ice_acl.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_ACL_H_
+#define _ICE_ACL_H_
+#endif /* _ICE_ACL_H_ */
diff --git a/drivers/net/ice/base/ice_acl_ctrl.c b/drivers/net/ice/base/ice_acl_ctrl.c
new file mode 100644
index 0000000..49b22bc
--- /dev/null
+++ b/drivers/net/ice/base/ice_acl_ctrl.c
@@ -0,0 +1,4 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
new file mode 100644
index 0000000..e711502
--- /dev/null
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -0,0 +1,1724 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_ADMINQ_CMD_H_
+#define _ICE_ADMINQ_CMD_H_
+
+/* This header file defines the Admin Queue commands, error codes and
+ * descriptor format. It is shared between Firmware and Software.
+ */
+
+
+#define ICE_MAX_VSI 768
+#define ICE_AQC_TOPO_MAX_LEVEL_NUM 0x9
+#define ICE_AQ_SET_MAC_FRAME_SIZE_MAX 9728
+
+
+struct ice_aqc_generic {
+ __le32 param0;
+ __le32 param1;
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+
+/* Get version (direct 0x0001) */
+struct ice_aqc_get_ver {
+ __le32 rom_ver;
+ __le32 fw_build;
+ u8 fw_branch;
+ u8 fw_major;
+ u8 fw_minor;
+ u8 fw_patch;
+ u8 api_branch;
+ u8 api_major;
+ u8 api_minor;
+ u8 api_patch;
+};
+
+
+
+/* Queue Shutdown (direct 0x0003) */
+struct ice_aqc_q_shutdown {
+ __le32 driver_unloading;
+#define ICE_AQC_DRIVER_UNLOADING BIT(0)
+ u8 reserved[12];
+};
+
+
+
+
+/* Request resource ownership (direct 0x0008)
+ * Release resource ownership (direct 0x0009)
+ */
+struct ice_aqc_req_res {
+ __le16 res_id;
+#define ICE_AQC_RES_ID_NVM 1
+#define ICE_AQC_RES_ID_SDP 2
+#define ICE_AQC_RES_ID_CHNG_LOCK 3
+#define ICE_AQC_RES_ID_GLBL_LOCK 4
+ __le16 access_type;
+#define ICE_AQC_RES_ACCESS_READ 1
+#define ICE_AQC_RES_ACCESS_WRITE 2
+
+ /* Upon successful completion, FW writes this value and driver is
+ * expected to release resource before timeout. This value is provided
+ * in milliseconds.
+ */
+ __le32 timeout;
+#define ICE_AQ_RES_NVM_READ_DFLT_TIMEOUT_MS 3000
+#define ICE_AQ_RES_NVM_WRITE_DFLT_TIMEOUT_MS 180000
+#define ICE_AQ_RES_CHNG_LOCK_DFLT_TIMEOUT_MS 1000
+#define ICE_AQ_RES_GLBL_LOCK_DFLT_TIMEOUT_MS 3000
+ /* For SDP: pin id of the SDP */
+ __le32 res_number;
+ /* Status is only used for ICE_AQC_RES_ID_GLBL_LOCK */
+ __le16 status;
+#define ICE_AQ_RES_GLBL_SUCCESS 0
+#define ICE_AQ_RES_GLBL_IN_PROG 1
+#define ICE_AQ_RES_GLBL_DONE 2
+ u8 reserved[2];
+};
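As an aside for reviewers, the ownership handshake this descriptor implements
looks roughly like the sketch below (illustrative only;
ice_fill_dflt_direct_cmd_desc and CPU_TO_LE16 are assumed to be the usual
base-code helpers):

    struct ice_aq_desc desc;
    struct ice_aqc_req_res *cmd = &desc.params.res_owner;

    ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_req_res);
    cmd->res_id = CPU_TO_LE16(ICE_AQC_RES_ID_NVM);
    cmd->access_type = CPU_TO_LE16(ICE_AQC_RES_ACCESS_READ);
    /* On success, FW fills cmd->timeout (in ms); the driver must release
     * the resource (ice_aqc_opc_release_res) before that time expires. */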
+
+
+/* Get function capabilities (indirect 0x000A)
+ * Get device capabilities (indirect 0x000B)
+ */
+struct ice_aqc_list_caps {
+ u8 cmd_flags;
+ u8 pf_index;
+ u8 reserved[2];
+ __le32 count;
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+
+/* Device/Function buffer entry, repeated per reported capability */
+struct ice_aqc_list_caps_elem {
+ __le16 cap;
+#define ICE_AQC_CAPS_VALID_FUNCTIONS 0x0005
+#define ICE_AQC_CAPS_SRIOV 0x0012
+#define ICE_AQC_CAPS_VF 0x0013
+#define ICE_AQC_CAPS_VSI 0x0017
+#define ICE_AQC_CAPS_RSS 0x0040
+#define ICE_AQC_CAPS_RXQS 0x0041
+#define ICE_AQC_CAPS_TXQS 0x0042
+#define ICE_AQC_CAPS_MSIX 0x0043
+#define ICE_AQC_CAPS_MAX_MTU 0x0047
+
+ u8 major_ver;
+ u8 minor_ver;
+ /* Number of resources described by this capability */
+ __le32 number;
+ /* Only meaningful for some types of resources */
+ __le32 logical_id;
+ /* Only meaningful for some types of resources */
+ __le32 phys_id;
+ __le64 rsvd1;
+ __le64 rsvd2;
+};
+
+
+/* Manage MAC address, read command - indirect (0x0107)
+ * This struct is also used for the response
+ */
+struct ice_aqc_manage_mac_read {
+ __le16 flags; /* Zeroed by device driver */
+#define ICE_AQC_MAN_MAC_LAN_ADDR_VALID BIT(4)
+#define ICE_AQC_MAN_MAC_SAN_ADDR_VALID BIT(5)
+#define ICE_AQC_MAN_MAC_PORT_ADDR_VALID BIT(6)
+#define ICE_AQC_MAN_MAC_WOL_ADDR_VALID BIT(7)
+#define ICE_AQC_MAN_MAC_READ_S 4
+#define ICE_AQC_MAN_MAC_READ_M (0xF << ICE_AQC_MAN_MAC_READ_S)
+ u8 lport_num;
+ u8 lport_num_valid;
+#define ICE_AQC_MAN_MAC_PORT_NUM_IS_VALID BIT(0)
+ u8 num_addr; /* Used in response */
+ u8 reserved[3];
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+
+/* Response buffer format for manage MAC read command */
+struct ice_aqc_manage_mac_read_resp {
+ u8 lport_num;
+ u8 addr_type;
+#define ICE_AQC_MAN_MAC_ADDR_TYPE_LAN 0
+#define ICE_AQC_MAN_MAC_ADDR_TYPE_WOL 1
+ u8 mac_addr[ETH_ALEN];
+};
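To illustrate how this response buffer is consumed (a sketch; 'buf' and 'mac'
are illustrative variables and 'cmd' is the command struct above):

    struct ice_aqc_manage_mac_read_resp *resp = buf;
    u8 i;

    for (i = 0; i < cmd->num_addr; i++) {
        if (resp[i].addr_type == ICE_AQC_MAN_MAC_ADDR_TYPE_LAN) {
            memcpy(mac, resp[i].mac_addr, ETH_ALEN);
            break;
        }
    }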
+
+
+/* Manage MAC address, write command - direct (0x0108) */
+struct ice_aqc_manage_mac_write {
+ u8 port_num;
+ u8 flags;
+#define ICE_AQC_MAN_MAC_WR_MC_MAG_EN BIT(0)
+#define ICE_AQC_MAN_MAC_WR_WOL_LAA_PFR_KEEP BIT(1)
+#define ICE_AQC_MAN_MAC_WR_S 6
+#define ICE_AQC_MAN_MAC_WR_M (3 << ICE_AQC_MAN_MAC_WR_S)
+#define ICE_AQC_MAN_MAC_UPDATE_LAA 0
+#define ICE_AQC_MAN_MAC_UPDATE_LAA_WOL (BIT(0) << ICE_AQC_MAN_MAC_WR_S)
+ /* High 16 bits of MAC address in big endian order */
+ __be16 sah;
+ /* Low 32 bits of MAC address in big endian order */
+ __be32 sal;
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+
+/* Clear PXE Command and response (direct 0x0110) */
+struct ice_aqc_clear_pxe {
+ u8 rx_cnt;
+#define ICE_AQC_CLEAR_PXE_RX_CNT 0x2
+ u8 reserved[15];
+};
+
+
+/* Get switch configuration (0x0200) */
+struct ice_aqc_get_sw_cfg {
+ /* Reserved for command and copy of request flags for response */
+ __le16 flags;
+ /* 'First desc' in case of command and 'next_elem' in case of response.
+ * In a response, a non-zero value means that not all of the
+ * configuration was returned and that a new command shall be sent
+ * with this value in the 'first desc' field.
+ */
+ __le16 element;
+ /* Reserved for command, only used for response */
+ __le16 num_elems;
+ __le16 rsvd;
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+
+/* Each entry in the response buffer is of the following type: */
+struct ice_aqc_get_sw_cfg_resp_elem {
+ /* VSI/Port Number */
+ __le16 vsi_port_num;
+#define ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_S 0
+#define ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_M \
+ (0x3FF << ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_S)
+#define ICE_AQC_GET_SW_CONF_RESP_TYPE_S 14
+#define ICE_AQC_GET_SW_CONF_RESP_TYPE_M (0x3 << ICE_AQC_GET_SW_CONF_RESP_TYPE_S)
+#define ICE_AQC_GET_SW_CONF_RESP_PHYS_PORT 0
+#define ICE_AQC_GET_SW_CONF_RESP_VIRT_PORT 1
+#define ICE_AQC_GET_SW_CONF_RESP_VSI 2
+
+ /* SWID VSI/Port belongs to */
+ __le16 swid;
+
+ /* Bit 14..0 : PF/VF number VSI belongs to
+ * Bit 15 : VF indication bit
+ */
+ __le16 pf_vf_num;
+#define ICE_AQC_GET_SW_CONF_RESP_FUNC_NUM_S 0
+#define ICE_AQC_GET_SW_CONF_RESP_FUNC_NUM_M \
+ (0x7FFF << ICE_AQC_GET_SW_CONF_RESP_FUNC_NUM_S)
+#define ICE_AQC_GET_SW_CONF_RESP_IS_VF BIT(15)
+};
+
+
+/* The response buffer is as follows. Note that the length of the
+ * elements array varies with the length of the command response.
+ */
+struct ice_aqc_get_sw_cfg_resp {
+ struct ice_aqc_get_sw_cfg_resp_elem elements[1];
+};
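The continuation protocol described in the 'element' comment amounts to a
loop of this shape (sketch; the actual send step and byte-order conversions
are elided):

    __le16 next_elem = 0;

    do {
        cmd->element = next_elem;  /* 'first desc' on input */
        /* ...send the command; the buffer receives num_elems entries... */
        next_elem = cmd->element;  /* 'next_elem' on output */
    } while (next_elem);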
+
+
+
+/* These resource type defines are used for all switch resource
+ * commands where a resource type is required, such as:
+ * Get Resource Allocation command (indirect 0x0204)
+ * Allocate Resources command (indirect 0x0208)
+ * Free Resources command (indirect 0x0209)
+ * Get Allocated Resource Descriptors Command (indirect 0x020A)
+ */
+#define ICE_AQC_RES_TYPE_VSI_LIST_REP 0x03
+#define ICE_AQC_RES_TYPE_VSI_LIST_PRUNE 0x04
+
+#define ICE_AQC_RES_TYPE_FLAG_SHARED BIT(7)
+#define ICE_AQC_RES_TYPE_FLAG_SCAN_BOTTOM BIT(12)
+#define ICE_AQC_RES_TYPE_FLAG_IGNORE_INDEX BIT(13)
+
+#define ICE_AQC_RES_TYPE_FLAG_DEDICATED 0x00
+
+
+
+/* Allocate Resources command (indirect 0x0208)
+ * Free Resources command (indirect 0x0209)
+ */
+struct ice_aqc_alloc_free_res_cmd {
+ __le16 num_entries; /* Number of Resource entries */
+ u8 reserved[6];
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+
+/* Resource descriptor */
+struct ice_aqc_res_elem {
+ union {
+ __le16 sw_resp;
+ __le16 flu_resp;
+ } e;
+};
+
+
+/* Buffer for Allocate/Free Resources commands */
+struct ice_aqc_alloc_free_res_elem {
+ __le16 res_type; /* Types defined above cmd 0x0204 */
+#define ICE_AQC_RES_TYPE_VSI_PRUNE_LIST_S 8
+#define ICE_AQC_RES_TYPE_VSI_PRUNE_LIST_M \
+ (0xF << ICE_AQC_RES_TYPE_VSI_PRUNE_LIST_S)
+ __le16 num_elems;
+ struct ice_aqc_res_elem elem[1];
+};
+
+
+
+
+/* Add VSI (indirect 0x0210)
+ * Update VSI (indirect 0x0211)
+ * Get VSI (indirect 0x0212)
+ * Free VSI (indirect 0x0213)
+ */
+struct ice_aqc_add_get_update_free_vsi {
+ __le16 vsi_num;
+#define ICE_AQ_VSI_NUM_S 0
+#define ICE_AQ_VSI_NUM_M (0x03FF << ICE_AQ_VSI_NUM_S)
+#define ICE_AQ_VSI_IS_VALID BIT(15)
+ __le16 cmd_flags;
+#define ICE_AQ_VSI_KEEP_ALLOC 0x1
+ u8 vf_id;
+ u8 reserved;
+ __le16 vsi_flags;
+#define ICE_AQ_VSI_TYPE_S 0
+#define ICE_AQ_VSI_TYPE_M (0x3 << ICE_AQ_VSI_TYPE_S)
+#define ICE_AQ_VSI_TYPE_VF 0x0
+#define ICE_AQ_VSI_TYPE_VMDQ2 0x1
+#define ICE_AQ_VSI_TYPE_PF 0x2
+#define ICE_AQ_VSI_TYPE_EMP_MNG 0x3
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+
+/* Response descriptor for:
+ * Add VSI (indirect 0x0210)
+ * Update VSI (indirect 0x0211)
+ * Free VSI (indirect 0x0213)
+ */
+struct ice_aqc_add_update_free_vsi_resp {
+ __le16 vsi_num;
+ __le16 ext_status;
+ __le16 vsi_used;
+ __le16 vsi_free;
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+
+
+struct ice_aqc_vsi_props {
+ __le16 valid_sections;
+#define ICE_AQ_VSI_PROP_SW_VALID BIT(0)
+#define ICE_AQ_VSI_PROP_SECURITY_VALID BIT(1)
+#define ICE_AQ_VSI_PROP_VLAN_VALID BIT(2)
+#define ICE_AQ_VSI_PROP_OUTER_TAG_VALID BIT(3)
+#define ICE_AQ_VSI_PROP_INGRESS_UP_VALID BIT(4)
+#define ICE_AQ_VSI_PROP_EGRESS_UP_VALID BIT(5)
+#define ICE_AQ_VSI_PROP_RXQ_MAP_VALID BIT(6)
+#define ICE_AQ_VSI_PROP_Q_OPT_VALID BIT(7)
+#define ICE_AQ_VSI_PROP_OUTER_UP_VALID BIT(8)
+#define ICE_AQ_VSI_PROP_FLOW_DIR_VALID BIT(11)
+#define ICE_AQ_VSI_PROP_PASID_VALID BIT(12)
+ /* switch section */
+ u8 sw_id;
+ u8 sw_flags;
+#define ICE_AQ_VSI_SW_FLAG_ALLOW_LB BIT(5)
+#define ICE_AQ_VSI_SW_FLAG_LOCAL_LB BIT(6)
+#define ICE_AQ_VSI_SW_FLAG_SRC_PRUNE BIT(7)
+ u8 sw_flags2;
+#define ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_S 0
+#define ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_M \
+ (0xF << ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_S)
+#define ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA BIT(0)
+#define ICE_AQ_VSI_SW_FLAG_LAN_ENA BIT(4)
+ u8 veb_stat_id;
+#define ICE_AQ_VSI_SW_VEB_STAT_ID_S 0
+#define ICE_AQ_VSI_SW_VEB_STAT_ID_M (0x1F << ICE_AQ_VSI_SW_VEB_STAT_ID_S)
+#define ICE_AQ_VSI_SW_VEB_STAT_ID_VALID BIT(5)
+ /* security section */
+ u8 sec_flags;
+#define ICE_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD BIT(0)
+#define ICE_AQ_VSI_SEC_FLAG_ENA_MAC_ANTI_SPOOF BIT(2)
+#define ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S 4
+#define ICE_AQ_VSI_SEC_TX_PRUNE_ENA_M (0xF << ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S)
+#define ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA BIT(0)
+ u8 sec_reserved;
+ /* VLAN section */
+ __le16 pvid; /* VLANS include priority bits */
+ u8 pvlan_reserved[2];
+ u8 vlan_flags;
+#define ICE_AQ_VSI_VLAN_MODE_S 0
+#define ICE_AQ_VSI_VLAN_MODE_M (0x3 << ICE_AQ_VSI_VLAN_MODE_S)
+#define ICE_AQ_VSI_VLAN_MODE_UNTAGGED 0x1
+#define ICE_AQ_VSI_VLAN_MODE_TAGGED 0x2
+#define ICE_AQ_VSI_VLAN_MODE_ALL 0x3
+#define ICE_AQ_VSI_PVLAN_INSERT_PVID BIT(2)
+#define ICE_AQ_VSI_VLAN_EMOD_S 3
+#define ICE_AQ_VSI_VLAN_EMOD_M (0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_STR_BOTH (0x0 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_STR_UP (0x1 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_STR (0x2 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_NOTHING (0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
+ u8 pvlan_reserved2[3];
+ /* ingress egress up sections */
+ __le32 ingress_table; /* bitmap, 3 bits per up */
+#define ICE_AQ_VSI_UP_TABLE_UP0_S 0
+#define ICE_AQ_VSI_UP_TABLE_UP0_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP0_S)
+#define ICE_AQ_VSI_UP_TABLE_UP1_S 3
+#define ICE_AQ_VSI_UP_TABLE_UP1_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP1_S)
+#define ICE_AQ_VSI_UP_TABLE_UP2_S 6
+#define ICE_AQ_VSI_UP_TABLE_UP2_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP2_S)
+#define ICE_AQ_VSI_UP_TABLE_UP3_S 9
+#define ICE_AQ_VSI_UP_TABLE_UP3_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP3_S)
+#define ICE_AQ_VSI_UP_TABLE_UP4_S 12
+#define ICE_AQ_VSI_UP_TABLE_UP4_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP4_S)
+#define ICE_AQ_VSI_UP_TABLE_UP5_S 15
+#define ICE_AQ_VSI_UP_TABLE_UP5_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP5_S)
+#define ICE_AQ_VSI_UP_TABLE_UP6_S 18
+#define ICE_AQ_VSI_UP_TABLE_UP6_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP6_S)
+#define ICE_AQ_VSI_UP_TABLE_UP7_S 21
+#define ICE_AQ_VSI_UP_TABLE_UP7_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP7_S)
+ __le32 egress_table; /* same defines as for ingress table */
+ /* outer tags section */
+ __le16 outer_tag;
+ u8 outer_tag_flags;
+#define ICE_AQ_VSI_OUTER_TAG_MODE_S 0
+#define ICE_AQ_VSI_OUTER_TAG_MODE_M (0x3 << ICE_AQ_VSI_OUTER_TAG_MODE_S)
+#define ICE_AQ_VSI_OUTER_TAG_NOTHING 0x0
+#define ICE_AQ_VSI_OUTER_TAG_REMOVE 0x1
+#define ICE_AQ_VSI_OUTER_TAG_COPY 0x2
+#define ICE_AQ_VSI_OUTER_TAG_TYPE_S 2
+#define ICE_AQ_VSI_OUTER_TAG_TYPE_M (0x3 << ICE_AQ_VSI_OUTER_TAG_TYPE_S)
+#define ICE_AQ_VSI_OUTER_TAG_NONE 0x0
+#define ICE_AQ_VSI_OUTER_TAG_STAG 0x1
+#define ICE_AQ_VSI_OUTER_TAG_VLAN_8100 0x2
+#define ICE_AQ_VSI_OUTER_TAG_VLAN_9100 0x3
+#define ICE_AQ_VSI_OUTER_TAG_INSERT BIT(4)
+#define ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST BIT(6)
+ u8 outer_tag_reserved;
+ /* queue mapping section */
+ __le16 mapping_flags;
+#define ICE_AQ_VSI_Q_MAP_CONTIG 0x0
+#define ICE_AQ_VSI_Q_MAP_NONCONTIG BIT(0)
+ __le16 q_mapping[16];
+#define ICE_AQ_VSI_Q_S 0
+#define ICE_AQ_VSI_Q_M (0x7FF << ICE_AQ_VSI_Q_S)
+ __le16 tc_mapping[8];
+#define ICE_AQ_VSI_TC_Q_OFFSET_S 0
+#define ICE_AQ_VSI_TC_Q_OFFSET_M (0x7FF << ICE_AQ_VSI_TC_Q_OFFSET_S)
+#define ICE_AQ_VSI_TC_Q_NUM_S 11
+#define ICE_AQ_VSI_TC_Q_NUM_M (0xF << ICE_AQ_VSI_TC_Q_NUM_S)
+ /* queueing option section */
+ u8 q_opt_rss;
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_S 0
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_M (0x3 << ICE_AQ_VSI_Q_OPT_RSS_LUT_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_VSI 0x0
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_PF 0x2
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_GBL 0x3
+#define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S 2
+#define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_M (0xF << ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_HASH_S 6
+#define ICE_AQ_VSI_Q_OPT_RSS_HASH_M (0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_TPLZ (0x0 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_SYM_TPLZ (0x1 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_XOR (0x2 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_JHASH (0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+ u8 q_opt_tc;
+#define ICE_AQ_VSI_Q_OPT_TC_OVR_S 0
+#define ICE_AQ_VSI_Q_OPT_TC_OVR_M (0x1F << ICE_AQ_VSI_Q_OPT_TC_OVR_S)
+#define ICE_AQ_VSI_Q_OPT_PROF_TC_OVR BIT(7)
+ u8 q_opt_flags;
+#define ICE_AQ_VSI_Q_OPT_PE_FLTR_EN BIT(0)
+ u8 q_opt_reserved[3];
+ /* outer up section */
+ __le32 outer_up_table; /* same structure and defines as ingress tbl */
+ /* section 10 */
+ __le16 sect_10_reserved;
+ /* flow director section */
+ __le16 fd_options;
+#define ICE_AQ_VSI_FD_ENABLE BIT(0)
+#define ICE_AQ_VSI_FD_TX_AUTO_ENABLE BIT(1)
+#define ICE_AQ_VSI_FD_PROG_ENABLE BIT(3)
+ __le16 max_fd_fltr_dedicated;
+ __le16 max_fd_fltr_shared;
+ __le16 fd_def_q;
+#define ICE_AQ_VSI_FD_DEF_Q_S 0
+#define ICE_AQ_VSI_FD_DEF_Q_M (0x7FF << ICE_AQ_VSI_FD_DEF_Q_S)
+#define ICE_AQ_VSI_FD_DEF_GRP_S 12
+#define ICE_AQ_VSI_FD_DEF_GRP_M (0x7 << ICE_AQ_VSI_FD_DEF_GRP_S)
+ __le16 fd_report_opt;
+#define ICE_AQ_VSI_FD_REPORT_Q_S 0
+#define ICE_AQ_VSI_FD_REPORT_Q_M (0x7FF << ICE_AQ_VSI_FD_REPORT_Q_S)
+#define ICE_AQ_VSI_FD_DEF_PRIORITY_S 12
+#define ICE_AQ_VSI_FD_DEF_PRIORITY_M (0x7 << ICE_AQ_VSI_FD_DEF_PRIORITY_S)
+#define ICE_AQ_VSI_FD_DEF_DROP BIT(15)
+ /* PASID section */
+ __le32 pasid_id;
+#define ICE_AQ_VSI_PASID_ID_S 0
+#define ICE_AQ_VSI_PASID_ID_M (0xFFFFF << ICE_AQ_VSI_PASID_ID_S)
+#define ICE_AQ_VSI_PASID_ID_VALID BIT(31)
+ u8 reserved[24];
+};
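As an example of how the valid_sections gating is meant to be used (a sketch;
'info' stands for a pointer to this struct inside a hypothetical VSI
context): to accept both tagged and untagged traffic and strip nothing on
receive, a driver would set

    info->valid_sections |= CPU_TO_LE16(ICE_AQ_VSI_PROP_VLAN_VALID);
    info->vlan_flags = ICE_AQ_VSI_VLAN_MODE_ALL |
                       ICE_AQ_VSI_VLAN_EMOD_NOTHING;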
+
+
+
+#define ICE_MAX_NUM_RECIPES 64
+
+
+/* Add/Update/Remove/Get switch rules (indirect 0x02A0, 0x02A1, 0x02A2, 0x02A3)
+ */
+struct ice_aqc_sw_rules {
+ /* ops: add switch rules, refers to the number of rules.
+ * ops: update switch rules, refers to the number of filters.
+ * ops: remove switch rules, refers to the entry index.
+ * ops: get switch rules, refers to the number of filters.
+ */
+ __le16 num_rules_fltr_entry_index;
+ u8 reserved[6];
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+
+#pragma pack(1)
+/* Add/Update/Get/Remove lookup Rx/Tx command/response entry
+ * This structure describes the lookup rules and associated actions. "index"
+ * is returned as part of a response to a successful Add command, and can be
+ * used to identify the rule for Update/Get/Remove commands.
+ */
+struct ice_sw_rule_lkup_rx_tx {
+ __le16 recipe_id;
+#define ICE_SW_RECIPE_LOGICAL_PORT_FWD 10
+ /* Source port for LOOKUP_RX and source VSI in case of LOOKUP_TX */
+ __le16 src;
+ __le32 act;
+
+ /* Bit 0:1 - Action type */
+#define ICE_SINGLE_ACT_TYPE_S 0x00
+#define ICE_SINGLE_ACT_TYPE_M (0x3 << ICE_SINGLE_ACT_TYPE_S)
+
+ /* Bit 2 - Loop back enable
+ * Bit 3 - LAN enable
+ */
+#define ICE_SINGLE_ACT_LB_ENABLE BIT(2)
+#define ICE_SINGLE_ACT_LAN_ENABLE BIT(3)
+
+ /* Action type = 0 - Forward to VSI or VSI list */
+#define ICE_SINGLE_ACT_VSI_FORWARDING 0x0
+
+#define ICE_SINGLE_ACT_VSI_ID_S 4
+#define ICE_SINGLE_ACT_VSI_ID_M (0x3FF << ICE_SINGLE_ACT_VSI_ID_S)
+#define ICE_SINGLE_ACT_VSI_LIST_ID_S 4
+#define ICE_SINGLE_ACT_VSI_LIST_ID_M (0x3FF << ICE_SINGLE_ACT_VSI_LIST_ID_S)
+ /* This bit needs to be set if action is forward to VSI list */
+#define ICE_SINGLE_ACT_VSI_LIST BIT(14)
+#define ICE_SINGLE_ACT_VALID_BIT BIT(17)
+#define ICE_SINGLE_ACT_DROP BIT(18)
+
+ /* Action type = 1 - Forward to Queue of Queue group */
+#define ICE_SINGLE_ACT_TO_Q 0x1
+#define ICE_SINGLE_ACT_Q_INDEX_S 4
+#define ICE_SINGLE_ACT_Q_INDEX_M (0x7FF << ICE_SINGLE_ACT_Q_INDEX_S)
+#define ICE_SINGLE_ACT_Q_REGION_S 15
+#define ICE_SINGLE_ACT_Q_REGION_M (0x7 << ICE_SINGLE_ACT_Q_REGION_S)
+#define ICE_SINGLE_ACT_Q_PRIORITY BIT(18)
+
+ /* Action type = 2 - Prune */
+#define ICE_SINGLE_ACT_PRUNE 0x2
+#define ICE_SINGLE_ACT_EGRESS BIT(15)
+#define ICE_SINGLE_ACT_INGRESS BIT(16)
+#define ICE_SINGLE_ACT_PRUNET BIT(17)
+ /* Bit 18 should be set to 0 for this action */
+
+ /* Action type = 2 - Pointer */
+#define ICE_SINGLE_ACT_PTR 0x2
+#define ICE_SINGLE_ACT_PTR_VAL_S 4
+#define ICE_SINGLE_ACT_PTR_VAL_M (0x1FFF << ICE_SINGLE_ACT_PTR_VAL_S)
+ /* Bit 18 should be set to 1 */
+#define ICE_SINGLE_ACT_PTR_BIT BIT(18)
+
+ /* Action type = 3 - Other actions. Last two bits
+ * are other action identifier
+ */
+#define ICE_SINGLE_ACT_OTHER_ACTS 0x3
+#define ICE_SINGLE_OTHER_ACT_IDENTIFIER_S 17
+#define ICE_SINGLE_OTHER_ACT_IDENTIFIER_M \
+ (0x3 << ICE_SINGLE_OTHER_ACT_IDENTIFIER_S)
+
+ /* Bit 17:18 - Defines other actions */
+ /* Other action = 0 - Mirror VSI */
+#define ICE_SINGLE_OTHER_ACT_MIRROR 0
+#define ICE_SINGLE_ACT_MIRROR_VSI_ID_S 4
+#define ICE_SINGLE_ACT_MIRROR_VSI_ID_M \
+ (0x3FF << ICE_SINGLE_ACT_MIRROR_VSI_ID_S)
+
+ /* Other action = 3 - Set Stat count */
+#define ICE_SINGLE_OTHER_ACT_STAT_COUNT 3
+#define ICE_SINGLE_ACT_STAT_COUNT_INDEX_S 4
+#define ICE_SINGLE_ACT_STAT_COUNT_INDEX_M \
+ (0x7F << ICE_SINGLE_ACT_STAT_COUNT_INDEX_S)
+
+ __le16 index; /* The index of the rule in the lookup table */
+ /* Length and values of the header to be matched per recipe or
+ * lookup-type
+ */
+ __le16 hdr_len;
+ u8 hdr[1];
+};
+#pragma pack()
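Putting the action encoding together: forwarding matched packets to, say,
VSI 5 would be composed roughly as below (sketch; 'rule' is a pointer to a
filled-in struct ice_sw_rule_lkup_rx_tx):

    u32 act = ICE_SINGLE_ACT_VSI_FORWARDING | ICE_SINGLE_ACT_VALID_BIT |
              ICE_SINGLE_ACT_LAN_ENABLE |
              ((5 << ICE_SINGLE_ACT_VSI_ID_S) & ICE_SINGLE_ACT_VSI_ID_M);

    rule->act = CPU_TO_LE32(act);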
+
+
+/* Add/Update/Remove large action command/response entry
+ * "index" is returned as part of a response to a successful Add command, and
+ * can be used to identify the action for Update/Get/Remove commands.
+ */
+struct ice_sw_rule_lg_act {
+ __le16 index; /* Index in large action table */
+ __le16 size;
+ __le32 act[1]; /* array of 'size' action entries */
+ /* Max number of large actions */
+#define ICE_MAX_LG_ACT 4
+ /* Bit 0:1 - Action type */
+#define ICE_LG_ACT_TYPE_S 0
+#define ICE_LG_ACT_TYPE_M (0x7 << ICE_LG_ACT_TYPE_S)
+
+ /* Action type = 0 - Forward to VSI or VSI list */
+#define ICE_LG_ACT_VSI_FORWARDING 0
+#define ICE_LG_ACT_VSI_ID_S 3
+#define ICE_LG_ACT_VSI_ID_M (0x3FF << ICE_LG_ACT_VSI_ID_S)
+#define ICE_LG_ACT_VSI_LIST_ID_S 3
+#define ICE_LG_ACT_VSI_LIST_ID_M (0x3FF << ICE_LG_ACT_VSI_LIST_ID_S)
+ /* This bit needs to be set if action is forward to VSI list */
+#define ICE_LG_ACT_VSI_LIST BIT(13)
+
+#define ICE_LG_ACT_VALID_BIT BIT(16)
+
+ /* Action type = 1 - Forward to Queue of Queue group */
+#define ICE_LG_ACT_TO_Q 0x1
+#define ICE_LG_ACT_Q_INDEX_S 3
+#define ICE_LG_ACT_Q_INDEX_M (0x7FF << ICE_LG_ACT_Q_INDEX_S)
+#define ICE_LG_ACT_Q_REGION_S 14
+#define ICE_LG_ACT_Q_REGION_M (0x7 << ICE_LG_ACT_Q_REGION_S)
+#define ICE_LG_ACT_Q_PRIORITY_SET BIT(17)
+
+ /* Action type = 2 - Prune */
+#define ICE_LG_ACT_PRUNE 0x2
+#define ICE_LG_ACT_EGRESS BIT(14)
+#define ICE_LG_ACT_INGRESS BIT(15)
+#define ICE_LG_ACT_PRUNET BIT(16)
+
+ /* Action type = 3 - Mirror VSI */
+#define ICE_LG_OTHER_ACT_MIRROR 0x3
+#define ICE_LG_ACT_MIRROR_VSI_ID_S 3
+#define ICE_LG_ACT_MIRROR_VSI_ID_M (0x3FF << ICE_LG_ACT_MIRROR_VSI_ID_S)
+
+ /* Action type = 5 - Generic Value */
+#define ICE_LG_ACT_GENERIC 0x5
+#define ICE_LG_ACT_GENERIC_VALUE_S 3
+#define ICE_LG_ACT_GENERIC_VALUE_M (0xFFFF << ICE_LG_ACT_GENERIC_VALUE_S)
+#define ICE_LG_ACT_GENERIC_OFFSET_S 19
+#define ICE_LG_ACT_GENERIC_OFFSET_M (0x7 << ICE_LG_ACT_GENERIC_OFFSET_S)
+#define ICE_LG_ACT_GENERIC_PRIORITY_S 22
+#define ICE_LG_ACT_GENERIC_PRIORITY_M (0x7 << ICE_LG_ACT_GENERIC_PRIORITY_S)
+#define ICE_LG_ACT_GENERIC_OFF_RX_DESC_PROF_IDX 7
+
+ /* Action = 7 - Set Stat count */
+#define ICE_LG_ACT_STAT_COUNT 0x7
+#define ICE_LG_ACT_STAT_COUNT_S 3
+#define ICE_LG_ACT_STAT_COUNT_M (0x7F << ICE_LG_ACT_STAT_COUNT_S)
+};
+
+
+/* Add/Update/Remove VSI list command/response entry
+ * "index" is returned as part of a response to a successful Add command, and
+ * can be used to identify the VSI list for Update/Get/Remove commands.
+ */
+struct ice_sw_rule_vsi_list {
+ __le16 index; /* Index of VSI/Prune list */
+ __le16 number_vsi;
+ __le16 vsi[1]; /* Array of number_vsi VSI numbers */
+};
+
+
+#pragma pack(1)
+/* Query VSI list command/response entry */
+struct ice_sw_rule_vsi_list_query {
+ __le16 index;
+ ice_declare_bitmap(vsi_list, ICE_MAX_VSI);
+};
+#pragma pack()
+
+
+/* Add switch rule response:
+ * Content of return buffer is same as the input buffer. The status field and
+ * LUT index are updated as part of the response
+ */
+struct ice_aqc_sw_rules_elem {
+ __le16 type; /* Switch rule type, one of T_... */
+#define ICE_AQC_SW_RULES_T_LKUP_RX 0x0
+#define ICE_AQC_SW_RULES_T_LKUP_TX 0x1
+#define ICE_AQC_SW_RULES_T_LG_ACT 0x2
+#define ICE_AQC_SW_RULES_T_VSI_LIST_SET 0x3
+#define ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR 0x4
+#define ICE_AQC_SW_RULES_T_PRUNE_LIST_SET 0x5
+#define ICE_AQC_SW_RULES_T_PRUNE_LIST_CLEAR 0x6
+ __le16 status;
+ union {
+ struct ice_sw_rule_lkup_rx_tx lkup_tx_rx;
+ struct ice_sw_rule_lg_act lg_act;
+ struct ice_sw_rule_vsi_list vsi_list;
+ struct ice_sw_rule_vsi_list_query vsi_list_query;
+ } __packed pdata;
+};
+
+
+
+
+/* Get Default Topology (indirect 0x0400) */
+struct ice_aqc_get_topo {
+ u8 port_num;
+ u8 num_branches;
+ __le16 reserved1;
+ __le32 reserved2;
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+
+/* Update TSE (indirect 0x0403)
+ * Get TSE (indirect 0x0404)
+ * Add TSE (indirect 0x0401)
+ * Delete TSE (indirect 0x040F)
+ * Move TSE (indirect 0x0408)
+ * Suspend Nodes (indirect 0x0409)
+ * Resume Nodes (indirect 0x040A)
+ */
+struct ice_aqc_sched_elem_cmd {
+ __le16 num_elem_req; /* Used by commands */
+ __le16 num_elem_resp; /* Used by responses */
+ __le32 reserved;
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+
+/* This is the buffer for:
+ * Suspend Nodes (indirect 0x0409)
+ * Resume Nodes (indirect 0x040A)
+ */
+struct ice_aqc_suspend_resume_elem {
+ __le32 teid[1];
+};
+
+
+
+
+struct ice_aqc_elem_info_bw {
+ __le16 bw_profile_idx;
+ __le16 bw_alloc;
+};
+
+
+struct ice_aqc_txsched_elem {
+ u8 elem_type; /* Special field, reserved for some aq calls */
+#define ICE_AQC_ELEM_TYPE_UNDEFINED 0x0
+#define ICE_AQC_ELEM_TYPE_ROOT_PORT 0x1
+#define ICE_AQC_ELEM_TYPE_TC 0x2
+#define ICE_AQC_ELEM_TYPE_SE_GENERIC 0x3
+#define ICE_AQC_ELEM_TYPE_ENTRY_POINT 0x4
+#define ICE_AQC_ELEM_TYPE_LEAF 0x5
+#define ICE_AQC_ELEM_TYPE_SE_PADDED 0x6
+ u8 valid_sections;
+#define ICE_AQC_ELEM_VALID_GENERIC BIT(0)
+#define ICE_AQC_ELEM_VALID_CIR BIT(1)
+#define ICE_AQC_ELEM_VALID_EIR BIT(2)
+#define ICE_AQC_ELEM_VALID_SHARED BIT(3)
+ u8 generic;
+#define ICE_AQC_ELEM_GENERIC_MODE_M 0x1
+#define ICE_AQC_ELEM_GENERIC_PRIO_S 0x1
+#define ICE_AQC_ELEM_GENERIC_PRIO_M (0x7 << ICE_AQC_ELEM_GENERIC_PRIO_S)
+#define ICE_AQC_ELEM_GENERIC_SP_S 0x4
+#define ICE_AQC_ELEM_GENERIC_SP_M (0x1 << ICE_AQC_ELEM_GENERIC_SP_S)
+#define ICE_AQC_ELEM_GENERIC_ADJUST_VAL_S 0x5
+#define ICE_AQC_ELEM_GENERIC_ADJUST_VAL_M \
+ (0x3 << ICE_AQC_ELEM_GENERIC_ADJUST_VAL_S)
+ u8 flags; /* Special field, reserved for some aq calls */
+#define ICE_AQC_ELEM_FLAG_SUSPEND_M 0x1
+ struct ice_aqc_elem_info_bw cir_bw;
+ struct ice_aqc_elem_info_bw eir_bw;
+ __le16 srl_id;
+ __le16 reserved2;
+};
+
+
+struct ice_aqc_txsched_elem_data {
+ __le32 parent_teid;
+ __le32 node_teid;
+ struct ice_aqc_txsched_elem data;
+};
+
+
+struct ice_aqc_txsched_topo_grp_info_hdr {
+ __le32 parent_teid;
+ __le16 num_elems;
+ __le16 reserved2;
+};
+
+
+struct ice_aqc_add_elem {
+ struct ice_aqc_txsched_topo_grp_info_hdr hdr;
+ struct ice_aqc_txsched_elem_data generic[1];
+};
+
+
+
+struct ice_aqc_get_elem {
+ struct ice_aqc_txsched_elem_data generic[1];
+};
+
+
+struct ice_aqc_get_topo_elem {
+ struct ice_aqc_txsched_topo_grp_info_hdr hdr;
+ struct ice_aqc_txsched_elem_data
+ generic[ICE_AQC_TOPO_MAX_LEVEL_NUM];
+};
+
+
+struct ice_aqc_delete_elem {
+ struct ice_aqc_txsched_topo_grp_info_hdr hdr;
+ __le32 teid[1];
+};
+
+
+
+
+
+
+
+/* Query Scheduler Resource Allocation (indirect 0x0412)
+ * This indirect command retrieves the scheduler resources allocated by
+ * EMP Firmware to the given PF.
+ */
+struct ice_aqc_query_txsched_res {
+ u8 reserved[8];
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+
+struct ice_aqc_generic_sched_props {
+ __le16 phys_levels;
+ __le16 logical_levels;
+ u8 flattening_bitmap;
+ u8 max_device_cgds;
+ u8 max_pf_cgds;
+ u8 rsvd0;
+ __le16 rdma_qsets;
+ u8 rsvd1[22];
+};
+
+
+struct ice_aqc_layer_props {
+ u8 logical_layer;
+ u8 chunk_size;
+ __le16 max_device_nodes;
+ __le16 max_pf_nodes;
+ u8 rsvd0[4];
+ __le16 max_sibl_grp_sz;
+ __le16 max_cir_rl_profiles;
+ __le16 max_eir_rl_profiles;
+ __le16 max_srl_profiles;
+ u8 rsvd1[14];
+};
+
+
+struct ice_aqc_query_txsched_res_resp {
+ struct ice_aqc_generic_sched_props sched_props;
+ struct ice_aqc_layer_props layer_props[ICE_AQC_TOPO_MAX_LEVEL_NUM];
+};
+
+
+
+/* Get PHY capabilities (indirect 0x0600) */
+struct ice_aqc_get_phy_caps {
+ u8 lport_num;
+ u8 reserved;
+ __le16 param0;
+ /* 18.0 - Report qualified modules */
+#define ICE_AQC_GET_PHY_RQM BIT(0)
+ /* 18.1 - 18.2 : Report mode
+ * 00b - Report NVM capabilities
+ * 01b - Report topology capabilities
+ * 10b - Report SW configured
+ */
+#define ICE_AQC_REPORT_MODE_S 1
+#define ICE_AQC_REPORT_MODE_M (3 << ICE_AQC_REPORT_MODE_S)
+#define ICE_AQC_REPORT_NVM_CAP 0
+#define ICE_AQC_REPORT_TOPO_CAP BIT(1)
+#define ICE_AQC_REPORT_SW_CFG BIT(2)
+ __le32 reserved1;
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+
+/* PHY type defines (extended):
+ * the first set of defines is for phy_type_low.
+ */
+#define ICE_PHY_TYPE_LOW_100BASE_TX BIT_ULL(0)
+#define ICE_PHY_TYPE_LOW_100M_SGMII BIT_ULL(1)
+#define ICE_PHY_TYPE_LOW_1000BASE_T BIT_ULL(2)
+#define ICE_PHY_TYPE_LOW_1000BASE_SX BIT_ULL(3)
+#define ICE_PHY_TYPE_LOW_1000BASE_LX BIT_ULL(4)
+#define ICE_PHY_TYPE_LOW_1000BASE_KX BIT_ULL(5)
+#define ICE_PHY_TYPE_LOW_1G_SGMII BIT_ULL(6)
+#define ICE_PHY_TYPE_LOW_2500BASE_T BIT_ULL(7)
+#define ICE_PHY_TYPE_LOW_2500BASE_X BIT_ULL(8)
+#define ICE_PHY_TYPE_LOW_2500BASE_KX BIT_ULL(9)
+#define ICE_PHY_TYPE_LOW_5GBASE_T BIT_ULL(10)
+#define ICE_PHY_TYPE_LOW_5GBASE_KR BIT_ULL(11)
+#define ICE_PHY_TYPE_LOW_10GBASE_T BIT_ULL(12)
+#define ICE_PHY_TYPE_LOW_10G_SFI_DA BIT_ULL(13)
+#define ICE_PHY_TYPE_LOW_10GBASE_SR BIT_ULL(14)
+#define ICE_PHY_TYPE_LOW_10GBASE_LR BIT_ULL(15)
+#define ICE_PHY_TYPE_LOW_10GBASE_KR_CR1 BIT_ULL(16)
+#define ICE_PHY_TYPE_LOW_10G_SFI_AOC_ACC BIT_ULL(17)
+#define ICE_PHY_TYPE_LOW_10G_SFI_C2C BIT_ULL(18)
+#define ICE_PHY_TYPE_LOW_25GBASE_T BIT_ULL(19)
+#define ICE_PHY_TYPE_LOW_25GBASE_CR BIT_ULL(20)
+#define ICE_PHY_TYPE_LOW_25GBASE_CR_S BIT_ULL(21)
+#define ICE_PHY_TYPE_LOW_25GBASE_CR1 BIT_ULL(22)
+#define ICE_PHY_TYPE_LOW_25GBASE_SR BIT_ULL(23)
+#define ICE_PHY_TYPE_LOW_25GBASE_LR BIT_ULL(24)
+#define ICE_PHY_TYPE_LOW_25GBASE_KR BIT_ULL(25)
+#define ICE_PHY_TYPE_LOW_25GBASE_KR_S BIT_ULL(26)
+#define ICE_PHY_TYPE_LOW_25GBASE_KR1 BIT_ULL(27)
+#define ICE_PHY_TYPE_LOW_25G_AUI_AOC_ACC BIT_ULL(28)
+#define ICE_PHY_TYPE_LOW_25G_AUI_C2C BIT_ULL(29)
+#define ICE_PHY_TYPE_LOW_40GBASE_CR4 BIT_ULL(30)
+#define ICE_PHY_TYPE_LOW_40GBASE_SR4 BIT_ULL(31)
+#define ICE_PHY_TYPE_LOW_40GBASE_LR4 BIT_ULL(32)
+#define ICE_PHY_TYPE_LOW_40GBASE_KR4 BIT_ULL(33)
+#define ICE_PHY_TYPE_LOW_40G_XLAUI_AOC_ACC BIT_ULL(34)
+#define ICE_PHY_TYPE_LOW_40G_XLAUI BIT_ULL(35)
+#define ICE_PHY_TYPE_LOW_MAX_INDEX 63
+
+struct ice_aqc_get_phy_caps_data {
+ __le64 phy_type_low; /* Use values from ICE_PHY_TYPE_LOW_* */
+ __le64 reserved;
+ u8 caps;
+#define ICE_AQC_PHY_EN_TX_LINK_PAUSE BIT(0)
+#define ICE_AQC_PHY_EN_RX_LINK_PAUSE BIT(1)
+#define ICE_AQC_PHY_LOW_POWER_MODE BIT(2)
+#define ICE_AQC_PHY_EN_LINK BIT(3)
+#define ICE_AQC_PHY_AN_MODE BIT(4)
+#define ICE_AQC_PHY_EN_MOD_QUAL BIT(5)
+#define ICE_AQC_PHY_EN_LESM BIT(6)
+#define ICE_AQC_PHY_EN_AUTO_FEC BIT(7)
+#define ICE_AQC_PHY_CAPS_MASK MAKEMASK(0xff, 0)
+ u8 low_power_ctrl;
+#define ICE_AQC_PHY_EN_D3COLD_LOW_POWER_AUTONEG BIT(0)
+ __le16 eee_cap;
+#define ICE_AQC_PHY_EEE_EN_100BASE_TX BIT(0)
+#define ICE_AQC_PHY_EEE_EN_1000BASE_T BIT(1)
+#define ICE_AQC_PHY_EEE_EN_10GBASE_T BIT(2)
+#define ICE_AQC_PHY_EEE_EN_1000BASE_KX BIT(3)
+#define ICE_AQC_PHY_EEE_EN_10GBASE_KR BIT(4)
+#define ICE_AQC_PHY_EEE_EN_25GBASE_KR BIT(5)
+#define ICE_AQC_PHY_EEE_EN_40GBASE_KR4 BIT(6)
+ __le16 eeer_value;
+ u8 phy_id_oui[4]; /* PHY/Module ID connected on the port */
+ u8 phy_fw_ver[8];
+ u8 link_fec_options;
+#define ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN BIT(0)
+#define ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ BIT(1)
+#define ICE_AQC_PHY_FEC_25G_RS_528_REQ BIT(2)
+#define ICE_AQC_PHY_FEC_25G_KR_REQ BIT(3)
+#define ICE_AQC_PHY_FEC_25G_RS_544_REQ BIT(4)
+#define ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN BIT(6)
+#define ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN BIT(7)
+#define ICE_AQC_PHY_FEC_MASK MAKEMASK(0xdf, 0)
+ u8 extended_compliance_code;
+#define ICE_MODULE_TYPE_TOTAL_BYTE 3
+ u8 module_type[ICE_MODULE_TYPE_TOTAL_BYTE];
+#define ICE_AQC_MOD_TYPE_BYTE0_SFP_PLUS 0xA0
+#define ICE_AQC_MOD_TYPE_BYTE0_QSFP_PLUS 0x80
+#define ICE_AQC_MOD_TYPE_BYTE1_SFP_PLUS_CU_PASSIVE BIT(0)
+#define ICE_AQC_MOD_TYPE_BYTE1_SFP_PLUS_CU_ACTIVE BIT(1)
+#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_SR BIT(4)
+#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_LR BIT(5)
+#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_LRM BIT(6)
+#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_ER BIT(7)
+#define ICE_AQC_MOD_TYPE_BYTE2_SFP_PLUS 0xA0
+#define ICE_AQC_MOD_TYPE_BYTE2_QSFP_PLUS 0x86
+ u8 qualified_module_count;
+#define ICE_AQC_QUAL_MOD_COUNT_MAX 16
+ struct {
+ u8 v_oui[3];
+ u8 rsvd3;
+ u8 v_part[16];
+ __le32 v_rev;
+ __le64 rsvd8;
+ } qual_modules[ICE_AQC_QUAL_MOD_COUNT_MAX];
+};
+
+
+/* Set PHY capabilities (direct 0x0601)
+ * NOTE: This command must be followed by setup link and restart auto-neg
+ */
+struct ice_aqc_set_phy_cfg {
+ u8 lport_num;
+ u8 reserved[7];
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+
+/* Set PHY config command data structure */
+struct ice_aqc_set_phy_cfg_data {
+ __le64 phy_type_low; /* Use values from ICE_PHY_TYPE_LOW_* */
+ __le64 rsvd0;
+ u8 caps;
+#define ICE_AQ_PHY_ENA_TX_PAUSE_ABILITY BIT(0)
+#define ICE_AQ_PHY_ENA_RX_PAUSE_ABILITY BIT(1)
+#define ICE_AQ_PHY_ENA_LOW_POWER BIT(2)
+#define ICE_AQ_PHY_ENA_LINK BIT(3)
+#define ICE_AQ_PHY_ENA_AUTO_LINK_UPDT BIT(5)
+#define ICE_AQ_PHY_ENA_LESM BIT(6)
+#define ICE_AQ_PHY_ENA_AUTO_FEC BIT(7)
+ u8 low_power_ctrl;
+ __le16 eee_cap; /* Value from ice_aqc_get_phy_caps */
+ __le16 eeer_value;
+ u8 link_fec_opt; /* Use defines from ice_aqc_get_phy_caps */
+ u8 rsvd1;
+};
+
+
+
+/* Restart AN command data structure (direct 0x0605)
+ * Also used for response, with only the lport_num field present.
+ */
+struct ice_aqc_restart_an {
+ u8 lport_num;
+ u8 reserved;
+ u8 cmd_flags;
+#define ICE_AQC_RESTART_AN_LINK_RESTART BIT(1)
+#define ICE_AQC_RESTART_AN_LINK_ENABLE BIT(2)
+ u8 reserved2[13];
+};
+
+
+/* Get link status (indirect 0x0607), also used for Link Status Event */
+struct ice_aqc_get_link_status {
+ u8 lport_num;
+ u8 reserved;
+ __le16 cmd_flags;
+#define ICE_AQ_LSE_M 0x3
+#define ICE_AQ_LSE_NOP 0x0
+#define ICE_AQ_LSE_DIS 0x2
+#define ICE_AQ_LSE_ENA 0x3
+ /* only response uses this flag */
+#define ICE_AQ_LSE_IS_ENABLED 0x1
+ __le32 reserved2;
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+
+/* Get link status response data structure, also used for Link Status Event */
+struct ice_aqc_get_link_status_data {
+ u8 topo_media_conflict;
+#define ICE_AQ_LINK_TOPO_CONFLICT BIT(0)
+#define ICE_AQ_LINK_MEDIA_CONFLICT BIT(1)
+#define ICE_AQ_LINK_TOPO_CORRUPT BIT(2)
+ u8 reserved1;
+ u8 link_info;
+#define ICE_AQ_LINK_UP BIT(0) /* Link Status */
+#define ICE_AQ_LINK_FAULT BIT(1)
+#define ICE_AQ_LINK_FAULT_TX BIT(2)
+#define ICE_AQ_LINK_FAULT_RX BIT(3)
+#define ICE_AQ_LINK_FAULT_REMOTE BIT(4)
+#define ICE_AQ_LINK_UP_PORT BIT(5) /* External Port Link Status */
+#define ICE_AQ_MEDIA_AVAILABLE BIT(6)
+#define ICE_AQ_SIGNAL_DETECT BIT(7)
+ u8 an_info;
+#define ICE_AQ_AN_COMPLETED BIT(0)
+#define ICE_AQ_LP_AN_ABILITY BIT(1)
+#define ICE_AQ_PD_FAULT BIT(2) /* Parallel Detection Fault */
+#define ICE_AQ_FEC_EN BIT(3)
+#define ICE_AQ_PHY_LOW_POWER BIT(4) /* Low Power State */
+#define ICE_AQ_LINK_PAUSE_TX BIT(5)
+#define ICE_AQ_LINK_PAUSE_RX BIT(6)
+#define ICE_AQ_QUALIFIED_MODULE BIT(7)
+ u8 ext_info;
+#define ICE_AQ_LINK_PHY_TEMP_ALARM BIT(0)
+#define ICE_AQ_LINK_EXCESSIVE_ERRORS BIT(1) /* Excessive Link Errors */
+ /* Port TX Suspended */
+#define ICE_AQ_LINK_TX_S 2
+#define ICE_AQ_LINK_TX_M (0x03 << ICE_AQ_LINK_TX_S)
+#define ICE_AQ_LINK_TX_ACTIVE 0
+#define ICE_AQ_LINK_TX_DRAINED 1
+#define ICE_AQ_LINK_TX_FLUSHED 3
+ u8 reserved2;
+ __le16 max_frame_size;
+ u8 cfg;
+#define ICE_AQ_LINK_25G_KR_FEC_EN BIT(0)
+#define ICE_AQ_LINK_25G_RS_528_FEC_EN BIT(1)
+#define ICE_AQ_LINK_25G_RS_544_FEC_EN BIT(2)
+#define ICE_AQ_FEC_MASK MAKEMASK(0x7, 0)
+ /* Pacing Config */
+#define ICE_AQ_CFG_PACING_S 3
+#define ICE_AQ_CFG_PACING_M (0xF << ICE_AQ_CFG_PACING_S)
+#define ICE_AQ_CFG_PACING_TYPE_M BIT(7)
+#define ICE_AQ_CFG_PACING_TYPE_AVG 0
+#define ICE_AQ_CFG_PACING_TYPE_FIXED ICE_AQ_CFG_PACING_TYPE_M
+ /* External Device Power Ability */
+ u8 power_desc;
+#define ICE_AQ_PWR_CLASS_M 0x3
+#define ICE_AQ_LINK_PWR_BASET_LOW_HIGH 0
+#define ICE_AQ_LINK_PWR_BASET_HIGH 1
+#define ICE_AQ_LINK_PWR_QSFP_CLASS_1 0
+#define ICE_AQ_LINK_PWR_QSFP_CLASS_2 1
+#define ICE_AQ_LINK_PWR_QSFP_CLASS_3 2
+#define ICE_AQ_LINK_PWR_QSFP_CLASS_4 3
+ __le16 link_speed;
+#define ICE_AQ_LINK_SPEED_10MB BIT(0)
+#define ICE_AQ_LINK_SPEED_100MB BIT(1)
+#define ICE_AQ_LINK_SPEED_1000MB BIT(2)
+#define ICE_AQ_LINK_SPEED_2500MB BIT(3)
+#define ICE_AQ_LINK_SPEED_5GB BIT(4)
+#define ICE_AQ_LINK_SPEED_10GB BIT(5)
+#define ICE_AQ_LINK_SPEED_20GB BIT(6)
+#define ICE_AQ_LINK_SPEED_25GB BIT(7)
+#define ICE_AQ_LINK_SPEED_40GB BIT(8)
+#define ICE_AQ_LINK_SPEED_UNKNOWN BIT(15)
+ __le32 reserved3; /* Aligns next field to 8-byte boundary */
+ __le64 phy_type_low; /* Use values from ICE_PHY_TYPE_LOW_* */
+ __le64 reserved4;
+};
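A small sketch of turning the link_speed bit into a plain rate for reporting
('link' points at this struct; LE16_TO_CPU is assumed to be the usual
base-code byte-order helper, and only a few speeds are shown):

    u32 mbps;

    switch (LE16_TO_CPU(link->link_speed)) {
    case ICE_AQ_LINK_SPEED_10GB:
        mbps = 10000;
        break;
    case ICE_AQ_LINK_SPEED_25GB:
        mbps = 25000;
        break;
    case ICE_AQ_LINK_SPEED_40GB:
        mbps = 40000;
        break;
    default:
        mbps = 0; /* unknown or not handled here */
        break;
    }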
+
+
+/* Set event mask command (direct 0x0613) */
+struct ice_aqc_set_event_mask {
+ u8 lport_num;
+ u8 reserved[7];
+ __le16 event_mask;
+#define ICE_AQ_LINK_EVENT_UPDOWN BIT(1)
+#define ICE_AQ_LINK_EVENT_MEDIA_NA BIT(2)
+#define ICE_AQ_LINK_EVENT_LINK_FAULT BIT(3)
+#define ICE_AQ_LINK_EVENT_PHY_TEMP_ALARM BIT(4)
+#define ICE_AQ_LINK_EVENT_EXCESSIVE_ERRORS BIT(5)
+#define ICE_AQ_LINK_EVENT_SIGNAL_DETECT BIT(6)
+#define ICE_AQ_LINK_EVENT_AN_COMPLETED BIT(7)
+#define ICE_AQ_LINK_EVENT_MODULE_QUAL_FAIL BIT(8)
+#define ICE_AQ_LINK_EVENT_PORT_TX_SUSPENDED BIT(9)
+ u8 reserved1[6];
+};
+
+
+
+/* Set MAC Loopback command (direct 0x0620) */
+struct ice_aqc_set_mac_lb {
+ u8 lb_mode;
+#define ICE_AQ_MAC_LB_EN BIT(0)
+#define ICE_AQ_MAC_LB_OSC_CLK BIT(1)
+ u8 reserved[15];
+};
+
+
+
+
+
+/* Set Port Identification LED (direct, 0x06E9) */
+struct ice_aqc_set_port_id_led {
+ u8 lport_num;
+ u8 lport_num_valid;
+#define ICE_AQC_PORT_ID_PORT_NUM_VALID BIT(0)
+ u8 ident_mode;
+#define ICE_AQC_PORT_IDENT_LED_BLINK BIT(0)
+#define ICE_AQC_PORT_IDENT_LED_ORIG 0
+ u8 rsvd[13];
+};
+
+
+
+/* NVM Read command (indirect 0x0701)
+ * NVM Erase commands (direct 0x0702)
+ * NVM Update commands (indirect 0x0703)
+ */
+struct ice_aqc_nvm {
+ __le16 offset_low;
+ u8 offset_high;
+ u8 cmd_flags;
+#define ICE_AQC_NVM_LAST_CMD BIT(0)
+#define ICE_AQC_NVM_PCIR_REQ BIT(0) /* Used by NVM Update reply */
+#define ICE_AQC_NVM_PRESERVATION_S 1
+#define ICE_AQC_NVM_PRESERVATION_M (3 << ICE_AQC_NVM_PRESERVATION_S)
+#define ICE_AQC_NVM_NO_PRESERVATION (0 << ICE_AQC_NVM_PRESERVATION_S)
+#define ICE_AQC_NVM_PRESERVE_ALL BIT(1)
+#define ICE_AQC_NVM_FACTORY_DEFAULT (2 << ICE_AQC_NVM_PRESERVATION_S)
+#define ICE_AQC_NVM_PRESERVE_SELECTED (3 << ICE_AQC_NVM_PRESERVATION_S)
+#define ICE_AQC_NVM_FLASH_ONLY BIT(7)
+ __le16 module_typeid;
+ __le16 length;
+#define ICE_AQC_NVM_ERASE_LEN 0xFFFF
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+
+/* Used for 0x0704 as well as for 0x0705 commands */
+struct ice_aqc_nvm_cfg {
+ u8 cmd_flags;
+#define ICE_AQC_ANVM_MULTIPLE_ELEMS BIT(0)
+#define ICE_AQC_ANVM_IMMEDIATE_FIELD BIT(1)
+#define ICE_AQC_ANVM_NEW_CFG BIT(2)
+ u8 reserved;
+ __le16 count;
+ __le16 id;
+ u8 reserved1[2];
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+
+struct ice_aqc_nvm_cfg_data {
+ __le16 field_id;
+ __le16 field_options;
+ __le16 field_value;
+};
+
+
+/* NVM Checksum Command (direct, 0x0706) */
+struct ice_aqc_nvm_checksum {
+ u8 flags;
+#define ICE_AQC_NVM_CHECKSUM_VERIFY BIT(0)
+#define ICE_AQC_NVM_CHECKSUM_RECALC BIT(1)
+ u8 rsvd;
+ __le16 checksum; /* Used only by response */
+#define ICE_AQC_NVM_CHECKSUM_CORRECT 0xBABA
+ u8 rsvd2[12];
+};
+
+
+/* Send to PF command (indirect 0x0801); 'id' is only used by the PF.
+ * Send to VF command (indirect 0x0802); 'id' is only used by the PF.
+ */
+struct ice_aqc_pf_vf_msg {
+ __le32 id;
+ u32 reserved;
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+
+
+
+/* Get/Set RSS key (indirect 0x0B04/0x0B02) */
+struct ice_aqc_get_set_rss_key {
+#define ICE_AQC_GSET_RSS_KEY_VSI_VALID BIT(15)
+#define ICE_AQC_GSET_RSS_KEY_VSI_ID_S 0
+#define ICE_AQC_GSET_RSS_KEY_VSI_ID_M (0x3FF << ICE_AQC_GSET_RSS_KEY_VSI_ID_S)
+ __le16 vsi_id;
+ u8 reserved[6];
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+
+#define ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE 0x28
+#define ICE_AQC_GET_SET_RSS_KEY_DATA_HASH_KEY_SIZE 0xC
+
+struct ice_aqc_get_set_rss_keys {
+ u8 standard_rss_key[ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE];
+ u8 extended_hash_key[ICE_AQC_GET_SET_RSS_KEY_DATA_HASH_KEY_SIZE];
+};
+
+
+/* Get/Set RSS LUT (indirect 0x0B05/0x0B03) */
+struct ice_aqc_get_set_rss_lut {
+#define ICE_AQC_GSET_RSS_LUT_VSI_VALID BIT(15)
+#define ICE_AQC_GSET_RSS_LUT_VSI_ID_S 0
+#define ICE_AQC_GSET_RSS_LUT_VSI_ID_M (0x1FF << ICE_AQC_GSET_RSS_LUT_VSI_ID_S)
+ __le16 vsi_id;
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S 0
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_M \
+ (0x3 << ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S)
+
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_VSI 0
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF 1
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL 2
+
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S 2
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M \
+ (0x3 << ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S)
+
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128 128
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128_FLAG 0
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512 512
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG 1
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K 2048
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG 2
+
+#define ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S 4
+#define ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_M \
+ (0xF << ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S)
+
+ __le16 flags;
+ __le32 reserved;
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+
+
+
+
+/* Add TX LAN Queues (indirect 0x0C30) */
+struct ice_aqc_add_txqs {
+ u8 num_qgrps;
+ u8 reserved[3];
+ __le32 reserved1;
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+
+/* This is the descriptor of each queue entry for the Add TX LAN Queues
+ * command (0x0C30). Only used within struct ice_aqc_add_tx_qgrp.
+ */
+struct ice_aqc_add_txqs_perq {
+ __le16 txq_id;
+ u8 rsvd[2];
+ __le32 q_teid;
+ u8 txq_ctx[22];
+ u8 rsvd2[2];
+ struct ice_aqc_txsched_elem info;
+};
+
+
+/* The format of the command buffer for Add TX LAN Queues (0x0C30)
+ * is an array of the following structs. Please note that the length of
+ * each struct ice_aqc_add_tx_qgrp is variable due
+ * to the variable number of queues in each group!
+ */
+struct ice_aqc_add_tx_qgrp {
+ __le32 parent_teid;
+ u8 num_txqs;
+ u8 rsvd[3];
+ struct ice_aqc_add_txqs_perq txqs[1];
+};
+
+
+/* Disable TX LAN Queues (indirect 0x0C31) */
+struct ice_aqc_dis_txqs {
+ u8 cmd_type;
+#define ICE_AQC_Q_DIS_CMD_S 0
+#define ICE_AQC_Q_DIS_CMD_M (0x3 << ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_NO_FUNC_RESET (0 << ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_VM_RESET BIT(ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_VF_RESET (2 << ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_PF_RESET (3 << ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_SUBSEQ_CALL BIT(2)
+#define ICE_AQC_Q_DIS_CMD_FLUSH_PIPE BIT(3)
+ u8 num_entries;
+ __le16 vmvf_and_timeout;
+#define ICE_AQC_Q_DIS_VMVF_NUM_S 0
+#define ICE_AQC_Q_DIS_VMVF_NUM_M (0x3FF << ICE_AQC_Q_DIS_VMVF_NUM_S)
+#define ICE_AQC_Q_DIS_TIMEOUT_S 10
+#define ICE_AQC_Q_DIS_TIMEOUT_M (0x3F << ICE_AQC_Q_DIS_TIMEOUT_S)
+ __le32 blocked_cgds;
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+
+/* The buffer for Disable TX LAN Queues (indirect 0x0C31)
+ * contains the following structures, arrayed one after the
+ * other.
+ * Note: Since the q_id is 16 bits wide, if the
+ * number of queues is even, then 2 bytes of alignment MUST be
+ * added before the start of the next group, to allow correct
+ * alignment of the parent_teid field.
+ */
+struct ice_aqc_dis_txq_item {
+ __le32 parent_teid;
+ u8 num_qs;
+ u8 rsvd;
+ /* The length of the q_id array varies according to num_qs */
+ __le16 q_id[1];
+ /* This only applies from F8 onward */
+#define ICE_AQC_Q_DIS_BUF_ELEM_TYPE_S 15
+#define ICE_AQC_Q_DIS_BUF_ELEM_TYPE_LAN_Q \
+ (0 << ICE_AQC_Q_DIS_BUF_ELEM_TYPE_S)
+#define ICE_AQC_Q_DIS_BUF_ELEM_TYPE_RDMA_QSET \
+ (1 << ICE_AQC_Q_DIS_BUF_ELEM_TYPE_S)
+};
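To make the alignment note above concrete: with num_qs == 2 an item occupies
4 + 1 + 1 + 2 * 2 = 10 bytes, so 2 pad bytes must follow before the next
item's parent_teid lands back on a 4-byte boundary; with an odd num_qs (say
3, giving 12 bytes) no padding is needed.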
+
+
+struct ice_aqc_dis_txq {
+ struct ice_aqc_dis_txq_item qgrps[1];
+};
+
+
+
+
+
+
+
+/* Lan Queue Overflow Event (direct, 0x1001) */
+struct ice_aqc_event_lan_overflow {
+ __le32 prtdcb_ruptq;
+ __le32 qtx_ctl;
+ u8 reserved[8];
+};
+
+
+
+/* Configure Firmware Logging Command (indirect 0xFF09)
+ * Logging Information Read Response (indirect 0xFF10)
+ * Note: The 0xFF10 command has no input parameters.
+ */
+struct ice_aqc_fw_logging {
+ u8 log_ctrl;
+#define ICE_AQC_FW_LOG_AQ_EN BIT(0)
+#define ICE_AQC_FW_LOG_UART_EN BIT(1)
+ u8 rsvd0;
+ u8 log_ctrl_valid; /* Not used by 0xFF10 Response */
+#define ICE_AQC_FW_LOG_AQ_VALID BIT(0)
+#define ICE_AQC_FW_LOG_UART_VALID BIT(1)
+ u8 rsvd1[5];
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+
+enum ice_aqc_fw_logging_mod {
+ ICE_AQC_FW_LOG_ID_GENERAL = 0,
+ ICE_AQC_FW_LOG_ID_CTRL,
+ ICE_AQC_FW_LOG_ID_LINK,
+ ICE_AQC_FW_LOG_ID_LINK_TOPO,
+ ICE_AQC_FW_LOG_ID_DNL,
+ ICE_AQC_FW_LOG_ID_I2C,
+ ICE_AQC_FW_LOG_ID_SDP,
+ ICE_AQC_FW_LOG_ID_MDIO,
+ ICE_AQC_FW_LOG_ID_ADMINQ,
+ ICE_AQC_FW_LOG_ID_HDMA,
+ ICE_AQC_FW_LOG_ID_LLDP,
+ ICE_AQC_FW_LOG_ID_DCBX,
+ ICE_AQC_FW_LOG_ID_DCB,
+ ICE_AQC_FW_LOG_ID_NETPROXY,
+ ICE_AQC_FW_LOG_ID_NVM,
+ ICE_AQC_FW_LOG_ID_AUTH,
+ ICE_AQC_FW_LOG_ID_VPD,
+ ICE_AQC_FW_LOG_ID_IOSF,
+ ICE_AQC_FW_LOG_ID_PARSER,
+ ICE_AQC_FW_LOG_ID_SW,
+ ICE_AQC_FW_LOG_ID_SCHEDULER,
+ ICE_AQC_FW_LOG_ID_TXQ,
+ ICE_AQC_FW_LOG_ID_RSVD,
+ ICE_AQC_FW_LOG_ID_POST,
+ ICE_AQC_FW_LOG_ID_WATCHDOG,
+ ICE_AQC_FW_LOG_ID_TASK_DISPATCH,
+ ICE_AQC_FW_LOG_ID_MNG,
+ ICE_AQC_FW_LOG_ID_MAX,
+};
+
+/* This is the buffer for both of the logging commands.
+ * The entry array size depends on the datalen parameter in the descriptor.
+ * There will be a total of datalen / 2 entries.
+ */
+struct ice_aqc_fw_logging_data {
+ __le16 entry[1];
+#define ICE_AQC_FW_LOG_ID_S 0
+#define ICE_AQC_FW_LOG_ID_M (0xFFF << ICE_AQC_FW_LOG_ID_S)
+
+#define ICE_AQC_FW_LOG_CONF_SUCCESS 0 /* Used by response */
+#define ICE_AQC_FW_LOG_CONF_BAD_INDX BIT(12) /* Used by response */
+
+#define ICE_AQC_FW_LOG_EN_S 12
+#define ICE_AQC_FW_LOG_EN_M (0xF << ICE_AQC_FW_LOG_EN_S)
+#define ICE_AQC_FW_LOG_INFO_EN BIT(12) /* Used by command */
+#define ICE_AQC_FW_LOG_INIT_EN BIT(13) /* Used by command */
+#define ICE_AQC_FW_LOG_FLOW_EN BIT(14) /* Used by command */
+#define ICE_AQC_FW_LOG_ERR_EN BIT(15) /* Used by command */
+};
+
+
+/* Get/Clear FW Log (indirect 0xFF11) */
+struct ice_aqc_get_clear_fw_log {
+ u8 flags;
+#define ICE_AQC_FW_LOG_CLEAR BIT(0)
+#define ICE_AQC_FW_LOG_MORE_DATA_AVAIL BIT(1)
+ u8 rsvd1[7];
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+
+/**
+ * struct ice_aq_desc - Admin Queue (AQ) descriptor
+ * @flags: ICE_AQ_FLAG_* flags
+ * @opcode: AQ command opcode
+ * @datalen: length in bytes of indirect/external data buffer
+ * @retval: return value from firmware
+ * @cookie_high: opaque data high-half
+ * @cookie_low: opaque data low-half
+ * @params: command-specific parameters
+ *
+ * Descriptor format for commands the driver posts on the Admin Transmit Queue
+ * (ATQ). The firmware writes back onto the command descriptor and returns
+ * the result of the command. Asynchronous events that are not an immediate
+ * result of the command are written to the Admin Receive Queue (ARQ) using
+ * the same descriptor format. Descriptors are in little-endian notation with
+ * 32-bit words.
+ */
+struct ice_aq_desc {
+ __le16 flags;
+ __le16 opcode;
+ __le16 datalen;
+ __le16 retval;
+ __le32 cookie_high;
+ __le32 cookie_low;
+ union {
+ u8 raw[16];
+ struct ice_aqc_generic generic;
+ struct ice_aqc_get_ver get_ver;
+ struct ice_aqc_q_shutdown q_shutdown;
+ struct ice_aqc_req_res res_owner;
+ struct ice_aqc_manage_mac_read mac_read;
+ struct ice_aqc_manage_mac_write mac_write;
+ struct ice_aqc_clear_pxe clear_pxe;
+ struct ice_aqc_list_caps get_cap;
+ struct ice_aqc_get_phy_caps get_phy;
+ struct ice_aqc_set_phy_cfg set_phy;
+ struct ice_aqc_restart_an restart_an;
+ struct ice_aqc_set_port_id_led set_port_id_led;
+ struct ice_aqc_get_sw_cfg get_sw_conf;
+ struct ice_aqc_sw_rules sw_rules;
+ struct ice_aqc_get_topo get_topo;
+ struct ice_aqc_sched_elem_cmd sched_elem_cmd;
+ struct ice_aqc_query_txsched_res query_sched_res;
+ struct ice_aqc_nvm nvm;
+ struct ice_aqc_nvm_cfg nvm_cfg;
+ struct ice_aqc_nvm_checksum nvm_checksum;
+ struct ice_aqc_pf_vf_msg virt;
+ struct ice_aqc_get_set_rss_lut get_set_rss_lut;
+ struct ice_aqc_get_set_rss_key get_set_rss_key;
+ struct ice_aqc_add_txqs add_txqs;
+ struct ice_aqc_dis_txqs dis_txqs;
+ struct ice_aqc_add_get_update_free_vsi vsi_cmd;
+ struct ice_aqc_add_update_free_vsi_resp add_update_free_vsi_res;
+ struct ice_aqc_fw_logging fw_logging;
+ struct ice_aqc_get_clear_fw_log get_clear_fw_log;
+ struct ice_aqc_set_mac_lb set_mac_lb;
+ struct ice_aqc_alloc_free_res_cmd sw_res_ctrl;
+ struct ice_aqc_set_event_mask set_event_mask;
+ struct ice_aqc_get_link_status get_link_status;
+ } params;
+};
+
+
+/* FW defined boundary for a large buffer, 4k >= Large buffer > 512 bytes */
+#define ICE_AQ_LG_BUF 512
+
+/* Flags sub-structure
+ * |0 |1 |2 |3 |4 |5 |6 |7 |8 |9 |10 |11 |12 |13 |14 |15 |
+ * |DD |CMP|ERR|VFE| * * RESERVED * * |LB |RD |VFC|BUF|SI |EI |FE |
+ */
+
+/* command flags and offsets */
+#define ICE_AQ_FLAG_DD_S 0
+#define ICE_AQ_FLAG_CMP_S 1
+#define ICE_AQ_FLAG_ERR_S 2
+#define ICE_AQ_FLAG_VFE_S 3
+#define ICE_AQ_FLAG_LB_S 9
+#define ICE_AQ_FLAG_RD_S 10
+#define ICE_AQ_FLAG_VFC_S 11
+#define ICE_AQ_FLAG_BUF_S 12
+#define ICE_AQ_FLAG_SI_S 13
+#define ICE_AQ_FLAG_EI_S 14
+#define ICE_AQ_FLAG_FE_S 15
+
+#define ICE_AQ_FLAG_DD BIT(ICE_AQ_FLAG_DD_S) /* 0x1 */
+#define ICE_AQ_FLAG_CMP BIT(ICE_AQ_FLAG_CMP_S) /* 0x2 */
+#define ICE_AQ_FLAG_ERR BIT(ICE_AQ_FLAG_ERR_S) /* 0x4 */
+#define ICE_AQ_FLAG_VFE BIT(ICE_AQ_FLAG_VFE_S) /* 0x8 */
+#define ICE_AQ_FLAG_LB BIT(ICE_AQ_FLAG_LB_S) /* 0x200 */
+#define ICE_AQ_FLAG_RD BIT(ICE_AQ_FLAG_RD_S) /* 0x400 */
+#define ICE_AQ_FLAG_VFC BIT(ICE_AQ_FLAG_VFC_S) /* 0x800 */
+#define ICE_AQ_FLAG_BUF BIT(ICE_AQ_FLAG_BUF_S) /* 0x1000 */
+#define ICE_AQ_FLAG_SI BIT(ICE_AQ_FLAG_SI_S) /* 0x2000 */
+#define ICE_AQ_FLAG_EI BIT(ICE_AQ_FLAG_EI_S) /* 0x4000 */
+#define ICE_AQ_FLAG_FE BIT(ICE_AQ_FLAG_FE_S) /* 0x8000 */
+
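For illustration, these flags are typically combined like this when posting
an indirect command (sketch; 'cmd_data_in_buf' and 'buf_size' are
illustrative variables):

    desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_BUF); /* command has a buffer */
    if (cmd_data_in_buf)
        desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD); /* buf holds cmd data */
    if (buf_size > ICE_AQ_LG_BUF)
        desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_LB); /* large buffer */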
+/* error codes */
+enum ice_aq_err {
+ ICE_AQ_RC_OK = 0, /* Success */
+ ICE_AQ_RC_EPERM = 1, /* Operation not permitted */
+ ICE_AQ_RC_ENOENT = 2, /* No such element */
+ ICE_AQ_RC_ESRCH = 3, /* Bad opcode */
+ ICE_AQ_RC_EINTR = 4, /* Operation interrupted */
+ ICE_AQ_RC_EIO = 5, /* I/O error */
+ ICE_AQ_RC_ENXIO = 6, /* No such resource */
+ ICE_AQ_RC_E2BIG = 7, /* Arg too long */
+ ICE_AQ_RC_EAGAIN = 8, /* Try again */
+ ICE_AQ_RC_ENOMEM = 9, /* Out of memory */
+ ICE_AQ_RC_EACCES = 10, /* Permission denied */
+ ICE_AQ_RC_EFAULT = 11, /* Bad address */
+ ICE_AQ_RC_EBUSY = 12, /* Device or resource busy */
+ ICE_AQ_RC_EEXIST = 13, /* object already exists */
+ ICE_AQ_RC_EINVAL = 14, /* Invalid argument */
+ ICE_AQ_RC_ENOTTY = 15, /* Not a typewriter */
+ ICE_AQ_RC_ENOSPC = 16, /* No space left or allocation failure */
+ ICE_AQ_RC_ENOSYS = 17, /* Function not implemented */
+ ICE_AQ_RC_ERANGE = 18, /* Parameter out of range */
+ ICE_AQ_RC_EFLUSHED = 19, /* Cmd flushed due to prev cmd error */
+ ICE_AQ_RC_BAD_ADDR = 20, /* Descriptor contains a bad pointer */
+ ICE_AQ_RC_EMODE = 21, /* Op not allowed in current dev mode */
+ ICE_AQ_RC_EFBIG = 22, /* File too big */
+ ICE_AQ_RC_ESBCOMP = 23, /* SB-IOSF completion unsuccessful */
+ ICE_AQ_RC_ENOSEC = 24, /* Missing security manifest */
+ ICE_AQ_RC_EBADSIG = 25, /* Bad RSA signature */
+ ICE_AQ_RC_ESVN = 26, /* SVN number prohibits this package */
+ ICE_AQ_RC_EBADMAN = 27, /* Manifest hash mismatch */
+ ICE_AQ_RC_EBADBUF = 28, /* Buffer hash mismatches manifest */
+};
+
+/* Admin Queue command opcodes */
+enum ice_adminq_opc {
+ /* AQ commands */
+ ice_aqc_opc_get_ver = 0x0001,
+ ice_aqc_opc_driver_ver = 0x0002,
+ ice_aqc_opc_q_shutdown = 0x0003,
+ ice_aqc_opc_get_exp_err = 0x0005,
+
+ /* resource ownership */
+ ice_aqc_opc_req_res = 0x0008,
+ ice_aqc_opc_release_res = 0x0009,
+
+ /* device/function capabilities */
+ ice_aqc_opc_list_func_caps = 0x000A,
+ ice_aqc_opc_list_dev_caps = 0x000B,
+
+ /* manage MAC address */
+ ice_aqc_opc_manage_mac_read = 0x0107,
+ ice_aqc_opc_manage_mac_write = 0x0108,
+
+ /* PXE */
+ ice_aqc_opc_clear_pxe_mode = 0x0110,
+
+ /* internal switch commands */
+ ice_aqc_opc_get_sw_cfg = 0x0200,
+
+ /* Alloc/Free/Get Resources */
+ ice_aqc_opc_get_res_alloc = 0x0204,
+ ice_aqc_opc_alloc_res = 0x0208,
+ ice_aqc_opc_free_res = 0x0209,
+ ice_aqc_opc_get_allocd_res_desc = 0x020A,
+
+ /* VSI commands */
+ ice_aqc_opc_add_vsi = 0x0210,
+ ice_aqc_opc_update_vsi = 0x0211,
+ ice_aqc_opc_get_vsi_params = 0x0212,
+ ice_aqc_opc_free_vsi = 0x0213,
+
+
+
+ /* switch rules population commands */
+ ice_aqc_opc_add_sw_rules = 0x02A0,
+ ice_aqc_opc_update_sw_rules = 0x02A1,
+ ice_aqc_opc_remove_sw_rules = 0x02A2,
+ ice_aqc_opc_get_sw_rules = 0x02A3,
+ ice_aqc_opc_clear_pf_cfg = 0x02A4,
+
+
+ /* transmit scheduler commands */
+ ice_aqc_opc_get_dflt_topo = 0x0400,
+ ice_aqc_opc_add_sched_elems = 0x0401,
+ ice_aqc_opc_cfg_sched_elems = 0x0403,
+ ice_aqc_opc_get_sched_elems = 0x0404,
+ ice_aqc_opc_move_sched_elems = 0x0408,
+ ice_aqc_opc_suspend_sched_elems = 0x0409,
+ ice_aqc_opc_resume_sched_elems = 0x040A,
+ ice_aqc_opc_suspend_sched_traffic = 0x040B,
+ ice_aqc_opc_resume_sched_traffic = 0x040C,
+ ice_aqc_opc_delete_sched_elems = 0x040F,
+ ice_aqc_opc_query_sched_res = 0x0412,
+ ice_aqc_opc_query_node_to_root = 0x0413,
+ ice_aqc_opc_cfg_l2_node_cgd = 0x0414,
+
+ /* PHY commands */
+ ice_aqc_opc_get_phy_caps = 0x0600,
+ ice_aqc_opc_set_phy_cfg = 0x0601,
+ ice_aqc_opc_set_mac_cfg = 0x0603,
+ ice_aqc_opc_restart_an = 0x0605,
+ ice_aqc_opc_get_link_status = 0x0607,
+ ice_aqc_opc_set_event_mask = 0x0613,
+ ice_aqc_opc_set_mac_lb = 0x0620,
+ ice_aqc_opc_set_port_id_led = 0x06E9,
+ ice_aqc_opc_get_port_options = 0x06EA,
+ ice_aqc_opc_set_port_option = 0x06EB,
+ ice_aqc_opc_set_gpio = 0x06EC,
+ ice_aqc_opc_get_gpio = 0x06ED,
+
+ /* NVM commands */
+ ice_aqc_opc_nvm_read = 0x0701,
+ ice_aqc_opc_nvm_erase = 0x0702,
+ ice_aqc_opc_nvm_update = 0x0703,
+ ice_aqc_opc_nvm_cfg_read = 0x0704,
+ ice_aqc_opc_nvm_cfg_write = 0x0705,
+ ice_aqc_opc_nvm_checksum = 0x0706,
+
+ /* PF/VF mailbox commands */
+ ice_mbx_opc_send_msg_to_pf = 0x0801,
+ ice_mbx_opc_send_msg_to_vf = 0x0802,
+
+ /* RSS commands */
+ ice_aqc_opc_set_rss_key = 0x0B02,
+ ice_aqc_opc_set_rss_lut = 0x0B03,
+ ice_aqc_opc_get_rss_key = 0x0B04,
+ ice_aqc_opc_get_rss_lut = 0x0B05,
+
+ /* TX queue handling commands/events */
+ ice_aqc_opc_add_txqs = 0x0C30,
+ ice_aqc_opc_dis_txqs = 0x0C31,
+ ice_aqc_opc_txqs_cleanup = 0x0C31,
+ ice_aqc_opc_move_recfg_txqs = 0x0C32,
+
+ /* Standalone Commands/Events */
+ ice_aqc_opc_event_lan_overflow = 0x1001,
+
+ /* debug commands */
+ ice_aqc_opc_fw_logging = 0xFF09,
+ ice_aqc_opc_fw_logging_info = 0xFF10,
+ ice_aqc_opc_get_clear_fw_log = 0xFF11
+};
+
+#endif /* _ICE_ADMINQ_CMD_H_ */
diff --git a/drivers/net/ice/base/ice_alloc.h b/drivers/net/ice/base/ice_alloc.h
new file mode 100644
index 0000000..7883104
--- /dev/null
+++ b/drivers/net/ice/base/ice_alloc.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_ALLOC_H_
+#define _ICE_ALLOC_H_
+
+/* Memory types */
+enum ice_memset_type {
+ ICE_NONDMA_MEM = 0,
+ ICE_DMA_MEM
+};
+
+/* Memcpy types */
+enum ice_memcpy_type {
+ ICE_NONDMA_TO_NONDMA = 0,
+ ICE_NONDMA_TO_DMA,
+ ICE_DMA_TO_DMA,
+ ICE_DMA_TO_NONDMA
+};
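+
+/* Illustrative note (an assumption, not part of this patch): on DPDK the
+ * osdep layer is expected to map ice_memcpy() onto a plain rte_memcpy();
+ * the enums above exist so that environments which must treat DMA-coherent
+ * memory differently can dispatch on the type, e.g.:
+ *
+ *	#define ice_memcpy(dst, src, size, dir) rte_memcpy(dst, src, size)
+ */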
+
+#endif /* _ICE_ALLOC_H_ */
diff --git a/drivers/net/ice/base/ice_bitops.h b/drivers/net/ice/base/ice_bitops.h
new file mode 100644
index 0000000..ac6a51b
--- /dev/null
+++ b/drivers/net/ice/base/ice_bitops.h
@@ -0,0 +1,233 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_BITOPS_H_
+#define _ICE_BITOPS_H_
+
+/* Define the size of the bitmap chunk */
+typedef u32 ice_bitmap_t;
+
+/* Number of bits per bitmap chunk */
+#define BITS_PER_CHUNK (BITS_PER_BYTE * sizeof(ice_bitmap_t))
+/* Determine which chunk a bit belongs in */
+#define BIT_CHUNK(nr) ((nr) / BITS_PER_CHUNK)
+/* How many chunks are required to store this many bits */
+#define BITS_TO_CHUNKS(sz) DIVIDE_AND_ROUND_UP((sz), BITS_PER_CHUNK)
+/* Which bit inside a chunk this bit corresponds to */
+#define BIT_IN_CHUNK(nr) BIT((nr) % BITS_PER_CHUNK)
+/* How many bits are valid in the last chunk, assumes nr > 0 */
+#define LAST_CHUNK_BITS(nr) ((((nr) - 1) % BITS_PER_CHUNK) + 1)
+/* Generate a bitmask of valid bits in the last chunk, assumes nr > 0 */
+#define LAST_CHUNK_MASK(nr) (((ice_bitmap_t)~0) >> \
+ (BITS_PER_CHUNK - LAST_CHUNK_BITS(nr)))
+
+#define ice_declare_bitmap(A, sz) \
+ ice_bitmap_t A[BITS_TO_CHUNKS(sz)]
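+
+/* Worked example (illustrative): with 32-bit chunks,
+ * ice_declare_bitmap(bmp, 100) allocates BITS_TO_CHUNKS(100) = 4 chunks,
+ * LAST_CHUNK_BITS(100) = ((100 - 1) % 32) + 1 = 4, and
+ * LAST_CHUNK_MASK(100) = ~0 >> (32 - 4) = 0x0000000F, i.e. only the low
+ * four bits of the final chunk are valid.
+ */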
+
+/**
+ * ice_is_bit_set - Check state of a bit in a bitmap
+ * @bitmap: the bitmap to check
+ * @nr: the bit to check
+ *
+ * Returns true if bit nr of bitmap is set. False otherwise. Assumes that nr
+ * is less than the size of the bitmap.
+ */
+static inline bool ice_is_bit_set(const ice_bitmap_t *bitmap, u16 nr)
+{
+ return !!(bitmap[BIT_CHUNK(nr)] & BIT_IN_CHUNK(nr));
+}
+
+/**
+ * ice_clear_bit - Clear a bit in a bitmap
+ * @bitmap: the bitmap to change
+ * @nr: the bit to change
+ *
+ * Clears the bit nr in bitmap. Assumes that nr is less than the size of the
+ * bitmap.
+ */
+static inline void ice_clear_bit(u16 nr, ice_bitmap_t *bitmap)
+{
+ bitmap[BIT_CHUNK(nr)] &= ~BIT_IN_CHUNK(nr);
+}
+
+/**
+ * ice_set_bit - Set a bit in a bitmap
+ * @bitmap: the bitmap to change
+ * @nr: the bit to change
+ *
+ * Sets the bit nr in bitmap. Assumes that nr is less than the size of the
+ * bitmap.
+ */
+static inline void ice_set_bit(u16 nr, ice_bitmap_t *bitmap)
+{
+ bitmap[BIT_CHUNK(nr)] |= BIT_IN_CHUNK(nr);
+}
+
+/**
+ * ice_zero_bitmap - set bits of bitmap to zero
+ * @bmp: the bitmap to zero
+ * @size: Size of the bitmap in bits
+ *
+ * This function zeros all bits of a bitmap that lie within the size boundary.
+ */
+static inline void ice_zero_bitmap(ice_bitmap_t *bmp, u16 size)
+{
+ ice_bitmap_t mask;
+ u16 i;
+
+	/* Handle all but the last chunk */
+ for (i = 0; i < BITS_TO_CHUNKS(size) - 1; i++)
+ bmp[i] = 0;
+	/* For the last chunk, take care not to modify bits outside the size
+	 * boundary; ANDing with ~mask clears only the valid bits and leaves
+	 * any bits beyond the boundary untouched.
+	 */
+ mask = LAST_CHUNK_MASK(size);
+ bmp[i] &= ~mask;
+}
+
+/**
+ * ice_and_bitmap - bitwise AND 2 bitmaps and store result in dst bitmap
+ * @dst: Destination bitmap that receives the result of the operation
+ * @bmp1: The first bitmap to intersect
+ * @bmp2: The second bitmap to intersect with the first
+ * @size: Size of the bitmaps in bits
+ *
+ * This function performs a bitwise AND on two "source" bitmaps of the same size
+ * and stores the result to "dst" bitmap. The "dst" bitmap must be of the same
+ * size as the "source" bitmaps to avoid buffer overflows. This function returns
+ * a non-zero value if at least one bit position is set in both "source"
+ * bitmaps.
+ */
+static inline int
+ice_and_bitmap(ice_bitmap_t *dst, const ice_bitmap_t *bmp1,
+ const ice_bitmap_t *bmp2, u16 size)
+{
+ ice_bitmap_t res = 0, mask;
+ u16 i;
+
+ /* Handle all but the last chunk */
+ for (i = 0; i < BITS_TO_CHUNKS(size) - 1; i++) {
+ dst[i] = bmp1[i] & bmp2[i];
+ res |= dst[i];
+ }
+
+ /* We want to take care not to modify any bits outside of the bitmap
+ * size, even in the destination bitmap. Thus, we won't directly
+ * assign the last bitmap, but instead use a bitmask to ensure we only
+ * modify bits which are within the size, and leave any bits above the
+ * size value alone.
+ */
+ mask = LAST_CHUNK_MASK(size);
+ dst[i] &= ~mask;
+ dst[i] |= (bmp1[i] & bmp2[i]) & mask;
+ res |= dst[i] & mask;
+
+ return res != 0;
+}
+
+/**
+ * ice_or_bitmap - bitwise OR 2 bitmaps and store result in dst bitmap
+ * @dst: Destination bitmap that receives the result of the operation
+ * @bmp1: The first bitmap to OR
+ * @bmp2: The second bitmap to OR with the first
+ * @size: Size of the bitmaps in bits
+ *
+ * This function performs a bitwise OR on two "source" bitmaps of the same size
+ * and stores the result to "dst" bitmap. The "dst" bitmap must be of the same
+ * size as the "source" bitmaps to avoid buffer overflows.
+ */
+static inline void
+ice_or_bitmap(ice_bitmap_t *dst, const ice_bitmap_t *bmp1,
+ const ice_bitmap_t *bmp2, u16 size)
+{
+ ice_bitmap_t mask;
+ u16 i;
+
+	/* Handle all but the last chunk */
+ for (i = 0; i < BITS_TO_CHUNKS(size) - 1; i++)
+ dst[i] = bmp1[i] | bmp2[i];
+
+ /* We want to only OR bits within the size. Furthermore, we also do
+ * not want to modify destination bits which are beyond the specified
+ * size. Use a bitmask to ensure that we only modify the bits that are
+ * within the specified size.
+ */
+ mask = LAST_CHUNK_MASK(size);
+ dst[i] &= ~mask;
+ dst[i] |= (bmp1[i] | bmp2[i]) & mask;
+}
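+
+/* Usage sketch (illustrative; all identifiers are local to the example):
+ * the AND/OR helpers are meant to be used on bitmaps declared with the same
+ * size so that the last-chunk masking lines up:
+ *
+ *	ice_declare_bitmap(caps_a, 48);
+ *	ice_declare_bitmap(caps_b, 48);
+ *	ice_declare_bitmap(common, 48);
+ *
+ *	ice_zero_bitmap(caps_a, 48);
+ *	ice_zero_bitmap(caps_b, 48);
+ *	ice_set_bit(3, caps_a);
+ *	ice_set_bit(3, caps_b);
+ *	if (ice_and_bitmap(common, caps_a, caps_b, 48))
+ *		handle_common_caps();
+ *
+ * ice_and_bitmap() returns non-zero here because bit 3 is set in both
+ * inputs; handle_common_caps() is hypothetical.
+ */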
+
+/**
+ * ice_find_next_bit - Find the index of the next set bit of a bitmap
+ * @bitmap: the bitmap to scan
+ * @size: the size in bits of the bitmap
+ * @offset: the offset to start at
+ *
+ * Scans the bitmap and returns the index of the first set bit which is equal
+ * to or after the specified offset. Will return size if no bits are set.
+ */
+static inline u16
+ice_find_next_bit(const ice_bitmap_t *bitmap, u16 size, u16 offset)
+{
+ u16 i, j;
+
+ if (offset >= size)
+ return size;
+
+ /* Since the starting position may not be directly on a chunk
+ * boundary, we need to be careful to handle the first chunk specially
+ */
+ i = BIT_CHUNK(offset);
+ if (bitmap[i] != 0) {
+ u16 off = i * BITS_PER_CHUNK;
+
+ for (j = offset % BITS_PER_CHUNK; j < BITS_PER_CHUNK; j++) {
+ if (ice_is_bit_set(bitmap, off + j))
+ return min(size, (u16)(off + j));
+ }
+ }
+
+ /* Now we handle the remaining chunks, if any */
+ for (i++; i < BITS_TO_CHUNKS(size); i++) {
+ if (bitmap[i] != 0) {
+ u16 off = i * BITS_PER_CHUNK;
+
+ for (j = 0; j < BITS_PER_CHUNK; j++) {
+ if (ice_is_bit_set(bitmap, off + j))
+ return min(size, (u16)(off + j));
+ }
+ }
+ }
+ return size;
+}
+
+/**
+ * ice_find_first_bit - Find the index of the first set bit of a bitmap
+ * @bitmap: the bitmap to scan
+ * @size: the size in bits of the bitmap
+ *
+ * Scans the bitmap and returns the index of the first set bit. Will return
+ * size if no bits are set.
+ */
+static inline u16 ice_find_first_bit(const ice_bitmap_t *bitmap, u16 size)
+{
+ return ice_find_next_bit(bitmap, size, 0);
+}
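+
+/* Usage sketch (illustrative): the two scan helpers compose into a loop over
+ * every set bit of a bitmap; process_bit() is hypothetical:
+ *
+ *	u16 bit;
+ *
+ *	for (bit = ice_find_first_bit(bmp, size); bit < size;
+ *	     bit = ice_find_next_bit(bmp, size, bit + 1))
+ *		process_bit(bit);
+ */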
+
+/**
+ * ice_is_any_bit_set - Return true if any bit in the bitmap is set
+ * @bitmap: the bitmap to check
+ * @size: the size of the bitmap
+ *
+ * Equivalent to checking if ice_find_first_bit returns a value less than the
+ * bitmap size.
+ */
+static inline bool ice_is_any_bit_set(ice_bitmap_t *bitmap, u16 size)
+{
+ return ice_find_first_bit(bitmap, size) < size;
+}
+
+#endif /* _ICE_BITOPS_H_ */
diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
new file mode 100644
index 0000000..d2e294e
--- /dev/null
+++ b/drivers/net/ice/base/ice_common.c
@@ -0,0 +1,3332 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_common.h"
+#include "ice_sched.h"
+#include "ice_adminq_cmd.h"
+
+#include "ice_flow.h"
+#include "ice_switch.h"
+
+#define ICE_PF_RESET_WAIT_COUNT 200
+
+#define ICE_PROG_FLEX_ENTRY(hw, rxdid, mdid, idx) \
+ wr32((hw), GLFLXP_RXDID_FLX_WRD_##idx(rxdid), \
+ ((ICE_RX_OPC_MDID << \
+ GLFLXP_RXDID_FLX_WRD_##idx##_RXDID_OPCODE_S) & \
+ GLFLXP_RXDID_FLX_WRD_##idx##_RXDID_OPCODE_M) | \
+ (((mdid) << GLFLXP_RXDID_FLX_WRD_##idx##_PROT_MDID_S) & \
+ GLFLXP_RXDID_FLX_WRD_##idx##_PROT_MDID_M))
+
+#define ICE_PROG_FLG_ENTRY(hw, rxdid, flg_0, flg_1, flg_2, flg_3, idx) \
+ wr32((hw), GLFLXP_RXDID_FLAGS(rxdid, idx), \
+ (((flg_0) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_S) & \
+ GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_M) | \
+ (((flg_1) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_S) & \
+ GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_M) | \
+ (((flg_2) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_S) & \
+ GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_M) | \
+ (((flg_3) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_S) & \
+ GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_M))
+
+/**
+ * ice_set_mac_type - Sets MAC type
+ * @hw: pointer to the HW structure
+ *
+ * This function sets the MAC type of the adapter based on the
+ * vendor ID and device ID stored in the hw structure.
+ */
+static enum ice_status ice_set_mac_type(struct ice_hw *hw)
+{
+ enum ice_status status = ICE_SUCCESS;
+
+ ice_debug(hw, ICE_DBG_TRACE, "ice_set_mac_type\n");
+
+ if (hw->vendor_id == ICE_INTEL_VENDOR_ID) {
+ switch (hw->device_id) {
+ default:
+ hw->mac_type = ICE_MAC_GENERIC;
+ break;
+ }
+ } else {
+ status = ICE_ERR_DEVICE_NOT_SUPPORTED;
+ }
+
+ ice_debug(hw, ICE_DBG_INIT, "found mac_type: %d, status: %d\n",
+ hw->mac_type, status);
+
+ return status;
+}
+
+#if defined(FPGA_SUPPORT) || defined(CVL_A0_SUPPORT)
+/**
+ * ice_dev_onetime_setup - one-time HW workarounds
+ * @hw: pointer to the hw struct
+ *
+ * Register-write workarounds needed to get Rx working on pre-production
+ * (FPGA/A0) hardware.
+ */
+void ice_dev_onetime_setup(struct ice_hw *hw)
+{
+ /* configure Rx - set non pxe mode */
+ wr32(hw, GLLAN_RCTL_0, 0x1);
+
+
+#define MBX_PF_VT_PFALLOC 0x00231E80 /* Reset Source: CORER */
+ /* set VFs per PF */
+ wr32(hw, MBX_PF_VT_PFALLOC, rd32(hw, PF_VT_PFALLOC_HIF));
+}
+#endif /* FPGA_SUPPORT || CVL_A0_SUPPORT */
+
+/**
+ * ice_clear_pf_cfg - Clear PF configuration
+ * @hw: pointer to the hardware structure
+ *
+ * Clears any existing PF configuration (VSIs, VSI lists, switch rules, port
+ * configuration, flow director filters, etc.).
+ */
+enum ice_status ice_clear_pf_cfg(struct ice_hw *hw)
+{
+ struct ice_aq_desc desc;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_clear_pf_cfg);
+
+ return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+}
+
+/**
+ * ice_aq_manage_mac_read - manage MAC address read command
+ * @hw: pointer to the hw struct
+ * @buf: a virtual buffer to hold the manage MAC read response
+ * @buf_size: Size of the virtual buffer
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function is used to return per PF station MAC address (0x0107).
+ * NOTE: Upon successful completion of this command, MAC address information
+ * is returned in the user-specified buffer, which should be interpreted as
+ * a "manage_mac_read" response. The returned MAC addresses are also stored
+ * in the HW struct (port.mac). ice_aq_discover_caps is expected to be
+ * called before this function.
+ */
+static enum ice_status
+ice_aq_manage_mac_read(struct ice_hw *hw, void *buf, u16 buf_size,
+ struct ice_sq_cd *cd)
+{
+ struct ice_aqc_manage_mac_read_resp *resp;
+ struct ice_aqc_manage_mac_read *cmd;
+ struct ice_aq_desc desc;
+ enum ice_status status;
+ u16 flags;
+ u8 i;
+
+ cmd = &desc.params.mac_read;
+
+ if (buf_size < sizeof(*resp))
+ return ICE_ERR_BUF_TOO_SHORT;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_manage_mac_read);
+
+ status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+ if (status)
+ return status;
+
+ resp = (struct ice_aqc_manage_mac_read_resp *)buf;
+ flags = LE16_TO_CPU(cmd->flags) & ICE_AQC_MAN_MAC_READ_M;
+
+ if (!(flags & ICE_AQC_MAN_MAC_LAN_ADDR_VALID)) {
+ ice_debug(hw, ICE_DBG_LAN, "got invalid MAC address\n");
+ return ICE_ERR_CFG;
+ }
+
+ /* A single port can report up to two (LAN and WoL) addresses */
+ for (i = 0; i < cmd->num_addr; i++)
+ if (resp[i].addr_type == ICE_AQC_MAN_MAC_ADDR_TYPE_LAN) {
+ ice_memcpy(hw->port_info->mac.lan_addr,
+ resp[i].mac_addr, ETH_ALEN,
+ ICE_DMA_TO_NONDMA);
+ ice_memcpy(hw->port_info->mac.perm_addr,
+ resp[i].mac_addr,
+ ETH_ALEN, ICE_DMA_TO_NONDMA);
+ break;
+ }
+
+ return ICE_SUCCESS;
+}
+
+/**
+ * ice_aq_get_phy_caps - returns PHY capabilities
+ * @pi: port information structure
+ * @qual_mods: report qualified modules
+ * @report_mode: report mode capabilities
+ * @pcaps: structure for PHY capabilities to be filled
+ * @cd: pointer to command details structure or NULL
+ *
+ * Returns the various PHY capabilities supported on the Port (0x0600)
+ */
+enum ice_status
+ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode,
+ struct ice_aqc_get_phy_caps_data *pcaps,
+ struct ice_sq_cd *cd)
+{
+ struct ice_aqc_get_phy_caps *cmd;
+ u16 pcaps_size = sizeof(*pcaps);
+ struct ice_aq_desc desc;
+ enum ice_status status;
+
+ cmd = &desc.params.get_phy;
+
+ if (!pcaps || (report_mode & ~ICE_AQC_REPORT_MODE_M) || !pi)
+ return ICE_ERR_PARAM;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_phy_caps);
+
+ if (qual_mods)
+ cmd->param0 |= CPU_TO_LE16(ICE_AQC_GET_PHY_RQM);
+
+ cmd->param0 |= CPU_TO_LE16(report_mode);
+ status = ice_aq_send_cmd(pi->hw, &desc, pcaps, pcaps_size, cd);
+
+ if (status == ICE_SUCCESS && report_mode == ICE_AQC_REPORT_TOPO_CAP)
+ pi->phy.phy_type_low = LE64_TO_CPU(pcaps->phy_type_low);
+
+ return status;
+}
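+
+/* Usage sketch (illustrative): callers allocate the response structure,
+ * query the topology capabilities and free it again, much as ice_init_hw()
+ * does further below:
+ *
+ *	pcaps = (struct ice_aqc_get_phy_caps_data *)
+ *		ice_malloc(hw, sizeof(*pcaps));
+ *	if (!pcaps)
+ *		return ICE_ERR_NO_MEMORY;
+ *	status = ice_aq_get_phy_caps(hw->port_info, false,
+ *				     ICE_AQC_REPORT_TOPO_CAP, pcaps, NULL);
+ *	ice_free(hw, pcaps);
+ */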
+
+/**
+ * ice_get_media_type - Gets media type
+ * @pi: port information structure
+ */
+static enum ice_media_type ice_get_media_type(struct ice_port_info *pi)
+{
+ struct ice_link_status *hw_link_info;
+
+ if (!pi)
+ return ICE_MEDIA_UNKNOWN;
+
+ hw_link_info = &pi->phy.link_info;
+
+ if (hw_link_info->phy_type_low) {
+ switch (hw_link_info->phy_type_low) {
+ case ICE_PHY_TYPE_LOW_1000BASE_SX:
+ case ICE_PHY_TYPE_LOW_1000BASE_LX:
+ case ICE_PHY_TYPE_LOW_10GBASE_SR:
+ case ICE_PHY_TYPE_LOW_10GBASE_LR:
+ case ICE_PHY_TYPE_LOW_10G_SFI_C2C:
+ case ICE_PHY_TYPE_LOW_25GBASE_SR:
+ case ICE_PHY_TYPE_LOW_25GBASE_LR:
+ case ICE_PHY_TYPE_LOW_25G_AUI_C2C:
+ case ICE_PHY_TYPE_LOW_40GBASE_SR4:
+ case ICE_PHY_TYPE_LOW_40GBASE_LR4:
+ return ICE_MEDIA_FIBER;
+ case ICE_PHY_TYPE_LOW_100BASE_TX:
+ case ICE_PHY_TYPE_LOW_1000BASE_T:
+ case ICE_PHY_TYPE_LOW_2500BASE_T:
+ case ICE_PHY_TYPE_LOW_5GBASE_T:
+ case ICE_PHY_TYPE_LOW_10GBASE_T:
+ case ICE_PHY_TYPE_LOW_25GBASE_T:
+ return ICE_MEDIA_BASET;
+ case ICE_PHY_TYPE_LOW_10G_SFI_DA:
+ case ICE_PHY_TYPE_LOW_25GBASE_CR:
+ case ICE_PHY_TYPE_LOW_25GBASE_CR_S:
+ case ICE_PHY_TYPE_LOW_25GBASE_CR1:
+ case ICE_PHY_TYPE_LOW_40GBASE_CR4:
+ return ICE_MEDIA_DA;
+ case ICE_PHY_TYPE_LOW_1000BASE_KX:
+ case ICE_PHY_TYPE_LOW_2500BASE_KX:
+ case ICE_PHY_TYPE_LOW_2500BASE_X:
+ case ICE_PHY_TYPE_LOW_5GBASE_KR:
+ case ICE_PHY_TYPE_LOW_10GBASE_KR_CR1:
+ case ICE_PHY_TYPE_LOW_25GBASE_KR:
+ case ICE_PHY_TYPE_LOW_25GBASE_KR1:
+ case ICE_PHY_TYPE_LOW_25GBASE_KR_S:
+ case ICE_PHY_TYPE_LOW_40GBASE_KR4:
+ return ICE_MEDIA_BACKPLANE;
+ }
+ }
+ return ICE_MEDIA_UNKNOWN;
+}
+
+/**
+ * ice_aq_get_link_info
+ * @pi: port information structure
+ * @ena_lse: enable/disable LinkStatusEvent reporting
+ * @link: pointer to link status structure - optional
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get Link Status (0x0607). Returns the link status of the adapter.
+ */
+enum ice_status
+ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
+ struct ice_link_status *link, struct ice_sq_cd *cd)
+{
+ struct ice_link_status *hw_link_info_old, *hw_link_info;
+ struct ice_aqc_get_link_status_data link_data = { 0 };
+ struct ice_aqc_get_link_status *resp;
+ enum ice_media_type *hw_media_type;
+ struct ice_fc_info *hw_fc_info;
+ bool tx_pause, rx_pause;
+ struct ice_aq_desc desc;
+ enum ice_status status;
+ u16 cmd_flags;
+
+ if (!pi)
+ return ICE_ERR_PARAM;
+ hw_link_info_old = &pi->phy.link_info_old;
+ hw_media_type = &pi->phy.media_type;
+ hw_link_info = &pi->phy.link_info;
+ hw_fc_info = &pi->fc;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_link_status);
+ cmd_flags = (ena_lse) ? ICE_AQ_LSE_ENA : ICE_AQ_LSE_DIS;
+ resp = &desc.params.get_link_status;
+ resp->cmd_flags = CPU_TO_LE16(cmd_flags);
+ resp->lport_num = pi->lport;
+
+ status = ice_aq_send_cmd(pi->hw, &desc, &link_data, sizeof(link_data),
+ cd);
+
+ if (status != ICE_SUCCESS)
+ return status;
+
+ /* save off old link status information */
+ *hw_link_info_old = *hw_link_info;
+
+ /* update current link status information */
+ hw_link_info->link_speed = LE16_TO_CPU(link_data.link_speed);
+ hw_link_info->phy_type_low = LE64_TO_CPU(link_data.phy_type_low);
+ *hw_media_type = ice_get_media_type(pi);
+ hw_link_info->link_info = link_data.link_info;
+ hw_link_info->an_info = link_data.an_info;
+ hw_link_info->ext_info = link_data.ext_info;
+ hw_link_info->max_frame_size = LE16_TO_CPU(link_data.max_frame_size);
+ hw_link_info->fec_info = link_data.cfg & ICE_AQ_FEC_MASK;
+ hw_link_info->topo_media_conflict = link_data.topo_media_conflict;
+ hw_link_info->pacing = link_data.cfg & ICE_AQ_CFG_PACING_M;
+
+ /* update fc info */
+ tx_pause = !!(link_data.an_info & ICE_AQ_LINK_PAUSE_TX);
+ rx_pause = !!(link_data.an_info & ICE_AQ_LINK_PAUSE_RX);
+ if (tx_pause && rx_pause)
+ hw_fc_info->current_mode = ICE_FC_FULL;
+ else if (tx_pause)
+ hw_fc_info->current_mode = ICE_FC_TX_PAUSE;
+ else if (rx_pause)
+ hw_fc_info->current_mode = ICE_FC_RX_PAUSE;
+ else
+ hw_fc_info->current_mode = ICE_FC_NONE;
+
+ hw_link_info->lse_ena =
+ !!(resp->cmd_flags & CPU_TO_LE16(ICE_AQ_LSE_IS_ENABLED));
+
+ /* save link status information */
+ if (link)
+ *link = *hw_link_info;
+
+ /* flag cleared so calling functions don't call AQ again */
+ pi->phy.get_link_info = false;
+
+ return status;
+}
+
+/**
+ * ice_init_flex_flags
+ * @hw: pointer to the hardware structure
+ * @prof_id: Rx Descriptor Builder profile ID
+ *
+ * Function to initialize Rx flex flags
+ */
+static void ice_init_flex_flags(struct ice_hw *hw, enum ice_rxdid prof_id)
+{
+ u8 idx = 0;
+
+ /* Flex-flag fields (0-2) are programmed with FLG64 bits with layout:
+ * flexiflags0[5:0] - TCP flags, is_packet_fragmented, is_packet_UDP_GRE
+ * flexiflags1[3:0] - Not used for flag programming
+ * flexiflags2[7:0] - Tunnel and VLAN types
+ * 2 invalid fields in last index
+ */
+ switch (prof_id) {
+ /* Rx flex flags are currently programmed for the NIC profiles only.
+ * Different flag bit programming configurations can be added per
+ * profile as needed.
+ */
+ case ICE_RXDID_FLEX_NIC:
+ case ICE_RXDID_FLEX_NIC_2:
+ ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_PKT_FRG,
+ ICE_RXFLG_UDP_GRE, ICE_RXFLG_PKT_DSI,
+ ICE_RXFLG_FIN, idx++);
+ /* flex flag 1 is not used for flexi-flag programming, skipping
+ * these four FLG64 bits.
+ */
+ ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_SYN, ICE_RXFLG_RST,
+ ICE_RXFLG_PKT_DSI, ICE_RXFLG_PKT_DSI, idx++);
+ ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_PKT_DSI,
+ ICE_RXFLG_PKT_DSI, ICE_RXFLG_EVLAN_x8100,
+ ICE_RXFLG_EVLAN_x9100, idx++);
+ ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_VLAN_x8100,
+ ICE_RXFLG_TNL_VLAN, ICE_RXFLG_TNL_MAC,
+ ICE_RXFLG_TNL0, idx++);
+ ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_TNL1, ICE_RXFLG_TNL2,
+ ICE_RXFLG_PKT_DSI, ICE_RXFLG_PKT_DSI, idx);
+ break;
+
+ default:
+ ice_debug(hw, ICE_DBG_INIT,
+ "Flag programming for profile ID %d not supported\n",
+ prof_id);
+ }
+}
+
+/**
+ * ice_init_flex_flds
+ * @hw: pointer to the hardware structure
+ * @prof_id: Rx Descriptor Builder profile ID
+ *
+ * Function to initialize flex descriptors
+ */
+static void ice_init_flex_flds(struct ice_hw *hw, enum ice_rxdid prof_id)
+{
+ enum ice_flex_rx_mdid mdid;
+
+ switch (prof_id) {
+ case ICE_RXDID_FLEX_NIC:
+ case ICE_RXDID_FLEX_NIC_2:
+ ICE_PROG_FLEX_ENTRY(hw, prof_id, ICE_RX_MDID_HASH_LOW, 0);
+ ICE_PROG_FLEX_ENTRY(hw, prof_id, ICE_RX_MDID_HASH_HIGH, 1);
+ ICE_PROG_FLEX_ENTRY(hw, prof_id, ICE_RX_MDID_FLOW_ID_LOWER, 2);
+
+ mdid = (prof_id == ICE_RXDID_FLEX_NIC_2) ?
+ ICE_RX_MDID_SRC_VSI : ICE_RX_MDID_FLOW_ID_HIGH;
+
+ ICE_PROG_FLEX_ENTRY(hw, prof_id, mdid, 3);
+
+ ice_init_flex_flags(hw, prof_id);
+ break;
+
+ default:
+ ice_debug(hw, ICE_DBG_INIT,
+ "Field init for profile ID %d not supported\n",
+ prof_id);
+ }
+}
+
+/**
+ * ice_init_fltr_mgmt_struct - initializes filter management list and locks
+ * @hw: pointer to the hw struct
+ */
+static enum ice_status ice_init_fltr_mgmt_struct(struct ice_hw *hw)
+{
+ struct ice_switch_info *sw;
+
+ hw->switch_info = (struct ice_switch_info *)
+ ice_malloc(hw, sizeof(*hw->switch_info));
+ sw = hw->switch_info;
+
+ if (!sw)
+ return ICE_ERR_NO_MEMORY;
+
+ INIT_LIST_HEAD(&sw->vsi_list_map_head);
+
+ return ice_init_def_sw_recp(hw);
+}
+
+/**
+ * ice_cleanup_fltr_mgmt_struct - cleanup filter management list and locks
+ * @hw: pointer to the hw struct
+ */
+static void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw)
+{
+ struct ice_switch_info *sw = hw->switch_info;
+ struct ice_vsi_list_map_info *v_pos_map;
+ struct ice_vsi_list_map_info *v_tmp_map;
+ struct ice_sw_recipe *recps;
+ u8 i;
+
+ LIST_FOR_EACH_ENTRY_SAFE(v_pos_map, v_tmp_map, &sw->vsi_list_map_head,
+ ice_vsi_list_map_info, list_entry) {
+ LIST_DEL(&v_pos_map->list_entry);
+ ice_free(hw, v_pos_map);
+ }
+ recps = hw->switch_info->recp_list;
+ for (i = 0; i < ICE_SW_LKUP_LAST; i++) {
+ recps[i].root_rid = i;
+
+ if (recps[i].adv_rule) {
+ struct ice_adv_fltr_mgmt_list_entry *tmp_entry;
+ struct ice_adv_fltr_mgmt_list_entry *lst_itr;
+
+ ice_destroy_lock(&recps[i].filt_rule_lock);
+ LIST_FOR_EACH_ENTRY_SAFE(lst_itr, tmp_entry,
+ &recps[i].filt_rules,
+ ice_adv_fltr_mgmt_list_entry,
+ list_entry) {
+ LIST_DEL(&lst_itr->list_entry);
+ ice_free(hw, lst_itr->lkups);
+ ice_free(hw, lst_itr);
+ }
+ } else {
+ struct ice_fltr_mgmt_list_entry *lst_itr, *tmp_entry;
+
+ ice_destroy_lock(&recps[i].filt_rule_lock);
+ LIST_FOR_EACH_ENTRY_SAFE(lst_itr, tmp_entry,
+ &recps[i].filt_rules,
+ ice_fltr_mgmt_list_entry,
+ list_entry) {
+ LIST_DEL(&lst_itr->list_entry);
+ ice_free(hw, lst_itr);
+ }
+ }
+ }
+ ice_rm_all_sw_replay_rule_info(hw);
+ ice_free(hw, sw->recp_list);
+ ice_free(hw, sw);
+}
+
+#define ICE_FW_LOG_DESC_SIZE(n) (sizeof(struct ice_aqc_fw_logging_data) + \
+ (((n) - 1) * sizeof(((struct ice_aqc_fw_logging_data *)0)->entry)))
+#define ICE_FW_LOG_DESC_SIZE_MAX \
+ ICE_FW_LOG_DESC_SIZE(ICE_AQC_FW_LOG_ID_MAX)
+
+/**
+ * ice_cfg_fw_log - configure FW logging
+ * @hw: pointer to the hw struct
+ * @enable: enable certain FW logging events if true, disable all if false
+ *
+ * This function enables/disables the FW logging via Rx CQ events and a UART
+ * port based on predetermined configurations. FW logging via the Rx CQ can be
+ * enabled/disabled for individual PFs. However, FW logging via the UART can
+ * only be enabled/disabled for all PFs on the same device.
+ *
+ * To enable overall FW logging, the "cq_en" and "uart_en" enable bits in
+ * hw->fw_log need to be set accordingly, e.g. based on user-provided input,
+ * before initializing the device.
+ *
+ * When re/configuring FW logging, callers need to update the "cfg" elements of
+ * the hw->fw_log.evnts array with the desired logging event configurations for
+ * modules of interest. When disabling FW logging completely, the callers can
+ * just pass false in the "enable" parameter. On completion, the function will
+ * update the "cur" element of the hw->fw_log.evnts array with the resulting
+ * logging event configurations of the modules that are being re/configured. FW
+ * logging modules that are not part of a reconfiguration operation retain their
+ * previous states.
+ *
+ * Before resetting the device, it is recommended that the driver disables FW
+ * logging before shutting down the control queue. When disabling FW logging
+ * ("enable" = false), the latest configurations of FW logging events stored in
+ * hw->fw_log.evnts[] are not overridden to allow them to be reconfigured after
+ * a device reset.
+ *
+ * When enabling FW logging to emit log messages via the Rx CQ during the
+ * device's initialization phase, a mechanism alternative to interrupt handlers
+ * needs to be used to extract FW log messages from the Rx CQ periodically and
+ * to prevent the Rx CQ from being full and stalling other types of control
+ * messages from FW to SW. Interrupts are typically disabled during the device's
+ * initialization phase.
+ */
+static enum ice_status ice_cfg_fw_log(struct ice_hw *hw, bool enable)
+{
+ struct ice_aqc_fw_logging_data *data = NULL;
+ struct ice_aqc_fw_logging *cmd;
+ enum ice_status status = ICE_SUCCESS;
+ u16 i, chgs = 0, len = 0;
+ struct ice_aq_desc desc;
+ u8 actv_evnts = 0;
+ void *buf = NULL;
+
+ if (!hw->fw_log.cq_en && !hw->fw_log.uart_en)
+ return ICE_SUCCESS;
+
+ /* Disable FW logging only when the control queue is still responsive */
+ if (!enable &&
+ (!hw->fw_log.actv_evnts || !ice_check_sq_alive(hw, &hw->adminq)))
+ return ICE_SUCCESS;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_fw_logging);
+ cmd = &desc.params.fw_logging;
+
+ /* Indicate which controls are valid */
+ if (hw->fw_log.cq_en)
+ cmd->log_ctrl_valid |= ICE_AQC_FW_LOG_AQ_VALID;
+
+ if (hw->fw_log.uart_en)
+ cmd->log_ctrl_valid |= ICE_AQC_FW_LOG_UART_VALID;
+
+ if (enable) {
+ /* Fill in an array of entries with FW logging modules and
+ * logging events being reconfigured.
+ */
+ for (i = 0; i < ICE_AQC_FW_LOG_ID_MAX; i++) {
+ u16 val;
+
+ /* Keep track of enabled event types */
+ actv_evnts |= hw->fw_log.evnts[i].cfg;
+
+ if (hw->fw_log.evnts[i].cfg == hw->fw_log.evnts[i].cur)
+ continue;
+
+ if (!data) {
+ data = (struct ice_aqc_fw_logging_data *)
+ ice_malloc(hw,
+ ICE_FW_LOG_DESC_SIZE_MAX);
+ if (!data)
+ return ICE_ERR_NO_MEMORY;
+ }
+
+ val = i << ICE_AQC_FW_LOG_ID_S;
+ val |= hw->fw_log.evnts[i].cfg << ICE_AQC_FW_LOG_EN_S;
+ data->entry[chgs++] = CPU_TO_LE16(val);
+ }
+
+ /* Only enable FW logging if at least one module is specified.
+		 * If FW logging is currently enabled but no modules are
+		 * enabled to emit log messages, disable FW logging altogether.
+ */
+ if (actv_evnts) {
+ /* Leave if there is effectively no change */
+ if (!chgs)
+ goto out;
+
+ if (hw->fw_log.cq_en)
+ cmd->log_ctrl |= ICE_AQC_FW_LOG_AQ_EN;
+
+ if (hw->fw_log.uart_en)
+ cmd->log_ctrl |= ICE_AQC_FW_LOG_UART_EN;
+
+ buf = data;
+ len = ICE_FW_LOG_DESC_SIZE(chgs);
+ desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+ }
+ }
+
+ status = ice_aq_send_cmd(hw, &desc, buf, len, NULL);
+ if (!status) {
+ /* Update the current configuration to reflect events enabled.
+ * hw->fw_log.cq_en and hw->fw_log.uart_en indicate if the FW
+ * logging mode is enabled for the device. They do not reflect
+ * actual modules being enabled to emit log messages. So, their
+ * values remain unchanged even when all modules are disabled.
+ */
+ u16 cnt = enable ? chgs : (u16)ICE_AQC_FW_LOG_ID_MAX;
+
+ hw->fw_log.actv_evnts = actv_evnts;
+ for (i = 0; i < cnt; i++) {
+ u16 v, m;
+
+ if (!enable) {
+ /* When disabling all FW logging events as part
+ * of device's de-initialization, the original
+ * configurations are retained, and can be used
+ * to reconfigure FW logging later if the device
+ * is re-initialized.
+ */
+ hw->fw_log.evnts[i].cur = 0;
+ continue;
+ }
+
+ v = LE16_TO_CPU(data->entry[i]);
+ m = (v & ICE_AQC_FW_LOG_ID_M) >> ICE_AQC_FW_LOG_ID_S;
+ hw->fw_log.evnts[m].cur = hw->fw_log.evnts[m].cfg;
+ }
+ }
+
+out:
+ if (data)
+ ice_free(hw, data);
+
+ return status;
+}
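+
+/* Configuration sketch (illustrative): per the description above, a caller
+ * that wants FW log messages on the Rx CQ sets the enable bits before device
+ * init; "wanted_events" below is hypothetical:
+ *
+ *	u16 i;
+ *
+ *	hw->fw_log.cq_en = true;
+ *	for (i = 0; i < ICE_AQC_FW_LOG_ID_MAX; i++)
+ *		hw->fw_log.evnts[i].cfg = wanted_events;
+ *	status = ice_cfg_fw_log(hw, true);
+ */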
+
+/**
+ * ice_output_fw_log
+ * @hw: pointer to the hw struct
+ * @desc: pointer to the AQ message descriptor
+ * @buf: pointer to the buffer accompanying the AQ message
+ *
+ * Formats a FW Log message and outputs it via the standard driver logs.
+ */
+void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf)
+{
+ ice_debug(hw, ICE_DBG_AQ_MSG, "[ FW Log Msg Start ]\n");
+ ice_debug_array(hw, ICE_DBG_AQ_MSG, 16, 1, (u8 *)buf,
+ LE16_TO_CPU(desc->datalen));
+ ice_debug(hw, ICE_DBG_AQ_MSG, "[ FW Log Msg End ]\n");
+}
+
+/**
+ * ice_get_itr_intrl_gran - determine int/intrl granularity
+ * @hw: pointer to the hw struct
+ *
+ * Determines the itr/intrl granularities based on the maximum aggregate
+ * bandwidth according to the device's configuration during power-on.
+ */
+static enum ice_status ice_get_itr_intrl_gran(struct ice_hw *hw)
+{
+ u8 max_agg_bw = (rd32(hw, GL_PWR_MODE_CTL) &
+ GL_PWR_MODE_CTL_CAR_MAX_BW_M) >>
+ GL_PWR_MODE_CTL_CAR_MAX_BW_S;
+
+ switch (max_agg_bw) {
+ case ICE_MAX_AGG_BW_200G:
+ case ICE_MAX_AGG_BW_100G:
+ case ICE_MAX_AGG_BW_50G:
+ hw->itr_gran = ICE_ITR_GRAN_ABOVE_25;
+ hw->intrl_gran = ICE_INTRL_GRAN_ABOVE_25;
+ break;
+ case ICE_MAX_AGG_BW_25G:
+ hw->itr_gran = ICE_ITR_GRAN_MAX_25;
+ hw->intrl_gran = ICE_INTRL_GRAN_MAX_25;
+ break;
+ default:
+ ice_debug(hw, ICE_DBG_INIT,
+ "Failed to determine itr/intrl granularity\n");
+ return ICE_ERR_CFG;
+ }
+
+ return ICE_SUCCESS;
+}
+
+/**
+ * ice_init_hw - main hardware initialization routine
+ * @hw: pointer to the hardware structure
+ */
+enum ice_status ice_init_hw(struct ice_hw *hw)
+{
+ struct ice_aqc_get_phy_caps_data *pcaps;
+ enum ice_status status;
+ u16 mac_buf_len;
+ void *mac_buf;
+
+ ice_debug(hw, ICE_DBG_TRACE, "ice_init_hw");
+
+ /* Set MAC type based on DeviceID */
+ status = ice_set_mac_type(hw);
+ if (status)
+ return status;
+
+ hw->pf_id = (u8)(rd32(hw, PF_FUNC_RID) &
+ PF_FUNC_RID_FUNCTION_NUMBER_M) >>
+ PF_FUNC_RID_FUNCTION_NUMBER_S;
+
+ status = ice_reset(hw, ICE_RESET_PFR);
+ if (status)
+ return status;
+
+ status = ice_get_itr_intrl_gran(hw);
+ if (status)
+ return status;
+
+ status = ice_init_all_ctrlq(hw);
+ if (status)
+ goto err_unroll_cqinit;
+
+ /* Enable FW logging. Not fatal if this fails. */
+ status = ice_cfg_fw_log(hw, true);
+ if (status)
+ ice_debug(hw, ICE_DBG_INIT, "Failed to enable FW logging.\n");
+
+ status = ice_clear_pf_cfg(hw);
+ if (status)
+ goto err_unroll_cqinit;
+
+ ice_clear_pxe_mode(hw);
+
+ status = ice_init_nvm(hw);
+ if (status)
+ goto err_unroll_cqinit;
+
+ status = ice_get_caps(hw);
+ if (status)
+ goto err_unroll_cqinit;
+
+ hw->port_info = (struct ice_port_info *)
+ ice_malloc(hw, sizeof(*hw->port_info));
+ if (!hw->port_info) {
+ status = ICE_ERR_NO_MEMORY;
+ goto err_unroll_cqinit;
+ }
+
+ /* set the back pointer to hw */
+ hw->port_info->hw = hw;
+
+ /* Initialize port_info struct with switch configuration data */
+ status = ice_get_initial_sw_cfg(hw);
+ if (status)
+ goto err_unroll_alloc;
+
+ hw->evb_veb = true;
+
+ /* Query the allocated resources for Tx scheduler */
+ status = ice_sched_query_res_alloc(hw);
+ if (status) {
+ ice_debug(hw, ICE_DBG_SCHED,
+ "Failed to get scheduler allocated resources\n");
+ goto err_unroll_alloc;
+ }
+
+ /* Initialize port_info struct with scheduler data */
+ status = ice_sched_init_port(hw->port_info);
+ if (status)
+ goto err_unroll_sched;
+
+ pcaps = (struct ice_aqc_get_phy_caps_data *)
+ ice_malloc(hw, sizeof(*pcaps));
+ if (!pcaps) {
+ status = ICE_ERR_NO_MEMORY;
+ goto err_unroll_sched;
+ }
+
+ /* Initialize port_info struct with PHY capabilities */
+ status = ice_aq_get_phy_caps(hw->port_info, false,
+ ICE_AQC_REPORT_TOPO_CAP, pcaps, NULL);
+ ice_free(hw, pcaps);
+ if (status)
+ goto err_unroll_sched;
+
+ /* Initialize port_info struct with link information */
+ status = ice_aq_get_link_info(hw->port_info, false, NULL, NULL);
+ if (status)
+ goto err_unroll_sched;
+ /* need a valid SW entry point to build a Tx tree */
+ if (!hw->sw_entry_point_layer) {
+ ice_debug(hw, ICE_DBG_SCHED, "invalid sw entry point\n");
+ status = ICE_ERR_CFG;
+ goto err_unroll_sched;
+ }
+ INIT_LIST_HEAD(&hw->agg_list);
+
+ status = ice_init_fltr_mgmt_struct(hw);
+ if (status)
+ goto err_unroll_sched;
+
+#if defined(FPGA_SUPPORT) || defined(CVL_A0_SUPPORT)
+ /* some of the register write workarounds to get Rx working */
+ ice_dev_onetime_setup(hw);
+#endif /* FPGA_SUPPORT || CVL_A0_SUPPORT */
+
+ /* Get MAC information */
+ /* A single port can report up to two (LAN and WoL) addresses */
+ mac_buf = ice_calloc(hw, 2,
+ sizeof(struct ice_aqc_manage_mac_read_resp));
+ mac_buf_len = 2 * sizeof(struct ice_aqc_manage_mac_read_resp);
+
+ if (!mac_buf) {
+ status = ICE_ERR_NO_MEMORY;
+ goto err_unroll_fltr_mgmt_struct;
+ }
+
+ status = ice_aq_manage_mac_read(hw, mac_buf, mac_buf_len, NULL);
+ ice_free(hw, mac_buf);
+
+ if (status)
+ goto err_unroll_fltr_mgmt_struct;
+
+ ice_init_flex_flds(hw, ICE_RXDID_FLEX_NIC);
+ ice_init_flex_flds(hw, ICE_RXDID_FLEX_NIC_2);
+
+ return ICE_SUCCESS;
+
+err_unroll_fltr_mgmt_struct:
+ ice_cleanup_fltr_mgmt_struct(hw);
+err_unroll_sched:
+ ice_sched_cleanup_all(hw);
+err_unroll_alloc:
+ ice_free(hw, hw->port_info);
+ hw->port_info = NULL;
+err_unroll_cqinit:
+ ice_shutdown_all_ctrlq(hw);
+ return status;
+}
+
+/**
+ * ice_deinit_hw - unroll initialization operations done by ice_init_hw
+ * @hw: pointer to the hardware structure
+ *
+ * This should be called only during nominal operation, not as a result of
+ * ice_init_hw() failing since ice_init_hw() will take care of unrolling
+ * applicable initializations if it fails for any reason.
+ */
+void ice_deinit_hw(struct ice_hw *hw)
+{
+ ice_cleanup_fltr_mgmt_struct(hw);
+
+ ice_sched_cleanup_all(hw);
+ ice_sched_clear_agg(hw);
+
+ if (hw->port_info) {
+ ice_free(hw, hw->port_info);
+ hw->port_info = NULL;
+ }
+
+ /* Attempt to disable FW logging before shutting down control queues */
+ ice_cfg_fw_log(hw, false);
+ ice_shutdown_all_ctrlq(hw);
+
+ /* Clear VSI contexts if not already cleared */
+ ice_clear_all_vsi_ctx(hw);
+}
+
+/**
+ * ice_check_reset - Check to see if a global reset is complete
+ * @hw: pointer to the hardware structure
+ */
+enum ice_status ice_check_reset(struct ice_hw *hw)
+{
+ u32 cnt, reg = 0, grst_delay;
+
+ /* Poll for Device Active state in case a recent CORER, GLOBR,
+ * or EMPR has occurred. The grst delay value is in 100ms units.
+ * Add 1sec for outstanding AQ commands that can take a long time.
+ */
+#define GLGEN_RSTCTL 0x000B8180 /* Reset Source: POR */
+#define GLGEN_RSTCTL_GRSTDEL_S 0
+#define GLGEN_RSTCTL_GRSTDEL_M MAKEMASK(0x3F, GLGEN_RSTCTL_GRSTDEL_S)
+ grst_delay = ((rd32(hw, GLGEN_RSTCTL) & GLGEN_RSTCTL_GRSTDEL_M) >>
+ GLGEN_RSTCTL_GRSTDEL_S) + 10;
+
+ for (cnt = 0; cnt < grst_delay; cnt++) {
+ ice_msec_delay(100, true);
+ reg = rd32(hw, GLGEN_RSTAT);
+ if (!(reg & GLGEN_RSTAT_DEVSTATE_M))
+ break;
+ }
+
+ if (cnt == grst_delay) {
+ ice_debug(hw, ICE_DBG_INIT,
+ "Global reset polling failed to complete.\n");
+ return ICE_ERR_RESET_FAILED;
+ }
+
+#define ICE_RESET_DONE_MASK (GLNVM_ULD_CORER_DONE_M | \
+ GLNVM_ULD_GLOBR_DONE_M)
+
+ /* Device is Active; check Global Reset processes are done */
+ for (cnt = 0; cnt < ICE_PF_RESET_WAIT_COUNT; cnt++) {
+ reg = rd32(hw, GLNVM_ULD) & ICE_RESET_DONE_MASK;
+ if (reg == ICE_RESET_DONE_MASK) {
+ ice_debug(hw, ICE_DBG_INIT,
+ "Global reset processes done. %d\n", cnt);
+ break;
+ }
+ ice_msec_delay(10, true);
+ }
+
+ if (cnt == ICE_PF_RESET_WAIT_COUNT) {
+ ice_debug(hw, ICE_DBG_INIT,
+ "Wait for Reset Done timed out. GLNVM_ULD = 0x%x\n",
+ reg);
+ return ICE_ERR_RESET_FAILED;
+ }
+
+ return ICE_SUCCESS;
+}
+
+/**
+ * ice_pf_reset - Reset the PF
+ * @hw: pointer to the hardware structure
+ *
+ * If a global reset has been triggered, this function checks
+ * for its completion and then issues the PF reset
+ */
+static enum ice_status ice_pf_reset(struct ice_hw *hw)
+{
+ u32 cnt, reg;
+
+ /* If at function entry a global reset was already in progress, i.e.
+ * state is not 'device active' or any of the reset done bits are not
+ * set in GLNVM_ULD, there is no need for a PF Reset; poll until the
+ * global reset is done.
+ */
+ if ((rd32(hw, GLGEN_RSTAT) & GLGEN_RSTAT_DEVSTATE_M) ||
+ (rd32(hw, GLNVM_ULD) & ICE_RESET_DONE_MASK) ^ ICE_RESET_DONE_MASK) {
+ /* poll on global reset currently in progress until done */
+ if (ice_check_reset(hw))
+ return ICE_ERR_RESET_FAILED;
+
+ return ICE_SUCCESS;
+ }
+
+ /* Reset the PF */
+ reg = rd32(hw, PFGEN_CTRL);
+
+ wr32(hw, PFGEN_CTRL, (reg | PFGEN_CTRL_PFSWR_M));
+
+ for (cnt = 0; cnt < ICE_PF_RESET_WAIT_COUNT; cnt++) {
+ reg = rd32(hw, PFGEN_CTRL);
+ if (!(reg & PFGEN_CTRL_PFSWR_M))
+ break;
+
+ ice_msec_delay(1, true);
+ }
+
+ if (cnt == ICE_PF_RESET_WAIT_COUNT) {
+ ice_debug(hw, ICE_DBG_INIT,
+ "PF reset polling failed to complete.\n");
+ return ICE_ERR_RESET_FAILED;
+ }
+
+ return ICE_SUCCESS;
+}
+
+/**
+ * ice_reset - Perform different types of reset
+ * @hw: pointer to the hardware structure
+ * @req: reset request
+ *
+ * This function triggers a reset as specified by the req parameter.
+ *
+ * Note:
+ * If anything other than a PF reset is triggered, PXE mode is restored.
+ * This has to be cleared using ice_clear_pxe_mode again, once the AQ
+ * interface has been restored in the rebuild flow.
+ */
+enum ice_status ice_reset(struct ice_hw *hw, enum ice_reset_req req)
+{
+ u32 val = 0;
+
+ switch (req) {
+ case ICE_RESET_PFR:
+ return ice_pf_reset(hw);
+ case ICE_RESET_CORER:
+ ice_debug(hw, ICE_DBG_INIT, "CoreR requested\n");
+ val = GLGEN_RTRIG_CORER_M;
+ break;
+ case ICE_RESET_GLOBR:
+ ice_debug(hw, ICE_DBG_INIT, "GlobalR requested\n");
+ val = GLGEN_RTRIG_GLOBR_M;
+ break;
+ default:
+ return ICE_ERR_PARAM;
+ }
+
+ val |= rd32(hw, GLGEN_RTRIG);
+ wr32(hw, GLGEN_RTRIG, val);
+ ice_flush(hw);
+
+ /* wait for the FW to be ready */
+ return ice_check_reset(hw);
+}
+
+/**
+ * ice_copy_rxq_ctx_to_hw
+ * @hw: pointer to the hardware structure
+ * @ice_rxq_ctx: pointer to the rxq context
+ * @rxq_index: the index of the Rx queue
+ *
+ * Copies rxq context from dense structure to hw register space
+ */
+static enum ice_status
+ice_copy_rxq_ctx_to_hw(struct ice_hw *hw, u8 *ice_rxq_ctx, u32 rxq_index)
+{
+ u8 i;
+
+ if (!ice_rxq_ctx)
+ return ICE_ERR_BAD_PTR;
+
+ if (rxq_index > QRX_CTRL_MAX_INDEX)
+ return ICE_ERR_PARAM;
+
+ /* Copy each dword separately to hw */
+ for (i = 0; i < ICE_RXQ_CTX_SIZE_DWORDS; i++) {
+ wr32(hw, QRX_CONTEXT(i, rxq_index),
+ *((u32 *)(ice_rxq_ctx + (i * sizeof(u32)))));
+
+ ice_debug(hw, ICE_DBG_QCTX, "qrxdata[%d]: %08X\n", i,
+ *((u32 *)(ice_rxq_ctx + (i * sizeof(u32)))));
+ }
+
+ return ICE_SUCCESS;
+}
+
+/* LAN Rx Queue Context */
+static const struct ice_ctx_ele ice_rlan_ctx_info[] = {
+ /* Field Width LSB */
+ ICE_CTX_STORE(ice_rlan_ctx, head, 13, 0),
+ ICE_CTX_STORE(ice_rlan_ctx, cpuid, 8, 13),
+ ICE_CTX_STORE(ice_rlan_ctx, base, 57, 32),
+ ICE_CTX_STORE(ice_rlan_ctx, qlen, 13, 89),
+ ICE_CTX_STORE(ice_rlan_ctx, dbuf, 7, 102),
+ ICE_CTX_STORE(ice_rlan_ctx, hbuf, 5, 109),
+ ICE_CTX_STORE(ice_rlan_ctx, dtype, 2, 114),
+ ICE_CTX_STORE(ice_rlan_ctx, dsize, 1, 116),
+ ICE_CTX_STORE(ice_rlan_ctx, crcstrip, 1, 117),
+ ICE_CTX_STORE(ice_rlan_ctx, l2tsel, 1, 119),
+ ICE_CTX_STORE(ice_rlan_ctx, hsplit_0, 4, 120),
+ ICE_CTX_STORE(ice_rlan_ctx, hsplit_1, 2, 124),
+ ICE_CTX_STORE(ice_rlan_ctx, showiv, 1, 127),
+ ICE_CTX_STORE(ice_rlan_ctx, rxmax, 14, 174),
+ ICE_CTX_STORE(ice_rlan_ctx, tphrdesc_ena, 1, 193),
+ ICE_CTX_STORE(ice_rlan_ctx, tphwdesc_ena, 1, 194),
+ ICE_CTX_STORE(ice_rlan_ctx, tphdata_ena, 1, 195),
+ ICE_CTX_STORE(ice_rlan_ctx, tphhead_ena, 1, 196),
+ ICE_CTX_STORE(ice_rlan_ctx, lrxqthresh, 3, 198),
+ { 0 }
+};
+
+/**
+ * ice_write_rxq_ctx
+ * @hw: pointer to the hardware structure
+ * @rlan_ctx: pointer to the rxq context
+ * @rxq_index: the index of the Rx queue
+ *
+ * Converts rxq context from sparse to dense structure and then writes
+ * it to hw register space
+ */
+enum ice_status
+ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
+ u32 rxq_index)
+{
+ u8 ctx_buf[ICE_RXQ_CTX_SZ] = { 0 };
+
+ ice_set_ctx((u8 *)rlan_ctx, ctx_buf, ice_rlan_ctx_info);
+ return ice_copy_rxq_ctx_to_hw(hw, ctx_buf, rxq_index);
+}
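+
+/* Usage sketch (illustrative): an Rx queue setup path fills the sparse
+ * context and lets ice_write_rxq_ctx() pack and program it; ring_dma,
+ * nb_desc and buf_size are hypothetical, and the base/dbuf fields are
+ * assumed to be programmed in 128-byte units:
+ *
+ *	struct ice_rlan_ctx rlan_ctx = { 0 };
+ *
+ *	rlan_ctx.base = ring_dma >> 7;
+ *	rlan_ctx.qlen = nb_desc;
+ *	rlan_ctx.dbuf = buf_size >> 7;
+ *	status = ice_write_rxq_ctx(hw, &rlan_ctx, rxq_index);
+ */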
+
+#if !defined(NO_UNUSED_CTX_CODE) || defined(AE_DRIVER)
+/**
+ * ice_clear_rxq_ctx
+ * @hw: pointer to the hardware structure
+ * @rxq_index: the index of the Rx queue to clear
+ *
+ * Clears rxq context in hw register space
+ */
+enum ice_status ice_clear_rxq_ctx(struct ice_hw *hw, u32 rxq_index)
+{
+ u8 i;
+
+ if (rxq_index > QRX_CTRL_MAX_INDEX)
+ return ICE_ERR_PARAM;
+
+ /* Clear each dword register separately */
+ for (i = 0; i < ICE_RXQ_CTX_SIZE_DWORDS; i++)
+ wr32(hw, QRX_CONTEXT(i, rxq_index), 0);
+
+ return ICE_SUCCESS;
+}
+#endif /* !NO_UNUSED_CTX_CODE || AE_DRIVER */
+
+/* LAN Tx Queue Context */
+const struct ice_ctx_ele ice_tlan_ctx_info[] = {
+ /* Field Width LSB */
+ ICE_CTX_STORE(ice_tlan_ctx, base, 57, 0),
+ ICE_CTX_STORE(ice_tlan_ctx, port_num, 3, 57),
+ ICE_CTX_STORE(ice_tlan_ctx, cgd_num, 5, 60),
+ ICE_CTX_STORE(ice_tlan_ctx, pf_num, 3, 65),
+ ICE_CTX_STORE(ice_tlan_ctx, vmvf_num, 10, 68),
+ ICE_CTX_STORE(ice_tlan_ctx, vmvf_type, 2, 78),
+ ICE_CTX_STORE(ice_tlan_ctx, src_vsi, 10, 80),
+ ICE_CTX_STORE(ice_tlan_ctx, tsyn_ena, 1, 90),
+ ICE_CTX_STORE(ice_tlan_ctx, alt_vlan, 1, 92),
+ ICE_CTX_STORE(ice_tlan_ctx, cpuid, 8, 93),
+ ICE_CTX_STORE(ice_tlan_ctx, wb_mode, 1, 101),
+ ICE_CTX_STORE(ice_tlan_ctx, tphrd_desc, 1, 102),
+ ICE_CTX_STORE(ice_tlan_ctx, tphrd, 1, 103),
+ ICE_CTX_STORE(ice_tlan_ctx, tphwr_desc, 1, 104),
+ ICE_CTX_STORE(ice_tlan_ctx, cmpq_id, 9, 105),
+ ICE_CTX_STORE(ice_tlan_ctx, qnum_in_func, 14, 114),
+ ICE_CTX_STORE(ice_tlan_ctx, itr_notification_mode, 1, 128),
+ ICE_CTX_STORE(ice_tlan_ctx, adjust_prof_id, 6, 129),
+ ICE_CTX_STORE(ice_tlan_ctx, qlen, 13, 135),
+ ICE_CTX_STORE(ice_tlan_ctx, quanta_prof_idx, 4, 148),
+ ICE_CTX_STORE(ice_tlan_ctx, tso_ena, 1, 152),
+ ICE_CTX_STORE(ice_tlan_ctx, tso_qnum, 11, 153),
+ ICE_CTX_STORE(ice_tlan_ctx, legacy_int, 1, 164),
+ ICE_CTX_STORE(ice_tlan_ctx, drop_ena, 1, 165),
+ ICE_CTX_STORE(ice_tlan_ctx, cache_prof_idx, 2, 166),
+ ICE_CTX_STORE(ice_tlan_ctx, pkt_shaper_prof_idx, 3, 168),
+ ICE_CTX_STORE(ice_tlan_ctx, int_q_state, 110, 171),
+ { 0 }
+};
+
+#if !defined(NO_UNUSED_CTX_CODE) || defined(AE_DRIVER)
+/**
+ * ice_copy_tx_cmpltnq_ctx_to_hw
+ * @hw: pointer to the hardware structure
+ * @ice_tx_cmpltnq_ctx: pointer to the Tx completion queue context
+ * @tx_cmpltnq_index: the index of the completion queue
+ *
+ * Copies Tx completion q context from dense structure to hw register space
+ */
+static enum ice_status
+ice_copy_tx_cmpltnq_ctx_to_hw(struct ice_hw *hw, u8 *ice_tx_cmpltnq_ctx,
+ u32 tx_cmpltnq_index)
+{
+ u8 i;
+
+ if (!ice_tx_cmpltnq_ctx)
+ return ICE_ERR_BAD_PTR;
+
+ if (tx_cmpltnq_index > GLTCLAN_CQ_CNTX0_MAX_INDEX)
+ return ICE_ERR_PARAM;
+
+ /* Copy each dword separately to hw */
+ for (i = 0; i < ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS; i++) {
+ wr32(hw, GLTCLAN_CQ_CNTX(i, tx_cmpltnq_index),
+ *((u32 *)(ice_tx_cmpltnq_ctx + (i * sizeof(u32)))));
+
+ ice_debug(hw, ICE_DBG_QCTX, "cmpltnqdata[%d]: %08X\n", i,
+ *((u32 *)(ice_tx_cmpltnq_ctx + (i * sizeof(u32)))));
+ }
+
+ return ICE_SUCCESS;
+}
+
+/* LAN Tx Completion Queue Context */
+static const struct ice_ctx_ele ice_tx_cmpltnq_ctx_info[] = {
+ /* Field Width LSB */
+ ICE_CTX_STORE(ice_tx_cmpltnq_ctx, base, 57, 0),
+ ICE_CTX_STORE(ice_tx_cmpltnq_ctx, q_len, 18, 64),
+ ICE_CTX_STORE(ice_tx_cmpltnq_ctx, generation, 1, 96),
+ ICE_CTX_STORE(ice_tx_cmpltnq_ctx, wrt_ptr, 22, 97),
+ ICE_CTX_STORE(ice_tx_cmpltnq_ctx, pf_num, 3, 128),
+ ICE_CTX_STORE(ice_tx_cmpltnq_ctx, vmvf_num, 10, 131),
+ ICE_CTX_STORE(ice_tx_cmpltnq_ctx, vmvf_type, 2, 141),
+ ICE_CTX_STORE(ice_tx_cmpltnq_ctx, tph_desc_wr, 1, 160),
+ ICE_CTX_STORE(ice_tx_cmpltnq_ctx, cpuid, 8, 161),
+ ICE_CTX_STORE(ice_tx_cmpltnq_ctx, cmpltn_cache, 512, 192),
+ { 0 }
+};
+
+/**
+ * ice_write_tx_cmpltnq_ctx
+ * @hw: pointer to the hardware structure
+ * @tx_cmpltnq_ctx: pointer to the completion queue context
+ * @tx_cmpltnq_index: the index of the completion queue
+ *
+ * Converts completion queue context from sparse to dense structure and then
+ * writes it to hw register space
+ */
+enum ice_status
+ice_write_tx_cmpltnq_ctx(struct ice_hw *hw,
+ struct ice_tx_cmpltnq_ctx *tx_cmpltnq_ctx,
+ u32 tx_cmpltnq_index)
+{
+ u8 ctx_buf[ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS * sizeof(u32)] = { 0 };
+
+ ice_set_ctx((u8 *)tx_cmpltnq_ctx, ctx_buf, ice_tx_cmpltnq_ctx_info);
+ return ice_copy_tx_cmpltnq_ctx_to_hw(hw, ctx_buf, tx_cmpltnq_index);
+}
+
+/**
+ * ice_clear_tx_cmpltnq_ctx
+ * @hw: pointer to the hardware structure
+ * @tx_cmpltnq_index: the index of the completion queue to clear
+ *
+ * Clears Tx completion queue context in hw register space
+ */
+enum ice_status
+ice_clear_tx_cmpltnq_ctx(struct ice_hw *hw, u32 tx_cmpltnq_index)
+{
+ u8 i;
+
+ if (tx_cmpltnq_index > GLTCLAN_CQ_CNTX0_MAX_INDEX)
+ return ICE_ERR_PARAM;
+
+ /* Clear each dword register separately */
+ for (i = 0; i < ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS; i++)
+ wr32(hw, GLTCLAN_CQ_CNTX(i, tx_cmpltnq_index), 0);
+
+ return ICE_SUCCESS;
+}
+
+/**
+ * ice_copy_tx_drbell_q_ctx_to_hw
+ * @hw: pointer to the hardware structure
+ * @ice_tx_drbell_q_ctx: pointer to the doorbell queue context
+ * @tx_drbell_q_index: the index of the doorbell queue
+ *
+ * Copies doorbell q context from dense structure to hw register space
+ */
+static enum ice_status
+ice_copy_tx_drbell_q_ctx_to_hw(struct ice_hw *hw, u8 *ice_tx_drbell_q_ctx,
+ u32 tx_drbell_q_index)
+{
+ u8 i;
+
+ if (!ice_tx_drbell_q_ctx)
+ return ICE_ERR_BAD_PTR;
+
+ if (tx_drbell_q_index > QTX_COMM_DBLQ_DBELL_MAX_INDEX)
+ return ICE_ERR_PARAM;
+
+ /* Copy each dword separately to hw */
+ for (i = 0; i < ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS; i++) {
+ wr32(hw, QTX_COMM_DBLQ_CNTX(i, tx_drbell_q_index),
+ *((u32 *)(ice_tx_drbell_q_ctx + (i * sizeof(u32)))));
+
+ ice_debug(hw, ICE_DBG_QCTX, "tx_drbell_qdata[%d]: %08X\n", i,
+ *((u32 *)(ice_tx_drbell_q_ctx + (i * sizeof(u32)))));
+ }
+
+ return ICE_SUCCESS;
+}
+
+/* LAN Tx Doorbell Queue Context info */
+static const struct ice_ctx_ele ice_tx_drbell_q_ctx_info[] = {
+ /* Field Width LSB */
+ ICE_CTX_STORE(ice_tx_drbell_q_ctx, base, 57, 0),
+ ICE_CTX_STORE(ice_tx_drbell_q_ctx, ring_len, 13, 64),
+ ICE_CTX_STORE(ice_tx_drbell_q_ctx, pf_num, 3, 80),
+ ICE_CTX_STORE(ice_tx_drbell_q_ctx, vf_num, 8, 84),
+ ICE_CTX_STORE(ice_tx_drbell_q_ctx, vmvf_type, 2, 94),
+ ICE_CTX_STORE(ice_tx_drbell_q_ctx, cpuid, 8, 96),
+ ICE_CTX_STORE(ice_tx_drbell_q_ctx, tph_desc_rd, 1, 104),
+ ICE_CTX_STORE(ice_tx_drbell_q_ctx, tph_desc_wr, 1, 108),
+ ICE_CTX_STORE(ice_tx_drbell_q_ctx, db_q_en, 1, 112),
+ ICE_CTX_STORE(ice_tx_drbell_q_ctx, rd_head, 13, 128),
+ ICE_CTX_STORE(ice_tx_drbell_q_ctx, rd_tail, 13, 144),
+ { 0 }
+};
+
+/**
+ * ice_write_tx_drbell_q_ctx
+ * @hw: pointer to the hardware structure
+ * @tx_drbell_q_ctx: pointer to the doorbell queue context
+ * @tx_drbell_q_index: the index of the doorbell queue
+ *
+ * Converts doorbell queue context from sparse to dense structure and then
+ * writes it to hw register space
+ */
+enum ice_status
+ice_write_tx_drbell_q_ctx(struct ice_hw *hw,
+ struct ice_tx_drbell_q_ctx *tx_drbell_q_ctx,
+ u32 tx_drbell_q_index)
+{
+ u8 ctx_buf[ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS * sizeof(u32)] = { 0 };
+
+ ice_set_ctx((u8 *)tx_drbell_q_ctx, ctx_buf, ice_tx_drbell_q_ctx_info);
+ return ice_copy_tx_drbell_q_ctx_to_hw(hw, ctx_buf, tx_drbell_q_index);
+}
+
+/**
+ * ice_clear_tx_drbell_q_ctx
+ * @hw: pointer to the hardware structure
+ * @tx_drbell_q_index: the index of the doorbell queue to clear
+ *
+ * Clears doorbell queue context in hw register space
+ */
+enum ice_status
+ice_clear_tx_drbell_q_ctx(struct ice_hw *hw, u32 tx_drbell_q_index)
+{
+ u8 i;
+
+ if (tx_drbell_q_index > QTX_COMM_DBLQ_DBELL_MAX_INDEX)
+ return ICE_ERR_PARAM;
+
+ /* Clear each dword register separately */
+ for (i = 0; i < ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS; i++)
+ wr32(hw, QTX_COMM_DBLQ_CNTX(i, tx_drbell_q_index), 0);
+
+ return ICE_SUCCESS;
+}
+#endif /* !NO_UNUSED_CTX_CODE || AE_DRIVER */
+
+/**
+ * ice_debug_cq
+ * @hw: pointer to the hardware structure
+ * @mask: debug mask
+ * @desc: pointer to control queue descriptor
+ * @buf: pointer to command buffer
+ * @buf_len: max length of buf
+ *
+ * Dumps debug log about control command with descriptor contents.
+ */
+void
+ice_debug_cq(struct ice_hw *hw, u32 __maybe_unused mask, void *desc, void *buf,
+ u16 buf_len)
+{
+ struct ice_aq_desc *cq_desc = (struct ice_aq_desc *)desc;
+ u16 len;
+
+ if (!desc)
+ return;
+
+ len = LE16_TO_CPU(cq_desc->datalen);
+
+ ice_debug(hw, mask,
+ "CQ CMD: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n",
+ LE16_TO_CPU(cq_desc->opcode),
+ LE16_TO_CPU(cq_desc->flags),
+ LE16_TO_CPU(cq_desc->datalen), LE16_TO_CPU(cq_desc->retval));
+ ice_debug(hw, mask, "\tcookie (h,l) 0x%08X 0x%08X\n",
+ LE32_TO_CPU(cq_desc->cookie_high),
+ LE32_TO_CPU(cq_desc->cookie_low));
+ ice_debug(hw, mask, "\tparam (0,1) 0x%08X 0x%08X\n",
+ LE32_TO_CPU(cq_desc->params.generic.param0),
+ LE32_TO_CPU(cq_desc->params.generic.param1));
+ ice_debug(hw, mask, "\taddr (h,l) 0x%08X 0x%08X\n",
+ LE32_TO_CPU(cq_desc->params.generic.addr_high),
+ LE32_TO_CPU(cq_desc->params.generic.addr_low));
+ if (buf && cq_desc->datalen != 0) {
+ ice_debug(hw, mask, "Buffer:\n");
+ if (buf_len < len)
+ len = buf_len;
+
+ ice_debug_array(hw, mask, 16, 1, (u8 *)buf, len);
+ }
+}
+
+/* FW Admin Queue command wrappers */
+
+/**
+ * ice_aq_send_cmd - send FW Admin Queue command to FW Admin Queue
+ * @hw: pointer to the hw struct
+ * @desc: descriptor describing the command
+ * @buf: buffer to use for indirect commands (NULL for direct commands)
+ * @buf_size: size of buffer for indirect commands (0 for direct commands)
+ * @cd: pointer to command details structure
+ *
+ * Helper function to send FW Admin Queue commands to the FW Admin Queue.
+ */
+enum ice_status
+ice_aq_send_cmd(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf,
+ u16 buf_size, struct ice_sq_cd *cd)
+{
+ return ice_sq_send_cmd(hw, &hw->adminq, desc, buf, buf_size, cd);
+}
+
+/**
+ * ice_aq_get_fw_ver
+ * @hw: pointer to the hw struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get the firmware version (0x0001) from the admin queue commands
+ */
+enum ice_status ice_aq_get_fw_ver(struct ice_hw *hw, struct ice_sq_cd *cd)
+{
+ struct ice_aqc_get_ver *resp;
+ struct ice_aq_desc desc;
+ enum ice_status status;
+
+ resp = &desc.params.get_ver;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_ver);
+
+ status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+
+ if (!status) {
+ hw->fw_branch = resp->fw_branch;
+ hw->fw_maj_ver = resp->fw_major;
+ hw->fw_min_ver = resp->fw_minor;
+ hw->fw_patch = resp->fw_patch;
+ hw->fw_build = LE32_TO_CPU(resp->fw_build);
+ hw->api_branch = resp->api_branch;
+ hw->api_maj_ver = resp->api_major;
+ hw->api_min_ver = resp->api_minor;
+ hw->api_patch = resp->api_patch;
+ }
+
+ return status;
+}
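+
+/* Usage sketch (illustrative): after a successful call, the cached fields
+ * can be formatted for display, e.g.:
+ *
+ *	snprintf(ver_str, sizeof(ver_str), "%d.%d.%d 0x%08x",
+ *		 hw->fw_maj_ver, hw->fw_min_ver, hw->fw_patch, hw->fw_build);
+ */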
+
+/**
+ * ice_aq_q_shutdown
+ * @hw: pointer to the hw struct
+ * @unloading: is the driver unloading itself
+ *
+ * Tell the Firmware that we're shutting down the AdminQ and whether
+ * or not the driver is unloading as well (0x0003).
+ */
+enum ice_status ice_aq_q_shutdown(struct ice_hw *hw, bool unloading)
+{
+ struct ice_aqc_q_shutdown *cmd;
+ struct ice_aq_desc desc;
+
+ cmd = &desc.params.q_shutdown;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_q_shutdown);
+
+ if (unloading)
+ cmd->driver_unloading = CPU_TO_LE32(ICE_AQC_DRIVER_UNLOADING);
+
+ return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+}
+
+/**
+ * ice_aq_req_res
+ * @hw: pointer to the hw struct
+ * @res: resource id
+ * @access: access type
+ * @sdp_number: resource number
+ * @timeout: the maximum time in ms that the driver may hold the resource
+ * @cd: pointer to command details structure or NULL
+ *
+ * Requests common resource using the admin queue commands (0x0008).
+ * When attempting to acquire the Global Config Lock, the driver can
+ * learn of three states:
+ * 1) ICE_SUCCESS - acquired lock, and can perform download package
+ * 2) ICE_ERR_AQ_ERROR - did not get lock, driver should fail to load
+ * 3) ICE_ERR_AQ_NO_WORK - did not get lock, but another driver has
+ * successfully downloaded the package; the driver does
+ * not have to download the package and can continue
+ * loading
+ *
+ * Note that if the caller is in an acquire lock, perform action, release lock
+ * phase of operation, it is possible that the FW may detect a timeout and issue
+ * a CORER. In this case, the driver will receive a CORER interrupt and will
+ * have to determine its cause. The calling thread that is handling this flow
+ * will likely get an error propagated back to it indicating the Download
+ * Package, Update Package or the Release Resource AQ commands timed out.
+ */
+static enum ice_status
+ice_aq_req_res(struct ice_hw *hw, enum ice_aq_res_ids res,
+ enum ice_aq_res_access_type access, u8 sdp_number, u32 *timeout,
+ struct ice_sq_cd *cd)
+{
+ struct ice_aqc_req_res *cmd_resp;
+ struct ice_aq_desc desc;
+ enum ice_status status;
+
+ ice_debug(hw, ICE_DBG_TRACE, "ice_aq_req_res");
+
+ cmd_resp = &desc.params.res_owner;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_req_res);
+
+ cmd_resp->res_id = CPU_TO_LE16(res);
+ cmd_resp->access_type = CPU_TO_LE16(access);
+ cmd_resp->res_number = CPU_TO_LE32(sdp_number);
+ cmd_resp->timeout = CPU_TO_LE32(*timeout);
+ *timeout = 0;
+
+ status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+
+	/* The completion's Timeout field specifies the maximum time in ms
+	 * that the driver may hold the resource.
+	 */
+
+ /* Global config lock response utilizes an additional status field.
+ *
+ * If the Global config lock resource is held by some other driver, the
+ * command completes with ICE_AQ_RES_GLBL_IN_PROG in the status field
+ * and the timeout field indicates the maximum time the current owner
+ * of the resource has to free it.
+ */
+ if (res == ICE_GLOBAL_CFG_LOCK_RES_ID) {
+ if (LE16_TO_CPU(cmd_resp->status) == ICE_AQ_RES_GLBL_SUCCESS) {
+ *timeout = LE32_TO_CPU(cmd_resp->timeout);
+ return ICE_SUCCESS;
+ } else if (LE16_TO_CPU(cmd_resp->status) ==
+ ICE_AQ_RES_GLBL_IN_PROG) {
+ *timeout = LE32_TO_CPU(cmd_resp->timeout);
+ return ICE_ERR_AQ_ERROR;
+ } else if (LE16_TO_CPU(cmd_resp->status) ==
+ ICE_AQ_RES_GLBL_DONE) {
+ return ICE_ERR_AQ_NO_WORK;
+ }
+
+ /* invalid FW response, force a timeout immediately */
+ *timeout = 0;
+ return ICE_ERR_AQ_ERROR;
+ }
+
+ /* If the resource is held by some other driver, the command completes
+ * with a busy return value and the timeout field indicates the maximum
+ * time the current owner of the resource has to free it.
+ */
+ if (!status || hw->adminq.sq_last_status == ICE_AQ_RC_EBUSY)
+ *timeout = LE32_TO_CPU(cmd_resp->timeout);
+
+ return status;
+}
+
+/**
+ * ice_aq_release_res
+ * @hw: pointer to the hw struct
+ * @res: resource id
+ * @sdp_number: resource number
+ * @cd: pointer to command details structure or NULL
+ *
+ * release common resource using the admin queue commands (0x0009)
+ */
+static enum ice_status
+ice_aq_release_res(struct ice_hw *hw, enum ice_aq_res_ids res, u8 sdp_number,
+ struct ice_sq_cd *cd)
+{
+ struct ice_aqc_req_res *cmd;
+ struct ice_aq_desc desc;
+
+ ice_debug(hw, ICE_DBG_TRACE, "ice_aq_release_res");
+
+ cmd = &desc.params.res_owner;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_release_res);
+
+ cmd->res_id = CPU_TO_LE16(res);
+ cmd->res_number = CPU_TO_LE32(sdp_number);
+
+ return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_acquire_res
+ * @hw: pointer to the HW structure
+ * @res: resource id
+ * @access: access type (read or write)
+ * @timeout: timeout in milliseconds
+ *
+ * This function will attempt to acquire the ownership of a resource.
+ */
+enum ice_status
+ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
+ enum ice_aq_res_access_type access, u32 timeout)
+{
+#define ICE_RES_POLLING_DELAY_MS 10
+ u32 delay = ICE_RES_POLLING_DELAY_MS;
+ u32 time_left = timeout;
+ enum ice_status status;
+
+ ice_debug(hw, ICE_DBG_TRACE, "ice_acquire_res");
+
+ status = ice_aq_req_res(hw, res, access, 0, &time_left, NULL);
+
+ /* A return code of ICE_ERR_AQ_NO_WORK means that another driver has
+ * previously acquired the resource and performed any necessary updates;
+ * in this case the caller does not obtain the resource and has no
+ * further work to do.
+ */
+ if (status == ICE_ERR_AQ_NO_WORK)
+ goto ice_acquire_res_exit;
+
+ if (status)
+ ice_debug(hw, ICE_DBG_RES,
+ "resource %d acquire type %d failed.\n", res, access);
+
+	/* If necessary, poll until the current lock owner times out */
+ timeout = time_left;
+ while (status && timeout && time_left) {
+ ice_msec_delay(delay, true);
+ timeout = (timeout > delay) ? timeout - delay : 0;
+ status = ice_aq_req_res(hw, res, access, 0, &time_left, NULL);
+
+ if (status == ICE_ERR_AQ_NO_WORK)
+ /* lock free, but no work to do */
+ break;
+
+ if (!status)
+ /* lock acquired */
+ break;
+ }
+ if (status && status != ICE_ERR_AQ_NO_WORK)
+ ice_debug(hw, ICE_DBG_RES, "resource acquire timed out.\n");
+
+ice_acquire_res_exit:
+ if (status == ICE_ERR_AQ_NO_WORK) {
+ if (access == ICE_RES_WRITE)
+ ice_debug(hw, ICE_DBG_RES,
+ "resource indicates no work to do.\n");
+ else
+ ice_debug(hw, ICE_DBG_RES,
+ "Warning: ICE_ERR_AQ_NO_WORK not expected\n");
+ }
+ return status;
+}
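+
+/* A minimal usage sketch (not part of this patch): the expected
+ * acquire/perform/release pattern around a shared resource. The timeout
+ * value and the error handling policy are illustrative assumptions only.
+ */
+#if 0
+static enum ice_status example_cfg_under_lock(struct ice_hw *hw)
+{
+	enum ice_status status;
+
+	status = ice_acquire_res(hw, ICE_GLOBAL_CFG_LOCK_RES_ID,
+				 ICE_RES_WRITE, 5000 /* ms, assumed */);
+	if (status == ICE_ERR_AQ_NO_WORK)
+		return ICE_SUCCESS; /* another driver already did the work */
+	if (status)
+		return status; /* could not get the lock in time */
+
+	/* ... perform the protected operation here ... */
+
+	ice_release_res(hw, ICE_GLOBAL_CFG_LOCK_RES_ID);
+	return ICE_SUCCESS;
+}
+#endif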
+
+/**
+ * ice_release_res
+ * @hw: pointer to the HW structure
+ * @res: resource id
+ *
+ * This function will release a resource using the proper Admin Command.
+ */
+void ice_release_res(struct ice_hw *hw, enum ice_aq_res_ids res)
+{
+ enum ice_status status;
+ u32 total_delay = 0;
+
+ ice_debug(hw, ICE_DBG_TRACE, "ice_release_res");
+
+ status = ice_aq_release_res(hw, res, 0, NULL);
+
+	/* there are some rare cases where trying to release the resource
+	 * results in an admin queue timeout, so handle them correctly
+	 */
+ while ((status == ICE_ERR_AQ_TIMEOUT) &&
+ (total_delay < hw->adminq.sq_cmd_timeout)) {
+ ice_msec_delay(1, true);
+ status = ice_aq_release_res(hw, res, 0, NULL);
+ total_delay++;
+ }
+}
+
+/**
+ * ice_aq_alloc_free_res - command to allocate/free resources
+ * @hw: pointer to the hw struct
+ * @num_entries: number of resource entries in buffer
+ * @buf: Indirect buffer to hold data parameters and response
+ * @buf_size: size of buffer for indirect commands
+ * @opc: pass in the command opcode
+ * @cd: pointer to command details structure or NULL
+ *
+ * Helper function to allocate/free resources using the admin queue commands
+ */
+enum ice_status
+ice_aq_alloc_free_res(struct ice_hw *hw, u16 num_entries,
+ struct ice_aqc_alloc_free_res_elem *buf, u16 buf_size,
+ enum ice_adminq_opc opc, struct ice_sq_cd *cd)
+{
+ struct ice_aqc_alloc_free_res_cmd *cmd;
+ struct ice_aq_desc desc;
+
+ ice_debug(hw, ICE_DBG_TRACE, "ice_aq_alloc_free_res");
+
+ cmd = &desc.params.sw_res_ctrl;
+
+ if (!buf)
+ return ICE_ERR_PARAM;
+
+ if (buf_size < (num_entries * sizeof(buf->elem[0])))
+ return ICE_ERR_PARAM;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, opc);
+
+ desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+ cmd->num_entries = CPU_TO_LE16(num_entries);
+
+ return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+}
+
+
+/**
+ * ice_get_guar_num_vsi - determine the number of guaranteed VSIs per PF
+ * @hw: pointer to the hw structure
+ *
+ * Determine the number of valid functions from the bitmap returned while
+ * parsing capabilities and use this count to calculate the number of VSIs
+ * guaranteed to each PF.
+ */
+static u32 ice_get_guar_num_vsi(struct ice_hw *hw)
+{
+ u8 funcs;
+
+#define ICE_CAPS_VALID_FUNCS_M 0xFF
+ funcs = ice_hweight8(hw->dev_caps.common_cap.valid_functions &
+ ICE_CAPS_VALID_FUNCS_M);
+
+ if (!funcs)
+ return 0;
+
+ return ICE_MAX_VSI / funcs;
+}
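+
+/* Worked example of the calculation above (the concrete ICE_MAX_VSI value
+ * is an assumption for illustration): with valid_functions = 0x0F,
+ * ice_hweight8() counts 4 active PFs, so an ICE_MAX_VSI of 768 would
+ * guarantee 768 / 4 = 192 VSIs to each PF.
+ */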
+
+/**
+ * ice_parse_caps - parse function/device capabilities
+ * @hw: pointer to the hw struct
+ * @buf: pointer to a buffer containing function/device capability records
+ * @cap_count: number of capability records in the list
+ * @opc: type of capabilities list to parse
+ *
+ * Helper function to parse function(0x000a)/device(0x000b) capabilities list.
+ */
+static void
+ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
+ enum ice_adminq_opc opc)
+{
+ struct ice_aqc_list_caps_elem *cap_resp;
+ struct ice_hw_func_caps *func_p = NULL;
+ struct ice_hw_dev_caps *dev_p = NULL;
+ struct ice_hw_common_caps *caps;
+ u32 i;
+
+ if (!buf)
+ return;
+
+ cap_resp = (struct ice_aqc_list_caps_elem *)buf;
+
+ if (opc == ice_aqc_opc_list_dev_caps) {
+ dev_p = &hw->dev_caps;
+ caps = &dev_p->common_cap;
+ } else if (opc == ice_aqc_opc_list_func_caps) {
+ func_p = &hw->func_caps;
+ caps = &func_p->common_cap;
+ } else {
+ ice_debug(hw, ICE_DBG_INIT, "wrong opcode\n");
+ return;
+ }
+
+ for (i = 0; caps && i < cap_count; i++, cap_resp++) {
+ u32 logical_id = LE32_TO_CPU(cap_resp->logical_id);
+ u32 phys_id = LE32_TO_CPU(cap_resp->phys_id);
+ u32 number = LE32_TO_CPU(cap_resp->number);
+ u16 cap = LE16_TO_CPU(cap_resp->cap);
+
+ switch (cap) {
+ case ICE_AQC_CAPS_VALID_FUNCTIONS:
+ caps->valid_functions = number;
+ ice_debug(hw, ICE_DBG_INIT,
+ "HW caps: Valid Functions = %d\n",
+ caps->valid_functions);
+ break;
+ case ICE_AQC_CAPS_SRIOV:
+ caps->sr_iov_1_1 = (number == 1);
+ ice_debug(hw, ICE_DBG_INIT,
+ "HW caps: SR-IOV = %d\n", caps->sr_iov_1_1);
+ break;
+ case ICE_AQC_CAPS_VF:
+ if (dev_p) {
+ dev_p->num_vfs_exposed = number;
+ ice_debug(hw, ICE_DBG_INIT,
+ "HW caps: VFs exposed = %d\n",
+ dev_p->num_vfs_exposed);
+ } else if (func_p) {
+ func_p->num_allocd_vfs = number;
+ func_p->vf_base_id = logical_id;
+ ice_debug(hw, ICE_DBG_INIT,
+ "HW caps: VFs allocated = %d\n",
+ func_p->num_allocd_vfs);
+ ice_debug(hw, ICE_DBG_INIT,
+ "HW caps: VF base_id = %d\n",
+ func_p->vf_base_id);
+ }
+ break;
+ case ICE_AQC_CAPS_VSI:
+ if (dev_p) {
+ dev_p->num_vsi_allocd_to_host = number;
+ ice_debug(hw, ICE_DBG_INIT,
+ "HW caps: Dev.VSI cnt = %d\n",
+ dev_p->num_vsi_allocd_to_host);
+ } else if (func_p) {
+ func_p->guar_num_vsi =
+ ice_get_guar_num_vsi(hw);
+ ice_debug(hw, ICE_DBG_INIT,
+ "HW caps: Func.VSI cnt = %d\n",
+ number);
+ }
+ break;
+ case ICE_AQC_CAPS_RSS:
+ caps->rss_table_size = number;
+ caps->rss_table_entry_width = logical_id;
+ ice_debug(hw, ICE_DBG_INIT,
+ "HW caps: RSS table size = %d\n",
+ caps->rss_table_size);
+ ice_debug(hw, ICE_DBG_INIT,
+ "HW caps: RSS table width = %d\n",
+ caps->rss_table_entry_width);
+ break;
+ case ICE_AQC_CAPS_RXQS:
+ caps->num_rxq = number;
+ caps->rxq_first_id = phys_id;
+ ice_debug(hw, ICE_DBG_INIT,
+ "HW caps: Num Rx Qs = %d\n", caps->num_rxq);
+ ice_debug(hw, ICE_DBG_INIT,
+ "HW caps: Rx first queue ID = %d\n",
+ caps->rxq_first_id);
+ break;
+ case ICE_AQC_CAPS_TXQS:
+ caps->num_txq = number;
+ caps->txq_first_id = phys_id;
+ ice_debug(hw, ICE_DBG_INIT,
+ "HW caps: Num Tx Qs = %d\n", caps->num_txq);
+ ice_debug(hw, ICE_DBG_INIT,
+ "HW caps: Tx first queue ID = %d\n",
+ caps->txq_first_id);
+ break;
+ case ICE_AQC_CAPS_MSIX:
+ caps->num_msix_vectors = number;
+ caps->msix_vector_first_id = phys_id;
+ ice_debug(hw, ICE_DBG_INIT,
+ "HW caps: MSIX vector count = %d\n",
+ caps->num_msix_vectors);
+ ice_debug(hw, ICE_DBG_INIT,
+ "HW caps: MSIX first vector index = %d\n",
+ caps->msix_vector_first_id);
+ break;
+ case ICE_AQC_CAPS_MAX_MTU:
+ caps->max_mtu = number;
+ if (dev_p)
+ ice_debug(hw, ICE_DBG_INIT,
+ "HW caps: Dev.MaxMTU = %d\n",
+ caps->max_mtu);
+ else if (func_p)
+ ice_debug(hw, ICE_DBG_INIT,
+ "HW caps: func.MaxMTU = %d\n",
+ caps->max_mtu);
+ break;
+ default:
+ ice_debug(hw, ICE_DBG_INIT,
+ "HW caps: Unknown capability[%d]: 0x%x\n", i,
+ cap);
+ break;
+ }
+ }
+}
+
+/**
+ * ice_aq_discover_caps - query function/device capabilities
+ * @hw: pointer to the hw struct
+ * @buf: a virtual buffer to hold the capabilities
+ * @buf_size: Size of the virtual buffer
+ * @cap_count: number of capability records needed, reported back when the
+ *             AQ returns ENOMEM
+ * @opc: capabilities type to discover - pass in the command opcode
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get the function(0x000a)/device(0x000b) capabilities description from
+ * the firmware.
+ */
+static enum ice_status
+ice_aq_discover_caps(struct ice_hw *hw, void *buf, u16 buf_size, u32 *cap_count,
+ enum ice_adminq_opc opc, struct ice_sq_cd *cd)
+{
+ struct ice_aqc_list_caps *cmd;
+ struct ice_aq_desc desc;
+ enum ice_status status;
+
+ cmd = &desc.params.get_cap;
+
+ if (opc != ice_aqc_opc_list_func_caps &&
+ opc != ice_aqc_opc_list_dev_caps)
+ return ICE_ERR_PARAM;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, opc);
+
+ status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+ if (!status)
+ ice_parse_caps(hw, buf, LE32_TO_CPU(cmd->count), opc);
+ else if (hw->adminq.sq_last_status == ICE_AQ_RC_ENOMEM)
+ *cap_count = LE32_TO_CPU(cmd->count);
+ return status;
+}
+
+/**
+ * ice_discover_caps - get info about the HW
+ * @hw: pointer to the hardware structure
+ * @opc: capabilities type to discover - pass in the command opcode
+ */
+static enum ice_status
+ice_discover_caps(struct ice_hw *hw, enum ice_adminq_opc opc)
+{
+ enum ice_status status;
+ u32 cap_count;
+ u16 cbuf_len;
+ u8 retries;
+
+ /* The driver doesn't know how many capabilities the device will return
+ * so the buffer size required isn't known ahead of time. The driver
+ * starts with cbuf_len and if this turns out to be insufficient, the
+ * device returns ICE_AQ_RC_ENOMEM and also the cap_count it needs.
+ * The driver then allocates the buffer based on the count and retries
+ * the operation. So it follows that the retry count is 2.
+ */
+#define ICE_GET_CAP_BUF_COUNT 40
+#define ICE_GET_CAP_RETRY_COUNT 2
+
+ cap_count = ICE_GET_CAP_BUF_COUNT;
+ retries = ICE_GET_CAP_RETRY_COUNT;
+
+ do {
+ void *cbuf;
+
+ cbuf_len = (u16)(cap_count *
+ sizeof(struct ice_aqc_list_caps_elem));
+ cbuf = ice_malloc(hw, cbuf_len);
+ if (!cbuf)
+ return ICE_ERR_NO_MEMORY;
+
+ status = ice_aq_discover_caps(hw, cbuf, cbuf_len, &cap_count,
+ opc, NULL);
+ ice_free(hw, cbuf);
+
+ if (!status || hw->adminq.sq_last_status != ICE_AQ_RC_ENOMEM)
+ break;
+
+		/* If ENOMEM is returned, try again with a bigger buffer */
+ } while (--retries);
+
+ return status;
+}
+
+/**
+ * ice_get_caps - get info about the HW
+ * @hw: pointer to the hardware structure
+ */
+enum ice_status ice_get_caps(struct ice_hw *hw)
+{
+ enum ice_status status;
+
+ status = ice_discover_caps(hw, ice_aqc_opc_list_dev_caps);
+ if (!status)
+ status = ice_discover_caps(hw, ice_aqc_opc_list_func_caps);
+
+ return status;
+}
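+
+/* Sketch of how a caller might consume the parsed capabilities once
+ * ice_get_caps() succeeds (illustrative only; the fields read below mirror
+ * the ones populated by ice_parse_caps()).
+ */
+#if 0
+static enum ice_status example_read_caps(struct ice_hw *hw)
+{
+	enum ice_status status;
+
+	status = ice_get_caps(hw);
+	if (status)
+		return status;
+
+	/* per-function resources parsed from the 0x000a response */
+	ice_debug(hw, ICE_DBG_INIT, "Rx queues: %d, guaranteed VSIs: %d\n",
+		  hw->func_caps.common_cap.num_rxq,
+		  hw->func_caps.guar_num_vsi);
+	return ICE_SUCCESS;
+}
+#endif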
+
+/**
+ * ice_aq_manage_mac_write - manage MAC address write command
+ * @hw: pointer to the hw struct
+ * @mac_addr: MAC address to be written as LAA/LAA+WoL/Port address
+ * @flags: flags to control write behavior
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function is used to write a MAC address to the NVM (0x0108).
+ */
+enum ice_status
+ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags,
+ struct ice_sq_cd *cd)
+{
+ struct ice_aqc_manage_mac_write *cmd;
+ struct ice_aq_desc desc;
+
+ cmd = &desc.params.mac_write;
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_manage_mac_write);
+
+ cmd->flags = flags;
+
+
+ /* Prep values for flags, sah, sal */
+ cmd->sah = HTONS(*((const u16 *)mac_addr));
+ cmd->sal = HTONL(*((const u32 *)(mac_addr + 2)));
+
+ return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_aq_clear_pxe_mode
+ * @hw: pointer to the hw struct
+ *
+ * Tell the firmware that the driver is taking over from PXE (0x0110).
+ */
+static enum ice_status ice_aq_clear_pxe_mode(struct ice_hw *hw)
+{
+ struct ice_aq_desc desc;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_clear_pxe_mode);
+ desc.params.clear_pxe.rx_cnt = ICE_AQC_CLEAR_PXE_RX_CNT;
+
+ return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+}
+
+/**
+ * ice_clear_pxe_mode - clear PXE operations mode
+ * @hw: pointer to the hw struct
+ *
+ * Make sure all PXE mode settings are cleared, including things
+ * like descriptor fetch/write-back mode.
+ */
+void ice_clear_pxe_mode(struct ice_hw *hw)
+{
+ if (ice_check_sq_alive(hw, &hw->adminq))
+ ice_aq_clear_pxe_mode(hw);
+}
+
+
+
+/**
+ * ice_get_link_speed_based_on_phy_type - returns link speed
+ * @phy_type_low: lower part of phy_type
+ *
+ * This helper function will convert a phy_type_low to its corresponding link
+ * speed.
+ * Note: Exactly one bit should be set in phy_type_low, as this function
+ * converts a single PHY type to its corresponding speed.
+ * If no bit is set, ICE_AQ_LINK_SPEED_UNKNOWN will be returned.
+ * If more than one bit is set, ICE_AQ_LINK_SPEED_UNKNOWN will be returned.
+ */
+static u16 ice_get_link_speed_based_on_phy_type(u64 phy_type_low)
+{
+ u16 speed_phy_type_low = ICE_AQ_LINK_SPEED_UNKNOWN;
+
+ switch (phy_type_low) {
+ case ICE_PHY_TYPE_LOW_100BASE_TX:
+ case ICE_PHY_TYPE_LOW_100M_SGMII:
+ speed_phy_type_low = ICE_AQ_LINK_SPEED_100MB;
+ break;
+ case ICE_PHY_TYPE_LOW_1000BASE_T:
+ case ICE_PHY_TYPE_LOW_1000BASE_SX:
+ case ICE_PHY_TYPE_LOW_1000BASE_LX:
+ case ICE_PHY_TYPE_LOW_1000BASE_KX:
+ case ICE_PHY_TYPE_LOW_1G_SGMII:
+ speed_phy_type_low = ICE_AQ_LINK_SPEED_1000MB;
+ break;
+ case ICE_PHY_TYPE_LOW_2500BASE_T:
+ case ICE_PHY_TYPE_LOW_2500BASE_X:
+ case ICE_PHY_TYPE_LOW_2500BASE_KX:
+ speed_phy_type_low = ICE_AQ_LINK_SPEED_2500MB;
+ break;
+ case ICE_PHY_TYPE_LOW_5GBASE_T:
+ case ICE_PHY_TYPE_LOW_5GBASE_KR:
+ speed_phy_type_low = ICE_AQ_LINK_SPEED_5GB;
+ break;
+ case ICE_PHY_TYPE_LOW_10GBASE_T:
+ case ICE_PHY_TYPE_LOW_10G_SFI_DA:
+ case ICE_PHY_TYPE_LOW_10GBASE_SR:
+ case ICE_PHY_TYPE_LOW_10GBASE_LR:
+ case ICE_PHY_TYPE_LOW_10GBASE_KR_CR1:
+ case ICE_PHY_TYPE_LOW_10G_SFI_AOC_ACC:
+ case ICE_PHY_TYPE_LOW_10G_SFI_C2C:
+ speed_phy_type_low = ICE_AQ_LINK_SPEED_10GB;
+ break;
+ case ICE_PHY_TYPE_LOW_25GBASE_T:
+ case ICE_PHY_TYPE_LOW_25GBASE_CR:
+ case ICE_PHY_TYPE_LOW_25GBASE_CR_S:
+ case ICE_PHY_TYPE_LOW_25GBASE_CR1:
+ case ICE_PHY_TYPE_LOW_25GBASE_SR:
+ case ICE_PHY_TYPE_LOW_25GBASE_LR:
+ case ICE_PHY_TYPE_LOW_25GBASE_KR:
+ case ICE_PHY_TYPE_LOW_25GBASE_KR_S:
+ case ICE_PHY_TYPE_LOW_25GBASE_KR1:
+ case ICE_PHY_TYPE_LOW_25G_AUI_AOC_ACC:
+ case ICE_PHY_TYPE_LOW_25G_AUI_C2C:
+ speed_phy_type_low = ICE_AQ_LINK_SPEED_25GB;
+ break;
+ case ICE_PHY_TYPE_LOW_40GBASE_CR4:
+ case ICE_PHY_TYPE_LOW_40GBASE_SR4:
+ case ICE_PHY_TYPE_LOW_40GBASE_LR4:
+ case ICE_PHY_TYPE_LOW_40GBASE_KR4:
+ case ICE_PHY_TYPE_LOW_40G_XLAUI_AOC_ACC:
+ case ICE_PHY_TYPE_LOW_40G_XLAUI:
+ speed_phy_type_low = ICE_AQ_LINK_SPEED_40GB;
+ break;
+ default:
+ speed_phy_type_low = ICE_AQ_LINK_SPEED_UNKNOWN;
+ break;
+ }
+
+ return speed_phy_type_low;
+}
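+
+/* For example, ice_get_link_speed_based_on_phy_type(ICE_PHY_TYPE_LOW_25GBASE_SR)
+ * returns ICE_AQ_LINK_SPEED_25GB, while a value with two bits set (e.g.
+ * 25GBASE_SR | 10GBASE_T) matches no case above and yields
+ * ICE_AQ_LINK_SPEED_UNKNOWN.
+ */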
+
+/**
+ * ice_update_phy_type
+ * @phy_type_low: pointer to the lower part of phy_type
+ * @link_speeds_bitmap: targeted link speeds bitmap
+ *
+ * Note: For the format of link_speeds_bitmap, see the link_speed field of
+ * struct ice_aqc_get_link_status. The caller may pass a link_speeds_bitmap
+ * that includes multiple speeds.
+ *
+ * Each bit of phy_type_low corresponds to a PHY type with a certain link
+ * speed. This helper function turns on the bits in phy_type_low whose
+ * speeds are included in the link_speeds_bitmap input parameter.
+ */
+void ice_update_phy_type(u64 *phy_type_low, u16 link_speeds_bitmap)
+{
+ u16 speed = ICE_AQ_LINK_SPEED_UNKNOWN;
+ u64 pt_low;
+ int index;
+
+ /* We first check with low part of phy_type */
+ for (index = 0; index <= ICE_PHY_TYPE_LOW_MAX_INDEX; index++) {
+ pt_low = BIT_ULL(index);
+ speed = ice_get_link_speed_based_on_phy_type(pt_low);
+
+ if (link_speeds_bitmap & speed)
+ *phy_type_low |= BIT_ULL(index);
+ }
+}
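+
+/* A sketch of building a phy_type_low mask from a multi-speed bitmap (the
+ * wrapper function is illustrative only):
+ */
+#if 0
+static u64 example_speeds_to_phy_types(void)
+{
+	u64 phy_type_low = 0;
+
+	ice_update_phy_type(&phy_type_low,
+			    ICE_AQ_LINK_SPEED_10GB | ICE_AQ_LINK_SPEED_25GB);
+	/* phy_type_low now has every 10G and 25G PHY type bit set */
+	return phy_type_low;
+}
+#endif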
+
+/**
+ * ice_aq_set_phy_cfg
+ * @hw: pointer to the hw struct
+ * @lport: logical port number
+ * @cfg: structure with PHY configuration data to be set
+ * @cd: pointer to command details structure or NULL
+ *
+ * Set the various PHY configuration parameters supported on the Port.
+ * One or more of the Set PHY config parameters may be ignored in an MFP
+ * mode as the PF may not have the privilege to set some of the PHY Config
+ * parameters. This status will be indicated by the command response (0x0601).
+ */
+enum ice_status
+ice_aq_set_phy_cfg(struct ice_hw *hw, u8 lport,
+ struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd *cd)
+{
+ struct ice_aq_desc desc;
+
+ if (!cfg)
+ return ICE_ERR_PARAM;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_phy_cfg);
+ desc.params.set_phy.lport_num = lport;
+ desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+ return ice_aq_send_cmd(hw, &desc, cfg, sizeof(*cfg), cd);
+}
+
+/**
+ * ice_update_link_info - update status of the HW network link
+ * @pi: port info structure of the interested logical port
+ */
+enum ice_status ice_update_link_info(struct ice_port_info *pi)
+{
+ struct ice_aqc_get_phy_caps_data *pcaps;
+ struct ice_phy_info *phy_info;
+ enum ice_status status;
+ struct ice_hw *hw;
+
+ if (!pi)
+ return ICE_ERR_PARAM;
+
+ hw = pi->hw;
+
+ pcaps = (struct ice_aqc_get_phy_caps_data *)
+ ice_malloc(hw, sizeof(*pcaps));
+ if (!pcaps)
+ return ICE_ERR_NO_MEMORY;
+
+ phy_info = &pi->phy;
+ status = ice_aq_get_link_info(pi, true, NULL, NULL);
+ if (status)
+ goto out;
+
+ if (phy_info->link_info.link_info & ICE_AQ_MEDIA_AVAILABLE) {
+ status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG,
+ pcaps, NULL);
+ if (status)
+ goto out;
+
+ ice_memcpy(phy_info->link_info.module_type, &pcaps->module_type,
+ sizeof(phy_info->link_info.module_type),
+ ICE_NONDMA_TO_NONDMA);
+ }
+out:
+ ice_free(hw, pcaps);
+ return status;
+}
+
+/**
+ * ice_set_fc
+ * @pi: port information structure
+ * @aq_failures: pointer to status code, specific to ice_set_fc routine
+ * @ena_auto_link_update: enable automatic link update
+ *
+ * Set the requested flow control mode.
+ */
+enum ice_status
+ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update)
+{
+ struct ice_aqc_set_phy_cfg_data cfg = { 0 };
+ struct ice_aqc_get_phy_caps_data *pcaps;
+ enum ice_status status;
+ u8 pause_mask = 0x0;
+ struct ice_hw *hw;
+
+ if (!pi)
+ return ICE_ERR_PARAM;
+ hw = pi->hw;
+ *aq_failures = ICE_SET_FC_AQ_FAIL_NONE;
+
+ switch (pi->fc.req_mode) {
+ case ICE_FC_FULL:
+ pause_mask |= ICE_AQC_PHY_EN_TX_LINK_PAUSE;
+ pause_mask |= ICE_AQC_PHY_EN_RX_LINK_PAUSE;
+ break;
+ case ICE_FC_RX_PAUSE:
+ pause_mask |= ICE_AQC_PHY_EN_RX_LINK_PAUSE;
+ break;
+ case ICE_FC_TX_PAUSE:
+ pause_mask |= ICE_AQC_PHY_EN_TX_LINK_PAUSE;
+ break;
+ default:
+ break;
+ }
+
+ pcaps = (struct ice_aqc_get_phy_caps_data *)
+ ice_malloc(hw, sizeof(*pcaps));
+ if (!pcaps)
+ return ICE_ERR_NO_MEMORY;
+
+ /* Get the current phy config */
+ status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG, pcaps,
+ NULL);
+ if (status) {
+ *aq_failures = ICE_SET_FC_AQ_FAIL_GET;
+ goto out;
+ }
+
+ /* clear the old pause settings */
+ cfg.caps = pcaps->caps & ~(ICE_AQC_PHY_EN_TX_LINK_PAUSE |
+ ICE_AQC_PHY_EN_RX_LINK_PAUSE);
+ /* set the new capabilities */
+ cfg.caps |= pause_mask;
+ /* If the capabilities have changed, then set the new config */
+ if (cfg.caps != pcaps->caps) {
+ int retry_count, retry_max = 10;
+
+ /* Auto restart link so settings take effect */
+ if (ena_auto_link_update)
+ cfg.caps |= ICE_AQ_PHY_ENA_AUTO_LINK_UPDT;
+ /* Copy over all the old settings */
+ cfg.phy_type_low = pcaps->phy_type_low;
+ cfg.low_power_ctrl = pcaps->low_power_ctrl;
+ cfg.eee_cap = pcaps->eee_cap;
+ cfg.eeer_value = pcaps->eeer_value;
+ cfg.link_fec_opt = pcaps->link_fec_options;
+
+ status = ice_aq_set_phy_cfg(hw, pi->lport, &cfg, NULL);
+ if (status) {
+ *aq_failures = ICE_SET_FC_AQ_FAIL_SET;
+ goto out;
+ }
+
+ /* Update the link info
+ * It sometimes takes a really long time for link to
+ * come back from the atomic reset. Thus, we wait a
+ * little bit.
+ */
+ for (retry_count = 0; retry_count < retry_max; retry_count++) {
+ status = ice_update_link_info(pi);
+
+ if (status == ICE_SUCCESS)
+ break;
+
+ ice_msec_delay(100, true);
+ }
+
+ if (status)
+ *aq_failures = ICE_SET_FC_AQ_FAIL_UPDATE;
+ }
+
+out:
+ ice_free(hw, pcaps);
+ return status;
+}
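+
+/* Usage sketch for the flow control helper above; the requested mode and
+ * the failure handling are illustrative assumptions.
+ */
+#if 0
+static enum ice_status example_enable_full_fc(struct ice_port_info *pi)
+{
+	u8 aq_failures = ICE_SET_FC_AQ_FAIL_NONE;
+	enum ice_status status;
+
+	pi->fc.req_mode = ICE_FC_FULL;
+	status = ice_set_fc(pi, &aq_failures, true);
+	if (status)
+		ice_debug(pi->hw, ICE_DBG_LINK,
+			  "set fc failed, aq_failures = %d\n", aq_failures);
+	return status;
+}
+#endif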
+
+
+/**
+ * ice_get_link_status - get status of the HW network link
+ * @pi: port information structure
+ * @link_up: pointer to bool (true/false = linkup/linkdown)
+ *
+ * Variable link_up is true if the link is up, false if it is down. The
+ * value of link_up is invalid if status is non-zero. As a result of this
+ * call, link status reporting becomes enabled.
+ */
+enum ice_status ice_get_link_status(struct ice_port_info *pi, bool *link_up)
+{
+ struct ice_phy_info *phy_info;
+ enum ice_status status = ICE_SUCCESS;
+
+ if (!pi || !link_up)
+ return ICE_ERR_PARAM;
+
+ phy_info = &pi->phy;
+
+ if (phy_info->get_link_info) {
+ status = ice_update_link_info(pi);
+
+ if (status)
+ ice_debug(pi->hw, ICE_DBG_LINK,
+ "get link status error, status = %d\n",
+ status);
+ }
+
+ *link_up = phy_info->link_info.link_info & ICE_AQ_LINK_UP;
+
+ return status;
+}
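+
+/* Minimal usage sketch (illustrative only): */
+#if 0
+static void example_log_link(struct ice_port_info *pi)
+{
+	bool link_up;
+
+	if (!ice_get_link_status(pi, &link_up))
+		ice_debug(pi->hw, ICE_DBG_LINK, "link is %s\n",
+			  link_up ? "up" : "down");
+}
+#endif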
+
+/**
+ * ice_aq_set_link_restart_an
+ * @pi: pointer to the port information structure
+ * @ena_link: if true: enable link, if false: disable link
+ * @cd: pointer to command details structure or NULL
+ *
+ * Sets up the link and restarts the Auto-Negotiation over the link.
+ */
+enum ice_status
+ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link,
+ struct ice_sq_cd *cd)
+{
+ struct ice_aqc_restart_an *cmd;
+ struct ice_aq_desc desc;
+
+ cmd = &desc.params.restart_an;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_restart_an);
+
+ cmd->cmd_flags = ICE_AQC_RESTART_AN_LINK_RESTART;
+ cmd->lport_num = pi->lport;
+ if (ena_link)
+ cmd->cmd_flags |= ICE_AQC_RESTART_AN_LINK_ENABLE;
+ else
+ cmd->cmd_flags &= ~ICE_AQC_RESTART_AN_LINK_ENABLE;
+
+ return ice_aq_send_cmd(pi->hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_aq_set_event_mask
+ * @hw: pointer to the hw struct
+ * @port_num: port number of the physical function
+ * @mask: event mask to be set
+ * @cd: pointer to command details structure or NULL
+ *
+ * Set event mask (0x0613)
+ */
+enum ice_status
+ice_aq_set_event_mask(struct ice_hw *hw, u8 port_num, u16 mask,
+ struct ice_sq_cd *cd)
+{
+ struct ice_aqc_set_event_mask *cmd;
+ struct ice_aq_desc desc;
+
+ cmd = &desc.params.set_event_mask;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_event_mask);
+
+ cmd->lport_num = port_num;
+
+ cmd->event_mask = CPU_TO_LE16(mask);
+ return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_aq_set_mac_loopback
+ * @hw: pointer to the hw struct
+ * @ena_lpbk: Enable or Disable loopback
+ * @cd: pointer to command details structure or NULL
+ *
+ * Enable/disable loopback on a given port
+ */
+enum ice_status
+ice_aq_set_mac_loopback(struct ice_hw *hw, bool ena_lpbk, struct ice_sq_cd *cd)
+{
+ struct ice_aqc_set_mac_lb *cmd;
+ struct ice_aq_desc desc;
+
+ cmd = &desc.params.set_mac_lb;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_mac_lb);
+ if (ena_lpbk)
+ cmd->lb_mode = ICE_AQ_MAC_LB_EN;
+
+ return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+
+/**
+ * ice_aq_set_port_id_led
+ * @pi: pointer to the port information
+ * @is_orig_mode: is this LED set to original mode (by the net-list)
+ * @cd: pointer to command details structure or NULL
+ *
+ * Set LED value for the given port (0x06e9)
+ */
+enum ice_status
+ice_aq_set_port_id_led(struct ice_port_info *pi, bool is_orig_mode,
+ struct ice_sq_cd *cd)
+{
+ struct ice_aqc_set_port_id_led *cmd;
+ struct ice_hw *hw = pi->hw;
+ struct ice_aq_desc desc;
+
+ cmd = &desc.params.set_port_id_led;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_port_id_led);
+
+
+ if (is_orig_mode)
+ cmd->ident_mode = ICE_AQC_PORT_IDENT_LED_ORIG;
+ else
+ cmd->ident_mode = ICE_AQC_PORT_IDENT_LED_BLINK;
+
+ return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+
+/**
+ * __ice_aq_get_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: VSI FW index
+ * @lut_type: LUT table type
+ * @lut: pointer to the LUT buffer provided by the caller
+ * @lut_size: size of the LUT buffer
+ * @glob_lut_idx: global LUT index
+ * @set: set true to set the table, false to get the table
+ *
+ * Internal function to get (0x0B05) or set (0x0B03) the RSS lookup table
+ */
+static enum ice_status
+__ice_aq_get_set_rss_lut(struct ice_hw *hw, u16 vsi_id, u8 lut_type, u8 *lut,
+ u16 lut_size, u8 glob_lut_idx, bool set)
+{
+ struct ice_aqc_get_set_rss_lut *cmd_resp;
+ struct ice_aq_desc desc;
+ enum ice_status status;
+ u16 flags = 0;
+
+ cmd_resp = &desc.params.get_set_rss_lut;
+
+ if (set) {
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_rss_lut);
+ desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+ } else {
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_rss_lut);
+ }
+
+ cmd_resp->vsi_id = CPU_TO_LE16(((vsi_id <<
+ ICE_AQC_GSET_RSS_LUT_VSI_ID_S) &
+ ICE_AQC_GSET_RSS_LUT_VSI_ID_M) |
+ ICE_AQC_GSET_RSS_LUT_VSI_VALID);
+
+ switch (lut_type) {
+ case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_VSI:
+ case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF:
+ case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL:
+ flags |= ((lut_type << ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S) &
+ ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_M);
+ break;
+ default:
+ status = ICE_ERR_PARAM;
+ goto ice_aq_get_set_rss_lut_exit;
+ }
+
+ if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL) {
+ flags |= ((glob_lut_idx << ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S) &
+ ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_M);
+
+ if (!set)
+ goto ice_aq_get_set_rss_lut_send;
+ } else if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) {
+ if (!set)
+ goto ice_aq_get_set_rss_lut_send;
+ } else {
+ goto ice_aq_get_set_rss_lut_send;
+ }
+
+ /* LUT size is only valid for Global and PF table types */
+ switch (lut_size) {
+ case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128:
+ break;
+ case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512:
+ flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG <<
+ ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+ ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+ break;
+ case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K:
+ if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) {
+ flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG <<
+ ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+ ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+ break;
+ }
+ /* fall-through */
+ default:
+ status = ICE_ERR_PARAM;
+ goto ice_aq_get_set_rss_lut_exit;
+ }
+
+ice_aq_get_set_rss_lut_send:
+ cmd_resp->flags = CPU_TO_LE16(flags);
+ status = ice_aq_send_cmd(hw, &desc, lut, lut_size, NULL);
+
+ice_aq_get_set_rss_lut_exit:
+ return status;
+}
+
+/**
+ * ice_aq_get_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: software VSI handle
+ * @lut_type: LUT table type
+ * @lut: pointer to the LUT buffer provided by the caller
+ * @lut_size: size of the LUT buffer
+ *
+ * get the RSS lookup table, PF or VSI type
+ */
+enum ice_status
+ice_aq_get_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type,
+ u8 *lut, u16 lut_size)
+{
+ if (!ice_is_vsi_valid(hw, vsi_handle) || !lut)
+ return ICE_ERR_PARAM;
+
+ return __ice_aq_get_set_rss_lut(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+ lut_type, lut, lut_size, 0, false);
+}
+
+/**
+ * ice_aq_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: software VSI handle
+ * @lut_type: LUT table type
+ * @lut: pointer to the LUT buffer provided by the caller
+ * @lut_size: size of the LUT buffer
+ *
+ * set the RSS lookup table, PF or VSI type
+ */
+enum ice_status
+ice_aq_set_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type,
+ u8 *lut, u16 lut_size)
+{
+ if (!ice_is_vsi_valid(hw, vsi_handle) || !lut)
+ return ICE_ERR_PARAM;
+
+ return __ice_aq_get_set_rss_lut(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+ lut_type, lut, lut_size, 0, true);
+}
+
+/**
+ * __ice_aq_get_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: VSI FW index
+ * @key: pointer to key info struct
+ * @set: set true to set the key, false to get the key
+ *
+ * get (0x0B04) or set (0x0B02) the RSS key per VSI
+ */
+static enum ice_status
+__ice_aq_get_set_rss_key(struct ice_hw *hw, u16 vsi_id,
+			 struct ice_aqc_get_set_rss_keys *key, bool set)
+{
+ struct ice_aqc_get_set_rss_key *cmd_resp;
+ u16 key_size = sizeof(*key);
+ struct ice_aq_desc desc;
+
+ cmd_resp = &desc.params.get_set_rss_key;
+
+ if (set) {
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_rss_key);
+ desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+ } else {
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_rss_key);
+ }
+
+ cmd_resp->vsi_id = CPU_TO_LE16(((vsi_id <<
+ ICE_AQC_GSET_RSS_KEY_VSI_ID_S) &
+ ICE_AQC_GSET_RSS_KEY_VSI_ID_M) |
+ ICE_AQC_GSET_RSS_KEY_VSI_VALID);
+
+ return ice_aq_send_cmd(hw, &desc, key, key_size, NULL);
+}
+
+/**
+ * ice_aq_get_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_handle: software VSI handle
+ * @key: pointer to key info struct
+ *
+ * get the RSS key per VSI
+ */
+enum ice_status
+ice_aq_get_rss_key(struct ice_hw *hw, u16 vsi_handle,
+ struct ice_aqc_get_set_rss_keys *key)
+{
+ if (!ice_is_vsi_valid(hw, vsi_handle) || !key)
+ return ICE_ERR_PARAM;
+
+ return __ice_aq_get_set_rss_key(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+ key, false);
+}
+
+/**
+ * ice_aq_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_handle: software VSI handle
+ * @keys: pointer to key info struct
+ *
+ * set the RSS key per VSI
+ */
+enum ice_status
+ice_aq_set_rss_key(struct ice_hw *hw, u16 vsi_handle,
+ struct ice_aqc_get_set_rss_keys *keys)
+{
+ if (!ice_is_vsi_valid(hw, vsi_handle) || !keys)
+ return ICE_ERR_PARAM;
+
+ return __ice_aq_get_set_rss_key(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+ keys, true);
+}
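+
+/* Sketch of a get/modify/set round trip for the per-VSI RSS key; the VSI
+ * handle and the key manipulation are illustrative assumptions.
+ */
+#if 0
+static enum ice_status example_rss_key(struct ice_hw *hw, u16 vsi_handle)
+{
+	struct ice_aqc_get_set_rss_keys keys;
+	enum ice_status status;
+
+	status = ice_aq_get_rss_key(hw, vsi_handle, &keys);
+	if (status)
+		return status;
+
+	/* ... adjust the key material in 'keys' as needed ... */
+
+	return ice_aq_set_rss_key(hw, vsi_handle, &keys);
+}
+#endif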
+
+/**
+ * ice_aq_add_lan_txq
+ * @hw: pointer to the hardware structure
+ * @num_qgrps: Number of added queue groups
+ * @qg_list: list of queue groups to be added
+ * @buf_size: size of buffer for indirect command
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add Tx LAN queue (0x0C30)
+ *
+ * NOTE:
+ * Prior to calling add Tx LAN queue, initialize the following as part of
+ * the Tx queue context: the Completion queue ID (if the queue uses a
+ * Completion queue), the Quanta profile, the Cache profile and the Packet
+ * shaper profile.
+ *
+ * After the add Tx LAN queue AQ command completes, interrupts should be
+ * associated with specific queues. Associating a Tx queue with a Doorbell
+ * queue is not part of the add LAN Tx queue flow.
+ */
+static enum ice_status
+ice_aq_add_lan_txq(struct ice_hw *hw, u8 num_qgrps,
+ struct ice_aqc_add_tx_qgrp *qg_list, u16 buf_size,
+ struct ice_sq_cd *cd)
+{
+ u16 i, sum_header_size, sum_q_size = 0;
+ struct ice_aqc_add_tx_qgrp *list;
+ struct ice_aqc_add_txqs *cmd;
+ struct ice_aq_desc desc;
+
+ ice_debug(hw, ICE_DBG_TRACE, "ice_aq_add_lan_txq");
+
+ cmd = &desc.params.add_txqs;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_add_txqs);
+
+ if (!qg_list)
+ return ICE_ERR_PARAM;
+
+ if (num_qgrps > ICE_LAN_TXQ_MAX_QGRPS)
+ return ICE_ERR_PARAM;
+
+ sum_header_size = num_qgrps *
+ (sizeof(*qg_list) - sizeof(*qg_list->txqs));
+
+ list = qg_list;
+ for (i = 0; i < num_qgrps; i++) {
+ struct ice_aqc_add_txqs_perq *q = list->txqs;
+
+ sum_q_size += list->num_txqs * sizeof(*q);
+ list = (struct ice_aqc_add_tx_qgrp *)(q + list->num_txqs);
+ }
+
+ if (buf_size != (sum_header_size + sum_q_size))
+ return ICE_ERR_PARAM;
+
+ desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+ cmd->num_qgrps = num_qgrps;
+
+ return ice_aq_send_cmd(hw, &desc, qg_list, buf_size, cd);
+}
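+
+/* Buffer sizing example for the check above: for one queue group carrying a
+ * single queue, buf_size must equal
+ *   (sizeof(struct ice_aqc_add_tx_qgrp) - sizeof(struct ice_aqc_add_txqs_perq))
+ *   + 1 * sizeof(struct ice_aqc_add_txqs_perq)
+ * i.e. one group header plus one per-queue element, which reduces to
+ * sizeof(struct ice_aqc_add_tx_qgrp) when that struct declares exactly one
+ * txqs element.
+ */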
+
+/**
+ * ice_aq_dis_lan_txq
+ * @hw: pointer to the hardware structure
+ * @num_qgrps: number of groups in the list
+ * @qg_list: the list of groups to disable
+ * @buf_size: the total size of the qg_list buffer in bytes
+ * @rst_src: if called due to reset, specifies the rst source
+ * @vmvf_num: the relative vm or vf number that is undergoing the reset
+ * @cd: pointer to command details structure or NULL
+ *
+ * Disable LAN Tx queue (0x0C31)
+ */
+static enum ice_status
+ice_aq_dis_lan_txq(struct ice_hw *hw, u8 num_qgrps,
+ struct ice_aqc_dis_txq_item *qg_list, u16 buf_size,
+ enum ice_disq_rst_src rst_src, u16 vmvf_num,
+ struct ice_sq_cd *cd)
+{
+ struct ice_aqc_dis_txqs *cmd;
+ struct ice_aq_desc desc;
+ u16 i, sz = 0;
+
+ ice_debug(hw, ICE_DBG_TRACE, "ice_aq_dis_lan_txq");
+ cmd = &desc.params.dis_txqs;
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_dis_txqs);
+
+ /* qg_list can be NULL only in VM/VF reset flow */
+ if (!qg_list && !rst_src)
+ return ICE_ERR_PARAM;
+
+ if (num_qgrps > ICE_LAN_TXQ_MAX_QGRPS)
+ return ICE_ERR_PARAM;
+
+ cmd->num_entries = num_qgrps;
+
+ cmd->vmvf_and_timeout = CPU_TO_LE16((5 << ICE_AQC_Q_DIS_TIMEOUT_S) &
+ ICE_AQC_Q_DIS_TIMEOUT_M);
+
+ switch (rst_src) {
+ case ICE_VM_RESET:
+ cmd->cmd_type = ICE_AQC_Q_DIS_CMD_VM_RESET;
+ cmd->vmvf_and_timeout |=
+ CPU_TO_LE16(vmvf_num & ICE_AQC_Q_DIS_VMVF_NUM_M);
+ break;
+ case ICE_VF_RESET:
+ cmd->cmd_type = ICE_AQC_Q_DIS_CMD_VF_RESET;
+ /* In this case, FW expects vmvf_num to be absolute VF id */
+ cmd->vmvf_and_timeout |=
+ CPU_TO_LE16((vmvf_num + hw->func_caps.vf_base_id) &
+ ICE_AQC_Q_DIS_VMVF_NUM_M);
+ break;
+ case ICE_NO_RESET:
+ default:
+ break;
+ }
+
+ /* If no queue group info, we are in a reset flow. Issue the AQ */
+ if (!qg_list)
+ goto do_aq;
+
+ /* set RD bit to indicate that command buffer is provided by the driver
+ * and it needs to be read by the firmware
+ */
+ desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+ for (i = 0; i < num_qgrps; ++i) {
+ /* Calculate the size taken up by the queue IDs in this group */
+ sz += qg_list[i].num_qs * sizeof(qg_list[i].q_id);
+
+ /* Add the size of the group header */
+ sz += sizeof(qg_list[i]) - sizeof(qg_list[i].q_id);
+
+ /* If the num of queues is even, add 2 bytes of padding */
+ if ((qg_list[i].num_qs % 2) == 0)
+ sz += 2;
+ }
+
+ if (buf_size != sz)
+ return ICE_ERR_PARAM;
+
+do_aq:
+ return ice_aq_send_cmd(hw, &desc, qg_list, buf_size, cd);
+}
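+
+/* Sizing example for the loop above, assuming a 16-bit queue ID: a group
+ * with one queue ID (odd count) contributes the group header plus 2 bytes
+ * for the ID and no padding, while a group with two queue IDs (even count)
+ * contributes the header plus 4 bytes of IDs plus 2 bytes of padding.
+ */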
+
+
+/* End of FW Admin Queue command wrappers */
+
+/**
+ * ice_write_byte - write a byte to a packed context structure
+ * @src_ctx: the context structure to read from
+ * @dest_ctx: the context to be written to
+ * @ce_info: a description of the struct to be filled
+ */
+static void
+ice_write_byte(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+ u8 src_byte, dest_byte, mask;
+ u8 *from, *dest;
+ u16 shift_width;
+
+ /* copy from the next struct field */
+ from = src_ctx + ce_info->offset;
+
+ /* prepare the bits and mask */
+ shift_width = ce_info->lsb % 8;
+ mask = (u8)(BIT(ce_info->width) - 1);
+
+ src_byte = *from;
+ src_byte &= mask;
+
+ /* shift to correct alignment */
+ mask <<= shift_width;
+ src_byte <<= shift_width;
+
+ /* get the current bits from the target bit string */
+ dest = dest_ctx + (ce_info->lsb / 8);
+
+ ice_memcpy(&dest_byte, dest, sizeof(dest_byte), ICE_DMA_TO_NONDMA);
+
+ dest_byte &= ~mask; /* get the bits not changing */
+ dest_byte |= src_byte; /* add in the new bits */
+
+ /* put it all back */
+ ice_memcpy(dest, &dest_byte, sizeof(dest_byte), ICE_NONDMA_TO_DMA);
+}
+
+/**
+ * ice_write_word - write a word to a packed context structure
+ * @src_ctx: the context structure to read from
+ * @dest_ctx: the context to be written to
+ * @ce_info: a description of the struct to be filled
+ */
+static void
+ice_write_word(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+ u16 src_word, mask;
+ __le16 dest_word;
+ u8 *from, *dest;
+ u16 shift_width;
+
+ /* copy from the next struct field */
+ from = src_ctx + ce_info->offset;
+
+ /* prepare the bits and mask */
+ shift_width = ce_info->lsb % 8;
+ mask = BIT(ce_info->width) - 1;
+
+ /* don't swizzle the bits until after the mask because the mask bits
+ * will be in a different bit position on big endian machines
+ */
+ src_word = *(u16 *)from;
+ src_word &= mask;
+
+ /* shift to correct alignment */
+ mask <<= shift_width;
+ src_word <<= shift_width;
+
+ /* get the current bits from the target bit string */
+ dest = dest_ctx + (ce_info->lsb / 8);
+
+ ice_memcpy(&dest_word, dest, sizeof(dest_word), ICE_DMA_TO_NONDMA);
+
+ dest_word &= ~(CPU_TO_LE16(mask)); /* get the bits not changing */
+ dest_word |= CPU_TO_LE16(src_word); /* add in the new bits */
+
+ /* put it all back */
+ ice_memcpy(dest, &dest_word, sizeof(dest_word), ICE_NONDMA_TO_DMA);
+}
+
+/**
+ * ice_write_dword - write a dword to a packed context structure
+ * @src_ctx: the context structure to read from
+ * @dest_ctx: the context to be written to
+ * @ce_info: a description of the struct to be filled
+ */
+static void
+ice_write_dword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+ u32 src_dword, mask;
+ __le32 dest_dword;
+ u8 *from, *dest;
+ u16 shift_width;
+
+ /* copy from the next struct field */
+ from = src_ctx + ce_info->offset;
+
+ /* prepare the bits and mask */
+ shift_width = ce_info->lsb % 8;
+
+ /* if the field width is exactly 32 on an x86 machine, then the shift
+	 * operation will not work because the SHL instruction's count operand
+	 * is masked to 5 bits, so the shift will do nothing
+ */
+ if (ce_info->width < 32)
+ mask = BIT(ce_info->width) - 1;
+ else
+ mask = (u32)~0;
+
+ /* don't swizzle the bits until after the mask because the mask bits
+ * will be in a different bit position on big endian machines
+ */
+ src_dword = *(u32 *)from;
+ src_dword &= mask;
+
+ /* shift to correct alignment */
+ mask <<= shift_width;
+ src_dword <<= shift_width;
+
+ /* get the current bits from the target bit string */
+ dest = dest_ctx + (ce_info->lsb / 8);
+
+ ice_memcpy(&dest_dword, dest, sizeof(dest_dword), ICE_DMA_TO_NONDMA);
+
+ dest_dword &= ~(CPU_TO_LE32(mask)); /* get the bits not changing */
+ dest_dword |= CPU_TO_LE32(src_dword); /* add in the new bits */
+
+ /* put it all back */
+ ice_memcpy(dest, &dest_dword, sizeof(dest_dword), ICE_NONDMA_TO_DMA);
+}
+
+/**
+ * ice_write_qword - write a qword to a packed context structure
+ * @src_ctx: the context structure to read from
+ * @dest_ctx: the context to be written to
+ * @ce_info: a description of the struct to be filled
+ */
+static void
+ice_write_qword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+ u64 src_qword, mask;
+ __le64 dest_qword;
+ u8 *from, *dest;
+ u16 shift_width;
+
+ /* copy from the next struct field */
+ from = src_ctx + ce_info->offset;
+
+ /* prepare the bits and mask */
+ shift_width = ce_info->lsb % 8;
+
+ /* if the field width is exactly 64 on an x86 machine, then the shift
+	 * operation will not work because the SHL instruction's count operand
+	 * is masked to 6 bits, so the shift will do nothing
+ */
+ if (ce_info->width < 64)
+ mask = BIT_ULL(ce_info->width) - 1;
+ else
+ mask = (u64)~0;
+
+ /* don't swizzle the bits until after the mask because the mask bits
+ * will be in a different bit position on big endian machines
+ */
+ src_qword = *(u64 *)from;
+ src_qword &= mask;
+
+ /* shift to correct alignment */
+ mask <<= shift_width;
+ src_qword <<= shift_width;
+
+ /* get the current bits from the target bit string */
+ dest = dest_ctx + (ce_info->lsb / 8);
+
+ ice_memcpy(&dest_qword, dest, sizeof(dest_qword), ICE_DMA_TO_NONDMA);
+
+ dest_qword &= ~(CPU_TO_LE64(mask)); /* get the bits not changing */
+ dest_qword |= CPU_TO_LE64(src_qword); /* add in the new bits */
+
+ /* put it all back */
+ ice_memcpy(dest, &dest_qword, sizeof(dest_qword), ICE_NONDMA_TO_DMA);
+}
+
+/**
+ * ice_set_ctx - set context bits in packed structure
+ * @src_ctx: pointer to a generic non-packed context structure
+ * @dest_ctx: pointer to memory for the packed structure
+ * @ce_info: a description of the structure to be transformed
+ */
+enum ice_status
+ice_set_ctx(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+ int f;
+
+ for (f = 0; ce_info[f].width; f++) {
+ /* We have to deal with each element of the FW response
+ * using the correct size so that we are correct regardless
+ * of the endianness of the machine.
+ */
+ switch (ce_info[f].size_of) {
+ case sizeof(u8):
+ ice_write_byte(src_ctx, dest_ctx, &ce_info[f]);
+ break;
+ case sizeof(u16):
+ ice_write_word(src_ctx, dest_ctx, &ce_info[f]);
+ break;
+ case sizeof(u32):
+ ice_write_dword(src_ctx, dest_ctx, &ce_info[f]);
+ break;
+ case sizeof(u64):
+ ice_write_qword(src_ctx, dest_ctx, &ce_info[f]);
+ break;
+ default:
+ return ICE_ERR_INVAL_SIZE;
+ }
+ }
+
+ return ICE_SUCCESS;
+}
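+
+/* Illustrative sketch of a descriptor table that drives ice_set_ctx(); the
+ * example struct, field widths and bit positions are assumptions, not a
+ * real HW layout. Each entry names where a field lives in the unpacked
+ * struct (offset/size_of) and where it lands in the packed buffer
+ * (lsb/width).
+ */
+#if 0
+struct example_ctx {
+	u16 head;	/* packed at bits 0..12 */
+	u8 ena;		/* packed at bit 13 */
+};
+
+static const struct ice_ctx_ele example_ctx_info[] = {
+	{ .offset = offsetof(struct example_ctx, head),
+	  .size_of = sizeof(u16), .width = 13, .lsb = 0 },
+	{ .offset = offsetof(struct example_ctx, ena),
+	  .size_of = sizeof(u8), .width = 1, .lsb = 13 },
+	{ 0 } /* width == 0 terminates the loop in ice_set_ctx() */
+};
+#endif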
+
+
+
+
+
+/**
+ * ice_ena_vsi_txq
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: tc number
+ * @num_qgrps: Number of added queue groups
+ * @buf: list of queue groups to be added
+ * @buf_size: size of buffer for indirect command
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function adds one LAN Tx queue.
+ */
+enum ice_status
+ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_qgrps,
+ struct ice_aqc_add_tx_qgrp *buf, u16 buf_size,
+ struct ice_sq_cd *cd)
+{
+ struct ice_aqc_txsched_elem_data node = { 0 };
+ struct ice_sched_node *parent;
+ enum ice_status status;
+ struct ice_hw *hw;
+
+ if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
+ return ICE_ERR_CFG;
+
+ if (num_qgrps > 1 || buf->num_txqs > 1)
+ return ICE_ERR_MAX_LIMIT;
+
+ hw = pi->hw;
+
+ if (!ice_is_vsi_valid(hw, vsi_handle))
+ return ICE_ERR_PARAM;
+
+ ice_acquire_lock(&pi->sched_lock);
+
+ /* find a parent node */
+ parent = ice_sched_get_free_qparent(pi, vsi_handle, tc,
+ ICE_SCHED_NODE_OWNER_LAN);
+ if (!parent) {
+ status = ICE_ERR_PARAM;
+ goto ena_txq_exit;
+ }
+
+ buf->parent_teid = parent->info.node_teid;
+ node.parent_teid = parent->info.node_teid;
+	/* Mark the values in the "generic" section as valid. The default
+	 * value in the "generic" section is zero. This means that:
+	 * - Scheduling mode is Bytes Per Second (BPS), indicated by Bit 0.
+	 * - 0 priority among siblings, indicated by Bits 1-3.
+	 * - WFQ, indicated by Bit 4.
+	 * - 0 Adjustment value is used in PSM credit update flow, indicated by
+	 *   Bits 5-6.
+	 * - Bit 7 is reserved.
+	 * Without setting the generic section as valid in valid_sections, the
+	 * Admin Q command will fail with error code ICE_AQ_RC_EINVAL.
+	 */
+ buf->txqs[0].info.valid_sections = ICE_AQC_ELEM_VALID_GENERIC;
+
+	/* add the LAN Tx queue */
+ status = ice_aq_add_lan_txq(hw, num_qgrps, buf, buf_size, cd);
+ if (status != ICE_SUCCESS)
+ goto ena_txq_exit;
+
+ node.node_teid = buf->txqs[0].q_teid;
+ node.data.elem_type = ICE_AQC_ELEM_TYPE_LEAF;
+
+	/* add a leaf node into the scheduler tree queue layer */
+ status = ice_sched_add_node(pi, hw->num_tx_sched_layers - 1, &node);
+
+ena_txq_exit:
+ ice_release_lock(&pi->sched_lock);
+ return status;
+}
+
+/**
+ * ice_dis_vsi_txq
+ * @pi: port information structure
+ * @num_queues: number of queues
+ * @q_ids: pointer to the q_id array
+ * @q_teids: pointer to queue node teids
+ * @rst_src: if called due to reset, specifies the rst source
+ * @vmvf_num: the relative vm or vf number that is undergoing the reset
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function removes queues and their corresponding nodes in SW DB
+ */
+enum ice_status
+ice_dis_vsi_txq(struct ice_port_info *pi, u8 num_queues, u16 *q_ids,
+ u32 *q_teids, enum ice_disq_rst_src rst_src, u16 vmvf_num,
+ struct ice_sq_cd *cd)
+{
+ enum ice_status status = ICE_ERR_DOES_NOT_EXIST;
+ struct ice_aqc_dis_txq_item qg_list;
+ u16 i;
+
+ if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
+ return ICE_ERR_CFG;
+
+	/* If the queue is already disabled but the disable queue command still
+	 * has to be sent to complete the VF reset, then call ice_aq_dis_lan_txq
+	 * without any queue information
+	 */
+
+ if (!num_queues && rst_src)
+ return ice_aq_dis_lan_txq(pi->hw, 0, NULL, 0, rst_src, vmvf_num,
+ NULL);
+
+ ice_acquire_lock(&pi->sched_lock);
+
+ for (i = 0; i < num_queues; i++) {
+ struct ice_sched_node *node;
+
+ node = ice_sched_find_node_by_teid(pi->root, q_teids[i]);
+ if (!node)
+ continue;
+ qg_list.parent_teid = node->info.parent_teid;
+ qg_list.num_qs = 1;
+ qg_list.q_id[0] = CPU_TO_LE16(q_ids[i]);
+ status = ice_aq_dis_lan_txq(pi->hw, 1, &qg_list,
+ sizeof(qg_list), rst_src, vmvf_num,
+ cd);
+
+ if (status != ICE_SUCCESS)
+ break;
+ ice_free_sched_node(pi, node);
+ }
+ ice_release_lock(&pi->sched_lock);
+ return status;
+}
+
+/**
+ * ice_cfg_vsi_qs - configure the new/existing VSI queues
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: TC bitmap
+ * @maxqs: max queues array per TC
+ * @owner: LAN or RDMA
+ *
+ * This function adds/updates the VSI queues per TC.
+ */
+static enum ice_status
+ice_cfg_vsi_qs(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
+ u16 *maxqs, u8 owner)
+{
+ enum ice_status status = ICE_SUCCESS;
+ u8 i;
+
+ if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
+ return ICE_ERR_CFG;
+
+ if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+ return ICE_ERR_PARAM;
+
+ ice_acquire_lock(&pi->sched_lock);
+
+ for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) {
+ /* configuration is possible only if TC node is present */
+ if (!ice_sched_get_tc_node(pi, i))
+ continue;
+
+ status = ice_sched_cfg_vsi(pi, vsi_handle, i, maxqs[i], owner,
+ ice_is_tc_ena(tc_bitmap, i));
+ if (status)
+ break;
+ }
+
+ ice_release_lock(&pi->sched_lock);
+ return status;
+}
+
+/**
+ * ice_cfg_vsi_lan - configure VSI lan queues
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: TC bitmap
+ * @max_lanqs: max lan queues array per TC
+ *
+ * This function adds/updates the VSI lan queues per TC.
+ */
+enum ice_status
+ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
+ u16 *max_lanqs)
+{
+ return ice_cfg_vsi_qs(pi, vsi_handle, tc_bitmap, max_lanqs,
+ ICE_SCHED_NODE_OWNER_LAN);
+}
+
+
+
+
+
+/**
+ * ice_replay_pre_init - replay pre initialization
+ * @hw: pointer to the hw struct
+ *
+ * Initializes required config data for VSI, FD, ACL, and RSS before replay.
+ */
+static enum ice_status ice_replay_pre_init(struct ice_hw *hw)
+{
+ struct ice_switch_info *sw = hw->switch_info;
+ u8 i;
+
+ /* Delete old entries from replay filter list head if there is any */
+ ice_rm_all_sw_replay_rule_info(hw);
+	/* At the start of replay, move entries into the replay_rules list;
+	 * this allows rule entries to be added back to the filt_rules list,
+	 * which is the operational list.
+	 */
+ for (i = 0; i < ICE_SW_LKUP_LAST; i++)
+ LIST_REPLACE_INIT(&sw->recp_list[i].filt_rules,
+ &sw->recp_list[i].filt_replay_rules);
+
+ return ICE_SUCCESS;
+}
+
+/**
+ * ice_replay_vsi - replay VSI configuration
+ * @hw: pointer to the hw struct
+ * @vsi_handle: driver VSI handle
+ *
+ * Restore all VSI configuration after reset. It is required to call this
+ * function with the main VSI first.
+ */
+enum ice_status ice_replay_vsi(struct ice_hw *hw, u16 vsi_handle)
+{
+ enum ice_status status;
+
+ if (!ice_is_vsi_valid(hw, vsi_handle))
+ return ICE_ERR_PARAM;
+
+ /* Replay pre-initialization if there is any */
+ if (vsi_handle == ICE_MAIN_VSI_HANDLE) {
+ status = ice_replay_pre_init(hw);
+ if (status)
+ return status;
+ }
+
+ /* Replay per VSI all filters */
+ status = ice_replay_vsi_all_fltr(hw, vsi_handle);
+ return status;
+}
+
+/**
+ * ice_replay_post - post replay configuration cleanup
+ * @hw: pointer to the hw struct
+ *
+ * Post replay cleanup.
+ */
+void ice_replay_post(struct ice_hw *hw)
+{
+ /* Delete old entries from replay filter list head */
+ ice_rm_all_sw_replay_rule_info(hw);
+}
+
+/**
+ * ice_stat_update40 - read 40 bit stat from the chip and update stat values
+ * @hw: ptr to the hardware info
+ * @hireg: high 32 bit HW register to read from
+ * @loreg: low 32 bit HW register to read from
+ * @prev_stat_loaded: bool to specify if previous stats are loaded
+ * @prev_stat: ptr to previous loaded stat value
+ * @cur_stat: ptr to current stat value
+ */
+void
+ice_stat_update40(struct ice_hw *hw, u32 hireg, u32 loreg,
+ bool prev_stat_loaded, u64 *prev_stat, u64 *cur_stat)
+{
+ u64 new_data;
+
+ new_data = rd32(hw, loreg);
+ new_data |= ((u64)(rd32(hw, hireg) & 0xFFFF)) << 32;
+
+	/* device stats are not reset at PFR, so they likely will not be
+	 * zeroed when the driver starts. Save the first values read and use
+	 * them as offsets to be subtracted from the raw values in order to
+	 * report stats that count from zero.
+	 */
+ if (!prev_stat_loaded)
+ *prev_stat = new_data;
+ if (new_data >= *prev_stat)
+ *cur_stat = new_data - *prev_stat;
+ else
+ /* to manage the potential roll-over */
+ *cur_stat = (new_data + BIT_ULL(40)) - *prev_stat;
+ *cur_stat &= 0xFFFFFFFFFFULL;
+}
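+
+/* Roll-over example for the 40-bit counter above: with *prev_stat at
+ * 0xFFFFFFFFF8 and a new raw reading of 0x10, the counter has wrapped, so
+ * *cur_stat = (0x10 + 2^40) - 0xFFFFFFFFF8 = 0x18.
+ */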
+
+/**
+ * ice_stat_update32 - read 32 bit stat from the chip and update stat values
+ * @hw: ptr to the hardware info
+ * @reg: HW register to read from
+ * @prev_stat_loaded: bool to specify if previous stats are loaded
+ * @prev_stat: ptr to previous loaded stat value
+ * @cur_stat: ptr to current stat value
+ */
+void
+ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
+ u64 *prev_stat, u64 *cur_stat)
+{
+ u32 new_data;
+
+ new_data = rd32(hw, reg);
+
+	/* device stats are not reset at PFR, so they likely will not be
+	 * zeroed when the driver starts. Save the first values read and use
+	 * them as offsets to be subtracted from the raw values in order to
+	 * report stats that count from zero.
+	 */
+ if (!prev_stat_loaded)
+ *prev_stat = new_data;
+ if (new_data >= *prev_stat)
+ *cur_stat = new_data - *prev_stat;
+ else
+ /* to manage the potential roll-over */
+ *cur_stat = (new_data + BIT_ULL(32)) - *prev_stat;
+}
+
diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
new file mode 100644
index 0000000..fc2870c
--- /dev/null
+++ b/drivers/net/ice/base/ice_common.h
@@ -0,0 +1,159 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_COMMON_H_
+#define _ICE_COMMON_H_
+
+#include "ice_type.h"
+
+#include "virtchnl.h"
+#include "ice_switch.h"
+
+enum ice_status ice_nvm_validate_checksum(struct ice_hw *hw);
+
+void
+ice_debug_cq(struct ice_hw *hw, u32 mask, void *desc, void *buf, u16 buf_len);
+enum ice_status ice_init_hw(struct ice_hw *hw);
+void ice_deinit_hw(struct ice_hw *hw);
+enum ice_status ice_check_reset(struct ice_hw *hw);
+enum ice_status ice_reset(struct ice_hw *hw, enum ice_reset_req req);
+
+enum ice_status ice_init_all_ctrlq(struct ice_hw *hw);
+void ice_shutdown_all_ctrlq(struct ice_hw *hw);
+enum ice_status
+ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+ struct ice_rq_event_info *e, u16 *pending);
+enum ice_status
+ice_get_link_status(struct ice_port_info *pi, bool *link_up);
+enum ice_status
+ice_update_link_info(struct ice_port_info *pi);
+enum ice_status
+ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
+ enum ice_aq_res_access_type access, u32 timeout);
+void ice_release_res(struct ice_hw *hw, enum ice_aq_res_ids res);
+enum ice_status
+ice_aq_alloc_free_res(struct ice_hw *hw, u16 num_entries,
+ struct ice_aqc_alloc_free_res_elem *buf, u16 buf_size,
+ enum ice_adminq_opc opc, struct ice_sq_cd *cd);
+enum ice_status ice_init_nvm(struct ice_hw *hw);
+enum ice_status ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data);
+enum ice_status
+ice_read_sr_buf(struct ice_hw *hw, u16 offset, u16 *words, u16 *data);
+enum ice_status
+ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+ struct ice_aq_desc *desc, void *buf, u16 buf_size,
+ struct ice_sq_cd *cd);
+void ice_clear_pxe_mode(struct ice_hw *hw);
+
+enum ice_status ice_get_caps(struct ice_hw *hw);
+
+
+
+#if defined(FPGA_SUPPORT) || defined(CVL_A0_SUPPORT)
+void ice_dev_onetime_setup(struct ice_hw *hw);
+#endif /* FPGA_SUPPORT || CVL_A0_SUPPORT */
+
+
+enum ice_status
+ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
+ u32 rxq_index);
+#if !defined(NO_UNUSED_CTX_CODE) || defined(AE_DRIVER)
+enum ice_status ice_clear_rxq_ctx(struct ice_hw *hw, u32 rxq_index);
+enum ice_status
+ice_clear_tx_cmpltnq_ctx(struct ice_hw *hw, u32 tx_cmpltnq_index);
+enum ice_status
+ice_write_tx_cmpltnq_ctx(struct ice_hw *hw,
+ struct ice_tx_cmpltnq_ctx *tx_cmpltnq_ctx,
+ u32 tx_cmpltnq_index);
+enum ice_status
+ice_clear_tx_drbell_q_ctx(struct ice_hw *hw, u32 tx_drbell_q_index);
+enum ice_status
+ice_write_tx_drbell_q_ctx(struct ice_hw *hw,
+ struct ice_tx_drbell_q_ctx *tx_drbell_q_ctx,
+ u32 tx_drbell_q_index);
+#endif /* !NO_UNUSED_CTX_CODE || AE_DRIVER */
+
+enum ice_status
+ice_aq_get_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type, u8 *lut,
+ u16 lut_size);
+enum ice_status
+ice_aq_set_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type, u8 *lut,
+ u16 lut_size);
+enum ice_status
+ice_aq_get_rss_key(struct ice_hw *hw, u16 vsi_handle,
+ struct ice_aqc_get_set_rss_keys *keys);
+enum ice_status
+ice_aq_set_rss_key(struct ice_hw *hw, u16 vsi_handle,
+ struct ice_aqc_get_set_rss_keys *keys);
+
+bool ice_check_sq_alive(struct ice_hw *hw, struct ice_ctl_q_info *cq);
+enum ice_status ice_aq_q_shutdown(struct ice_hw *hw, bool unloading);
+void ice_fill_dflt_direct_cmd_desc(struct ice_aq_desc *desc, u16 opcode);
+extern const struct ice_ctx_ele ice_tlan_ctx_info[];
+enum ice_status
+ice_set_ctx(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info);
+enum ice_status
+ice_aq_send_cmd(struct ice_hw *hw, struct ice_aq_desc *desc,
+ void *buf, u16 buf_size, struct ice_sq_cd *cd);
+enum ice_status ice_aq_get_fw_ver(struct ice_hw *hw, struct ice_sq_cd *cd);
+
+enum ice_status
+ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode,
+ struct ice_aqc_get_phy_caps_data *caps,
+ struct ice_sq_cd *cd);
+void
+ice_update_phy_type(u64 *phy_type_low, u16 link_speeds_bitmap);
+enum ice_status
+ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags,
+ struct ice_sq_cd *cd);
+
+enum ice_status ice_clear_pf_cfg(struct ice_hw *hw);
+enum ice_status
+ice_aq_set_phy_cfg(struct ice_hw *hw, u8 lport,
+ struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd *cd);
+enum ice_status
+ice_set_fc(struct ice_port_info *pi, u8 *aq_failures,
+ bool ena_auto_link_update);
+
+enum ice_status
+ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link,
+ struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
+ struct ice_link_status *link, struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_set_event_mask(struct ice_hw *hw, u8 port_num, u16 mask,
+ struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_set_mac_loopback(struct ice_hw *hw, bool ena_lpbk, struct ice_sq_cd *cd);
+
+
+enum ice_status
+ice_aq_set_port_id_led(struct ice_port_info *pi, bool is_orig_mode,
+ struct ice_sq_cd *cd);
+
+
+
+
+enum ice_status
+ice_dis_vsi_txq(struct ice_port_info *pi, u8 num_queues, u16 *q_ids,
+ u32 *q_teids, enum ice_disq_rst_src rst_src, u16 vmvf_num,
+ struct ice_sq_cd *cmd_details);
+enum ice_status
+ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
+ u16 *max_lanqs);
+enum ice_status
+ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_qgrps,
+ struct ice_aqc_add_tx_qgrp *buf, u16 buf_size,
+ struct ice_sq_cd *cd);
+enum ice_status ice_replay_vsi(struct ice_hw *hw, u16 vsi_handle);
+void ice_replay_post(struct ice_hw *hw);
+void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf);
+void
+ice_stat_update40(struct ice_hw *hw, u32 hireg, u32 loreg,
+ bool prev_stat_loaded, u64 *prev_stat, u64 *cur_stat);
+void
+ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
+ u64 *prev_stat, u64 *cur_stat);
+#endif /* _ICE_COMMON_H_ */
diff --git a/drivers/net/ice/base/ice_controlq.c b/drivers/net/ice/base/ice_controlq.c
new file mode 100644
index 0000000..cbc4cb4
--- /dev/null
+++ b/drivers/net/ice/base/ice_controlq.c
@@ -0,0 +1,1098 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_common.h"
+
+
+#define ICE_CQ_INIT_REGS(qinfo, prefix) \
+do { \
+ (qinfo)->sq.head = prefix##_ATQH; \
+ (qinfo)->sq.tail = prefix##_ATQT; \
+ (qinfo)->sq.len = prefix##_ATQLEN; \
+ (qinfo)->sq.bah = prefix##_ATQBAH; \
+ (qinfo)->sq.bal = prefix##_ATQBAL; \
+ (qinfo)->sq.len_mask = prefix##_ATQLEN_ATQLEN_M; \
+ (qinfo)->sq.len_ena_mask = prefix##_ATQLEN_ATQENABLE_M; \
+ (qinfo)->sq.head_mask = prefix##_ATQH_ATQH_M; \
+ (qinfo)->rq.head = prefix##_ARQH; \
+ (qinfo)->rq.tail = prefix##_ARQT; \
+ (qinfo)->rq.len = prefix##_ARQLEN; \
+ (qinfo)->rq.bah = prefix##_ARQBAH; \
+ (qinfo)->rq.bal = prefix##_ARQBAL; \
+ (qinfo)->rq.len_mask = prefix##_ARQLEN_ARQLEN_M; \
+ (qinfo)->rq.len_ena_mask = prefix##_ARQLEN_ARQENABLE_M; \
+ (qinfo)->rq.head_mask = prefix##_ARQH_ARQH_M; \
+} while (0)
+
+/**
+ * ice_adminq_init_regs - Initialize AdminQ registers
+ * @hw: pointer to the hardware structure
+ *
+ * This assumes the alloc_sq and alloc_rq functions have already been called
+ */
+static void ice_adminq_init_regs(struct ice_hw *hw)
+{
+ struct ice_ctl_q_info *cq = &hw->adminq;
+
+ ICE_CQ_INIT_REGS(cq, PF_FW);
+}
+
+/**
+ * ice_mailbox_init_regs - Initialize Mailbox registers
+ * @hw: pointer to the hardware structure
+ *
+ * This assumes the alloc_sq and alloc_rq functions have already been called
+ */
+static void ice_mailbox_init_regs(struct ice_hw *hw)
+{
+ struct ice_ctl_q_info *cq = &hw->mailboxq;
+
+ ICE_CQ_INIT_REGS(cq, PF_MBX);
+}
+
+/**
+ * ice_check_sq_alive
+ * @hw: pointer to the hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * Returns true if the send queue is enabled, else false.
+ */
+bool ice_check_sq_alive(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+ /* check both queue-length and queue-enable fields */
+ if (cq->sq.len && cq->sq.len_mask && cq->sq.len_ena_mask)
+ return (rd32(hw, cq->sq.len) & (cq->sq.len_mask |
+ cq->sq.len_ena_mask)) ==
+ (cq->num_sq_entries | cq->sq.len_ena_mask);
+
+ return false;
+}
+
+/**
+ * ice_alloc_ctrlq_sq_ring - Allocate Control Transmit Queue (ATQ) rings
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ */
+static enum ice_status
+ice_alloc_ctrlq_sq_ring(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+ size_t size = cq->num_sq_entries * sizeof(struct ice_aq_desc);
+
+ cq->sq.desc_buf.va = ice_alloc_dma_mem(hw, &cq->sq.desc_buf, size);
+ if (!cq->sq.desc_buf.va)
+ return ICE_ERR_NO_MEMORY;
+
+ cq->sq.cmd_buf = ice_calloc(hw, cq->num_sq_entries,
+ sizeof(struct ice_sq_cd));
+ if (!cq->sq.cmd_buf) {
+ ice_free_dma_mem(hw, &cq->sq.desc_buf);
+ return ICE_ERR_NO_MEMORY;
+ }
+
+ return ICE_SUCCESS;
+}
+
+/**
+ * ice_alloc_ctrlq_rq_ring - Allocate Control Receive Queue (ARQ) rings
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ */
+static enum ice_status
+ice_alloc_ctrlq_rq_ring(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+ size_t size = cq->num_rq_entries * sizeof(struct ice_aq_desc);
+
+ cq->rq.desc_buf.va = ice_alloc_dma_mem(hw, &cq->rq.desc_buf, size);
+ if (!cq->rq.desc_buf.va)
+ return ICE_ERR_NO_MEMORY;
+ return ICE_SUCCESS;
+}
+
+/**
+ * ice_free_cq_ring - Free control queue ring
+ * @hw: pointer to the hardware structure
+ * @ring: pointer to the specific control queue ring
+ *
+ * This assumes the posted buffers have already been cleaned
+ * and de-allocated
+ */
+static void ice_free_cq_ring(struct ice_hw *hw, struct ice_ctl_q_ring *ring)
+{
+ ice_free_dma_mem(hw, &ring->desc_buf);
+}
+
+/**
+ * ice_alloc_rq_bufs - Allocate pre-posted buffers for the ARQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ */
+static enum ice_status
+ice_alloc_rq_bufs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+ int i;
+
+ /* We'll be allocating the buffer info memory first, then we can
+ * allocate the mapped buffers for the event processing
+ */
+ cq->rq.dma_head = ice_calloc(hw, cq->num_rq_entries,
+ sizeof(cq->rq.desc_buf));
+ if (!cq->rq.dma_head)
+ return ICE_ERR_NO_MEMORY;
+ cq->rq.r.rq_bi = (struct ice_dma_mem *)cq->rq.dma_head;
+
+ /* allocate the mapped buffers */
+ for (i = 0; i < cq->num_rq_entries; i++) {
+ struct ice_aq_desc *desc;
+ struct ice_dma_mem *bi;
+
+ bi = &cq->rq.r.rq_bi[i];
+ bi->va = ice_alloc_dma_mem(hw, bi, cq->rq_buf_size);
+ if (!bi->va)
+ goto unwind_alloc_rq_bufs;
+
+ /* now configure the descriptors for use */
+ desc = ICE_CTL_Q_DESC(cq->rq, i);
+
+ desc->flags = CPU_TO_LE16(ICE_AQ_FLAG_BUF);
+ if (cq->rq_buf_size > ICE_AQ_LG_BUF)
+ desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_LB);
+ desc->opcode = 0;
+ /* This is in accordance with Admin queue design, there is no
+ * register for buffer size configuration
+ */
+ desc->datalen = CPU_TO_LE16(bi->size);
+ desc->retval = 0;
+ desc->cookie_high = 0;
+ desc->cookie_low = 0;
+ desc->params.generic.addr_high =
+ CPU_TO_LE32(ICE_HI_DWORD(bi->pa));
+ desc->params.generic.addr_low =
+ CPU_TO_LE32(ICE_LO_DWORD(bi->pa));
+ desc->params.generic.param0 = 0;
+ desc->params.generic.param1 = 0;
+ }
+ return ICE_SUCCESS;
+
+unwind_alloc_rq_bufs:
+ /* don't try to free the one that failed... */
+ i--;
+ for (; i >= 0; i--)
+ ice_free_dma_mem(hw, &cq->rq.r.rq_bi[i]);
+ ice_free(hw, cq->rq.dma_head);
+
+ return ICE_ERR_NO_MEMORY;
+}
+
+/**
+ * ice_alloc_sq_bufs - Allocate empty buffer structs for the ATQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ */
+static enum ice_status
+ice_alloc_sq_bufs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+ int i;
+
+ /* No mapped memory needed yet, just the buffer info structures */
+ cq->sq.dma_head = ice_calloc(hw, cq->num_sq_entries,
+ sizeof(cq->sq.desc_buf));
+ if (!cq->sq.dma_head)
+ return ICE_ERR_NO_MEMORY;
+ cq->sq.r.sq_bi = (struct ice_dma_mem *)cq->sq.dma_head;
+
+ /* allocate the mapped buffers */
+ for (i = 0; i < cq->num_sq_entries; i++) {
+ struct ice_dma_mem *bi;
+
+ bi = &cq->sq.r.sq_bi[i];
+ bi->va = ice_alloc_dma_mem(hw, bi, cq->sq_buf_size);
+ if (!bi->va)
+ goto unwind_alloc_sq_bufs;
+ }
+ return ICE_SUCCESS;
+
+unwind_alloc_sq_bufs:
+ /* don't try to free the one that failed... */
+ i--;
+ for (; i >= 0; i--)
+ ice_free_dma_mem(hw, &cq->sq.r.sq_bi[i]);
+ ice_free(hw, cq->sq.dma_head);
+
+ return ICE_ERR_NO_MEMORY;
+}
+
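+/**
+ * ice_cfg_cq_regs - program the HW registers for a control queue ring
+ * @hw: pointer to the hardware structure
+ * @ring: pointer to the specific control queue ring
+ * @num_entries: number of descriptors in the ring
+ *
+ * Clears head and tail, writes the ring length (with the enable bit set)
+ * and the descriptor base address, then reads one register back to verify
+ * the configuration took effect.
+ */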
+static enum ice_status
+ice_cfg_cq_regs(struct ice_hw *hw, struct ice_ctl_q_ring *ring, u16 num_entries)
+{
+ /* Clear Head and Tail */
+ wr32(hw, ring->head, 0);
+ wr32(hw, ring->tail, 0);
+
+ /* set starting point */
+ wr32(hw, ring->len, (num_entries | ring->len_ena_mask));
+ wr32(hw, ring->bal, ICE_LO_DWORD(ring->desc_buf.pa));
+ wr32(hw, ring->bah, ICE_HI_DWORD(ring->desc_buf.pa));
+
+ /* Check one register to verify that config was applied */
+ if (rd32(hw, ring->bal) != ICE_LO_DWORD(ring->desc_buf.pa))
+ return ICE_ERR_AQ_ERROR;
+
+ return ICE_SUCCESS;
+}
+
+/**
+ * ice_cfg_sq_regs - configure Control ATQ registers
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * Configure base address and length registers for the transmit queue
+ */
+static enum ice_status
+ice_cfg_sq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+ return ice_cfg_cq_regs(hw, &cq->sq, cq->num_sq_entries);
+}
+
+/**
+ * ice_cfg_rq_regs - configure Control ARQ registers
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * Configure base address and length registers for the receive (event) queue
+ */
+static enum ice_status
+ice_cfg_rq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+ enum ice_status status;
+
+ status = ice_cfg_cq_regs(hw, &cq->rq, cq->num_rq_entries);
+ if (status)
+ return status;
+
+ /* Update tail in the HW to post pre-allocated buffers */
+ wr32(hw, cq->rq.tail, (u32)(cq->num_rq_entries - 1));
+
+ return ICE_SUCCESS;
+}
+
+/**
+ * ice_init_sq - main initialization routine for Control ATQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * This is the main initialization routine for the Control Send Queue
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the cq->structure:
+ * - cq->num_sq_entries
+ * - cq->sq_buf_size
+ *
+ * Do *NOT* hold the lock when calling this as the memory allocation routines
+ * called are not going to be atomic context safe
+ */
+static enum ice_status ice_init_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+ enum ice_status ret_code;
+
+ if (cq->sq.count > 0) {
+ /* queue already initialized */
+ ret_code = ICE_ERR_NOT_READY;
+ goto init_ctrlq_exit;
+ }
+
+ /* verify input for valid configuration */
+ if (!cq->num_sq_entries || !cq->sq_buf_size) {
+ ret_code = ICE_ERR_CFG;
+ goto init_ctrlq_exit;
+ }
+
+ cq->sq.next_to_use = 0;
+ cq->sq.next_to_clean = 0;
+
+ /* allocate the ring memory */
+ ret_code = ice_alloc_ctrlq_sq_ring(hw, cq);
+ if (ret_code)
+ goto init_ctrlq_exit;
+
+ /* allocate buffers in the rings */
+ ret_code = ice_alloc_sq_bufs(hw, cq);
+ if (ret_code)
+ goto init_ctrlq_free_rings;
+
+ /* initialize base registers */
+ ret_code = ice_cfg_sq_regs(hw, cq);
+ if (ret_code)
+ goto init_ctrlq_free_rings;
+
+ /* success! */
+ cq->sq.count = cq->num_sq_entries;
+ goto init_ctrlq_exit;
+
+init_ctrlq_free_rings:
+ ice_free_cq_ring(hw, &cq->sq);
+
+init_ctrlq_exit:
+ return ret_code;
+}
+
+/**
+ * ice_init_rq - initialize ARQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * The main initialization routine for the Admin Receive (Event) Queue.
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the cq->structure:
+ * - cq->num_rq_entries
+ * - cq->rq_buf_size
+ *
+ * Do *NOT* hold the lock when calling this as the memory allocation routines
+ * called are not going to be atomic context safe
+ */
+static enum ice_status ice_init_rq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+ enum ice_status ret_code;
+
+ if (cq->rq.count > 0) {
+ /* queue already initialized */
+ ret_code = ICE_ERR_NOT_READY;
+ goto init_ctrlq_exit;
+ }
+
+ /* verify input for valid configuration */
+ if (!cq->num_rq_entries || !cq->rq_buf_size) {
+ ret_code = ICE_ERR_CFG;
+ goto init_ctrlq_exit;
+ }
+
+ cq->rq.next_to_use = 0;
+ cq->rq.next_to_clean = 0;
+
+ /* allocate the ring memory */
+ ret_code = ice_alloc_ctrlq_rq_ring(hw, cq);
+ if (ret_code)
+ goto init_ctrlq_exit;
+
+ /* allocate buffers in the rings */
+ ret_code = ice_alloc_rq_bufs(hw, cq);
+ if (ret_code)
+ goto init_ctrlq_free_rings;
+
+ /* initialize base registers */
+ ret_code = ice_cfg_rq_regs(hw, cq);
+ if (ret_code)
+ goto init_ctrlq_free_rings;
+
+ /* success! */
+ cq->rq.count = cq->num_rq_entries;
+ goto init_ctrlq_exit;
+
+init_ctrlq_free_rings:
+ ice_free_cq_ring(hw, &cq->rq);
+
+init_ctrlq_exit:
+ return ret_code;
+}
+
+#define ICE_FREE_CQ_BUFS(hw, qi, ring) \
+do { \
+ int i; \
+ /* free descriptors */ \
+ for (i = 0; i < (qi)->num_##ring##_entries; i++) \
+ if ((qi)->ring.r.ring##_bi[i].pa) \
+ ice_free_dma_mem((hw), \
+ &(qi)->ring.r.ring##_bi[i]); \
+ /* free the buffer info list */ \
+ if ((qi)->ring.cmd_buf) \
+ ice_free(hw, (qi)->ring.cmd_buf); \
+ /* free dma head */ \
+ ice_free(hw, (qi)->ring.dma_head); \
+} while (0)
+
+/**
+ * ice_shutdown_sq - shutdown the Control ATQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * The main shutdown routine for the Control Transmit Queue
+ */
+static enum ice_status
+ice_shutdown_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+ enum ice_status ret_code = ICE_SUCCESS;
+
+ ice_acquire_lock(&cq->sq_lock);
+
+ if (!cq->sq.count) {
+ ret_code = ICE_ERR_NOT_READY;
+ goto shutdown_sq_out;
+ }
+
+ /* Stop firmware AdminQ processing */
+ wr32(hw, cq->sq.head, 0);
+ wr32(hw, cq->sq.tail, 0);
+ wr32(hw, cq->sq.len, 0);
+ wr32(hw, cq->sq.bal, 0);
+ wr32(hw, cq->sq.bah, 0);
+
+ cq->sq.count = 0; /* to indicate uninitialized queue */
+
+ /* free ring buffers and the ring itself */
+ ICE_FREE_CQ_BUFS(hw, cq, sq);
+ ice_free_cq_ring(hw, &cq->sq);
+
+shutdown_sq_out:
+ ice_release_lock(&cq->sq_lock);
+ return ret_code;
+}
+
+/**
+ * ice_aq_ver_check - Check the reported AQ API version.
+ * @hw: pointer to the hardware structure
+ *
+ * Checks if the driver should load on a given AQ API version.
+ *
+ * Return: 'true' if the driver should attempt to load, 'false' otherwise.
+ */
+static bool ice_aq_ver_check(struct ice_hw *hw)
+{
+ if (hw->api_maj_ver > EXP_FW_API_VER_MAJOR) {
+ /* Major API version is newer than expected, don't load */
+ ice_warn(hw, "The driver for the device stopped because the NVM image is newer than expected. You must install the most recent version of the network driver.\n");
+ return false;
+ } else if (hw->api_maj_ver == EXP_FW_API_VER_MAJOR) {
+ if (hw->api_min_ver > (EXP_FW_API_VER_MINOR + 2))
+ ice_info(hw, "The driver for the device detected a newer version of the NVM image than expected. Please install the most recent version of the network driver.\n");
+ else if ((hw->api_min_ver + 2) < EXP_FW_API_VER_MINOR)
+ ice_info(hw, "The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.\n");
+ } else {
+ /* Major API version is older than expected, log a warning */
+ ice_info(hw, "The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.\n");
+ }
+ return true;
+}
+
+/**
+ * ice_shutdown_rq - shutdown Control ARQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * The main shutdown routine for the Control Receive Queue
+ */
+static enum ice_status
+ice_shutdown_rq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+ enum ice_status ret_code = ICE_SUCCESS;
+
+ ice_acquire_lock(&cq->rq_lock);
+
+ if (!cq->rq.count) {
+ ret_code = ICE_ERR_NOT_READY;
+ goto shutdown_rq_out;
+ }
+
+ /* Stop Control Queue processing */
+ wr32(hw, cq->rq.head, 0);
+ wr32(hw, cq->rq.tail, 0);
+ wr32(hw, cq->rq.len, 0);
+ wr32(hw, cq->rq.bal, 0);
+ wr32(hw, cq->rq.bah, 0);
+
+ /* set rq.count to 0 to indicate uninitialized queue */
+ cq->rq.count = 0;
+
+ /* free ring buffers and the ring itself */
+ ICE_FREE_CQ_BUFS(hw, cq, rq);
+ ice_free_cq_ring(hw, &cq->rq);
+
+shutdown_rq_out:
+ ice_release_lock(&cq->rq_lock);
+ return ret_code;
+}
+
+/**
+ * ice_init_check_adminq - Check version for Admin Queue to know if it's alive
+ * @hw: pointer to the hardware structure
+ */
+static enum ice_status ice_init_check_adminq(struct ice_hw *hw)
+{
+ struct ice_ctl_q_info *cq = &hw->adminq;
+ enum ice_status status;
+
+ status = ice_aq_get_fw_ver(hw, NULL);
+ if (status)
+ goto init_ctrlq_free_rq;
+
+ if (!ice_aq_ver_check(hw)) {
+ status = ICE_ERR_FW_API_VER;
+ goto init_ctrlq_free_rq;
+ }
+
+ return ICE_SUCCESS;
+
+init_ctrlq_free_rq:
+ if (cq->rq.count) {
+ ice_shutdown_rq(hw, cq);
+ ice_destroy_lock(&cq->rq_lock);
+ }
+ if (cq->sq.count) {
+ ice_shutdown_sq(hw, cq);
+ ice_destroy_lock(&cq->sq_lock);
+ }
+ return status;
+}
+
+/**
+ * ice_init_ctrlq - main initialization routine for any control Queue
+ * @hw: pointer to the hardware structure
+ * @q_type: specific Control queue type
+ *
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the cq->structure:
+ * - cq->num_sq_entries
+ * - cq->num_rq_entries
+ * - cq->rq_buf_size
+ * - cq->sq_buf_size
+ */
+static enum ice_status ice_init_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
+{
+ struct ice_ctl_q_info *cq;
+ enum ice_status ret_code;
+
+ switch (q_type) {
+ case ICE_CTL_Q_ADMIN:
+ ice_adminq_init_regs(hw);
+ cq = &hw->adminq;
+ break;
+ case ICE_CTL_Q_MAILBOX:
+ ice_mailbox_init_regs(hw);
+ cq = &hw->mailboxq;
+ break;
+ default:
+ return ICE_ERR_PARAM;
+ }
+ cq->qtype = q_type;
+
+ /* verify input for valid configuration */
+ if (!cq->num_rq_entries || !cq->num_sq_entries ||
+ !cq->rq_buf_size || !cq->sq_buf_size) {
+ return ICE_ERR_CFG;
+ }
+ ice_init_lock(&cq->sq_lock);
+ ice_init_lock(&cq->rq_lock);
+
+ /* setup SQ command write back timeout */
+ cq->sq_cmd_timeout = ICE_CTL_Q_SQ_CMD_TIMEOUT;
+
+ /* allocate the ATQ */
+ ret_code = ice_init_sq(hw, cq);
+ if (ret_code)
+ goto init_ctrlq_destroy_locks;
+
+ /* allocate the ARQ */
+ ret_code = ice_init_rq(hw, cq);
+ if (ret_code)
+ goto init_ctrlq_free_sq;
+
+ /* success! */
+ return ICE_SUCCESS;
+
+init_ctrlq_free_sq:
+ ice_shutdown_sq(hw, cq);
+init_ctrlq_destroy_locks:
+ ice_destroy_lock(&cq->sq_lock);
+ ice_destroy_lock(&cq->rq_lock);
+ return ret_code;
+}
+
+/**
+ * ice_init_all_ctrlq - main initialization routine for all control queues
+ * @hw: pointer to the hardware structure
+ *
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the cq->structure for all control queues:
+ * - cq->num_sq_entries
+ * - cq->num_rq_entries
+ * - cq->rq_buf_size
+ * - cq->sq_buf_size
+ */
+enum ice_status ice_init_all_ctrlq(struct ice_hw *hw)
+{
+ enum ice_status ret_code;
+
+ /* Init FW admin queue */
+ ret_code = ice_init_ctrlq(hw, ICE_CTL_Q_ADMIN);
+ if (ret_code)
+ return ret_code;
+
+ ret_code = ice_init_check_adminq(hw);
+ if (ret_code)
+ return ret_code;
+ /* Init Mailbox queue */
+ return ice_init_ctrlq(hw, ICE_CTL_Q_MAILBOX);
+}
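+/* Minimal pre-init sketch (illustrative only; the entry counts below are
+ * example values, not ones mandated by this patch):
+ *
+ *	hw->adminq.num_sq_entries = 32;
+ *	hw->adminq.num_rq_entries = 32;
+ *	hw->adminq.sq_buf_size = ICE_AQ_MAX_BUF_LEN;
+ *	hw->adminq.rq_buf_size = ICE_AQ_MAX_BUF_LEN;
+ *	(set the same four fields on hw->mailboxq, then)
+ *	status = ice_init_all_ctrlq(hw);
+ */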
+
+/**
+ * ice_shutdown_ctrlq - shutdown routine for any control queue
+ * @hw: pointer to the hardware structure
+ * @q_type: specific Control queue type
+ */
+static void ice_shutdown_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
+{
+ struct ice_ctl_q_info *cq;
+
+ switch (q_type) {
+ case ICE_CTL_Q_ADMIN:
+ cq = &hw->adminq;
+ if (ice_check_sq_alive(hw, cq))
+ ice_aq_q_shutdown(hw, true);
+ break;
+ case ICE_CTL_Q_MAILBOX:
+ cq = &hw->mailboxq;
+ break;
+ default:
+ return;
+ }
+
+ if (cq->sq.count) {
+ ice_shutdown_sq(hw, cq);
+ ice_destroy_lock(&cq->sq_lock);
+ }
+ if (cq->rq.count) {
+ ice_shutdown_rq(hw, cq);
+ ice_destroy_lock(&cq->rq_lock);
+ }
+}
+
+/**
+ * ice_shutdown_all_ctrlq - shutdown routine for all control queues
+ * @hw: pointer to the hardware structure
+ */
+void ice_shutdown_all_ctrlq(struct ice_hw *hw)
+{
+ /* Shutdown FW admin queue */
+ ice_shutdown_ctrlq(hw, ICE_CTL_Q_ADMIN);
+ /* Shutdown PF-VF Mailbox */
+ ice_shutdown_ctrlq(hw, ICE_CTL_Q_MAILBOX);
+}
+
+/**
+ * ice_clean_sq - cleans Admin send queue (ATQ)
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * returns the number of free desc
+ */
+static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+ struct ice_ctl_q_ring *sq = &cq->sq;
+ u16 ntc = sq->next_to_clean;
+ struct ice_sq_cd *details;
+#if 0
+ struct ice_aq_desc desc_cb;
+#endif
+ struct ice_aq_desc *desc;
+
+ desc = ICE_CTL_Q_DESC(*sq, ntc);
+ details = ICE_CTL_Q_DETAILS(*sq, ntc);
+
+ while (rd32(hw, cq->sq.head) != ntc) {
+ ice_debug(hw, ICE_DBG_AQ_MSG,
+ "ntc %d head %d.\n", ntc, rd32(hw, cq->sq.head));
+#if 0
+ if (details->callback) {
+ ICE_CTL_Q_CALLBACK cb_func =
+ (ICE_CTL_Q_CALLBACK)details->callback;
+ ice_memcpy(&desc_cb, desc, sizeof(desc_cb),
+ ICE_DMA_TO_DMA);
+ cb_func(hw, &desc_cb);
+ }
+#endif
+ ice_memset(desc, 0, sizeof(*desc), ICE_DMA_MEM);
+ ice_memset(details, 0, sizeof(*details), ICE_NONDMA_MEM);
+ ntc++;
+ if (ntc == sq->count)
+ ntc = 0;
+ desc = ICE_CTL_Q_DESC(*sq, ntc);
+ details = ICE_CTL_Q_DETAILS(*sq, ntc);
+ }
+
+ sq->next_to_clean = ntc;
+
+ return ICE_CTL_Q_DESC_UNUSED(sq);
+}
+
+/**
+ * ice_sq_done - check if FW has processed the Admin Send Queue (ATQ)
+ * @hw: pointer to the hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * Returns true if the firmware has processed all descriptors on the
+ * admin send queue. Returns false if there are still requests pending.
+ */
+static bool ice_sq_done(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+ /* AQ designers suggest use of head for better
+ * timing reliability than DD bit
+ */
+ return rd32(hw, cq->sq.head) == cq->sq.next_to_use;
+}
+
+/**
+ * ice_sq_send_cmd - send command to Control Queue (ATQ)
+ * @hw: pointer to the hw struct
+ * @cq: pointer to the specific Control queue
+ * @desc: prefilled descriptor describing the command (non DMA mem)
+ * @buf: buffer to use for indirect commands (or NULL for direct commands)
+ * @buf_size: size of buffer for indirect commands (or 0 for direct commands)
+ * @cd: pointer to command details structure
+ *
+ * This is the main send command routine for the ATQ. It runs the queue,
+ * cleans the queue, etc.
+ */
+enum ice_status
+ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+ struct ice_aq_desc *desc, void *buf, u16 buf_size,
+ struct ice_sq_cd *cd)
+{
+ struct ice_dma_mem *dma_buf = NULL;
+ struct ice_aq_desc *desc_on_ring;
+ bool cmd_completed = false;
+ enum ice_status status = ICE_SUCCESS;
+ struct ice_sq_cd *details;
+ u32 total_delay = 0;
+ u16 retval = 0;
+ u32 val = 0;
+
+ /* if reset is in progress return a soft error */
+ if (hw->reset_ongoing)
+ return ICE_ERR_RESET_ONGOING;
+ ice_acquire_lock(&cq->sq_lock);
+
+ cq->sq_last_status = ICE_AQ_RC_OK;
+
+ if (!cq->sq.count) {
+ ice_debug(hw, ICE_DBG_AQ_MSG,
+ "Control Send queue not initialized.\n");
+ status = ICE_ERR_AQ_EMPTY;
+ goto sq_send_command_error;
+ }
+
+ if ((buf && !buf_size) || (!buf && buf_size)) {
+ status = ICE_ERR_PARAM;
+ goto sq_send_command_error;
+ }
+
+ if (buf) {
+ if (buf_size > cq->sq_buf_size) {
+ ice_debug(hw, ICE_DBG_AQ_MSG,
+ "Invalid buffer size for Control Send queue: %d.\n",
+ buf_size);
+ status = ICE_ERR_INVAL_SIZE;
+ goto sq_send_command_error;
+ }
+
+ desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_BUF);
+ if (buf_size > ICE_AQ_LG_BUF)
+ desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_LB);
+ }
+
+ val = rd32(hw, cq->sq.head);
+ if (val >= cq->num_sq_entries) {
+ ice_debug(hw, ICE_DBG_AQ_MSG,
+ "head overrun at %d in the Control Send Queue ring\n",
+ val);
+ status = ICE_ERR_AQ_EMPTY;
+ goto sq_send_command_error;
+ }
+
+ details = ICE_CTL_Q_DETAILS(cq->sq, cq->sq.next_to_use);
+ if (cd)
+ *details = *cd;
+#if 0
+ /* FIXME: if/when this block gets enabled (when the #if 0
+ * is removed), add braces to both branches of the surrounding
+ * conditional expression. The braces have been removed to
+ * prevent checkpatch complaining.
+ */
+
+ /* If the command details are defined copy the cookie. The
+ * CPU_TO_LE32 is not needed here because the data is ignored
+ * by the FW, only used by the driver
+ */
+ if (details->cookie) {
+ desc->cookie_high =
+ CPU_TO_LE32(ICE_HI_DWORD(details->cookie));
+ desc->cookie_low =
+ CPU_TO_LE32(ICE_LO_DWORD(details->cookie));
+ }
+#endif
+ else
+ ice_memset(details, 0, sizeof(*details), ICE_NONDMA_MEM);
+#if 0
+ /* clear requested flags and then set additional flags if defined */
+ desc->flags &= ~CPU_TO_LE16(details->flags_dis);
+ desc->flags |= CPU_TO_LE16(details->flags_ena);
+
+ if (details->postpone && !details->async) {
+ ice_debug(hw, ICE_DBG_AQ_MSG,
+ "Async flag not set along with postpone flag\n");
+ status = ICE_ERR_PARAM;
+ goto sq_send_command_error;
+ }
+#endif
+
+ /* Call clean and check queue available function to reclaim the
+ * descriptors that were processed by FW/MBX; the function returns the
+ * number of desc available. The clean function called here could be
+ * called in a separate thread in case of asynchronous completions.
+ */
+ if (ice_clean_sq(hw, cq) == 0) {
+ ice_debug(hw, ICE_DBG_AQ_MSG,
+ "Error: Control Send Queue is full.\n");
+ status = ICE_ERR_AQ_FULL;
+ goto sq_send_command_error;
+ }
+
+ /* initialize the temp desc pointer with the right desc */
+ desc_on_ring = ICE_CTL_Q_DESC(cq->sq, cq->sq.next_to_use);
+
+ /* if the desc is available copy the temp desc to the right place */
+ ice_memcpy(desc_on_ring, desc, sizeof(*desc_on_ring),
+ ICE_NONDMA_TO_DMA);
+
+ /* if buf is not NULL assume indirect command */
+ if (buf) {
+ dma_buf = &cq->sq.r.sq_bi[cq->sq.next_to_use];
+ /* copy the user buf into the respective DMA buf */
+ ice_memcpy(dma_buf->va, buf, buf_size, ICE_NONDMA_TO_DMA);
+ desc_on_ring->datalen = CPU_TO_LE16(buf_size);
+
+ /* Update the address values in the desc with the pa value
+ * for respective buffer
+ */
+ desc_on_ring->params.generic.addr_high =
+ CPU_TO_LE32(ICE_HI_DWORD(dma_buf->pa));
+ desc_on_ring->params.generic.addr_low =
+ CPU_TO_LE32(ICE_LO_DWORD(dma_buf->pa));
+ }
+
+ /* Debug desc and buffer */
+ ice_debug(hw, ICE_DBG_AQ_MSG,
+ "ATQ: Control Send queue desc and buffer:\n");
+
+ ice_debug_cq(hw, ICE_DBG_AQ_CMD, (void *)desc_on_ring, buf, buf_size);
+
+ (cq->sq.next_to_use)++;
+ if (cq->sq.next_to_use == cq->sq.count)
+ cq->sq.next_to_use = 0;
+#if 0
+ /* FIXME - handle this case? */
+ if (!details->postpone)
+#endif
+ wr32(hw, cq->sq.tail, cq->sq.next_to_use);
+
+#if 0
+ /* if command details are not defined or async flag is not set,
+ * we need to wait for desc write back
+ */
+ if (!details->async && !details->postpone) {
+ /* FIXME - handle this case? */
+ }
+#endif
+ do {
+ if (ice_sq_done(hw, cq))
+ break;
+
+ ice_msec_delay(1, false);
+ total_delay++;
+ } while (total_delay < cq->sq_cmd_timeout);
+
+ /* if ready, copy the desc back to temp */
+ if (ice_sq_done(hw, cq)) {
+ ice_memcpy(desc, desc_on_ring, sizeof(*desc),
+ ICE_DMA_TO_NONDMA);
+ if (buf) {
+ /* get returned length to copy */
+ u16 copy_size = LE16_TO_CPU(desc->datalen);
+
+ if (copy_size > buf_size) {
+ ice_debug(hw, ICE_DBG_AQ_MSG,
+				  "Return len %d > buf len %d\n",
+ copy_size, buf_size);
+ status = ICE_ERR_AQ_ERROR;
+ } else {
+ ice_memcpy(buf, dma_buf->va, copy_size,
+ ICE_DMA_TO_NONDMA);
+ }
+ }
+ retval = LE16_TO_CPU(desc->retval);
+ if (retval) {
+ ice_debug(hw, ICE_DBG_AQ_MSG,
+ "Control Send Queue command completed with error 0x%x\n",
+ retval);
+
+ /* strip off FW internal code */
+ retval &= 0xff;
+ }
+ cmd_completed = true;
+ if (!status && retval != ICE_AQ_RC_OK)
+ status = ICE_ERR_AQ_ERROR;
+ cq->sq_last_status = (enum ice_aq_err)retval;
+ }
+
+ ice_debug(hw, ICE_DBG_AQ_MSG,
+ "ATQ: desc and buffer writeback:\n");
+
+ ice_debug_cq(hw, ICE_DBG_AQ_CMD, (void *)desc, buf, buf_size);
+
+ /* save writeback AQ if requested */
+ if (details->wb_desc)
+ ice_memcpy(details->wb_desc, desc_on_ring,
+ sizeof(*details->wb_desc), ICE_DMA_TO_NONDMA);
+
+ /* update the error if time out occurred */
+ if (!cmd_completed) {
+#if 0
+ (!details->async && !details->postpone)) {
+#endif
+ ice_debug(hw, ICE_DBG_AQ_MSG,
+ "Control Send Queue Writeback timeout.\n");
+ status = ICE_ERR_AQ_TIMEOUT;
+ }
+
+sq_send_command_error:
+ ice_release_lock(&cq->sq_lock);
+ return status;
+}
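+/* Typical caller flow, as a minimal sketch (illustrative only; 'opcode'
+ * stands in for any AQ command opcode):
+ *
+ *	struct ice_aq_desc desc;
+ *	enum ice_status status;
+ *
+ *	ice_fill_dflt_direct_cmd_desc(&desc, opcode);
+ *	status = ice_sq_send_cmd(hw, &hw->adminq, &desc, NULL, 0, NULL);
+ *
+ * On ICE_SUCCESS the descriptor writeback, including desc.retval, has been
+ * copied back into 'desc'. For an indirect command, pass a buffer and its
+ * size; the routine stages it in the DMA buffer of the descriptor slot.
+ */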
+
+/**
+ * ice_fill_dflt_direct_cmd_desc - AQ descriptor helper function
+ * @desc: pointer to the temp descriptor (non DMA mem)
+ * @opcode: the opcode can be used to decide which flags to turn off or on
+ *
+ * Fill the desc with default values
+ */
+void ice_fill_dflt_direct_cmd_desc(struct ice_aq_desc *desc, u16 opcode)
+{
+ /* zero out the desc */
+ ice_memset(desc, 0, sizeof(*desc), ICE_NONDMA_MEM);
+ desc->opcode = CPU_TO_LE16(opcode);
+ desc->flags = CPU_TO_LE16(ICE_AQ_FLAG_SI);
+}
+
+/**
+ * ice_clean_rq_elem
+ * @hw: pointer to the hw struct
+ * @cq: pointer to the specific Control queue
+ * @e: event info from the receive descriptor, includes any buffers
+ * @pending: number of events that could be left to process
+ *
+ * This function cleans one Admin Receive Queue element and returns
+ * the contents through e. It can also return how many events are
+ * left to process through 'pending'.
+ */
+enum ice_status
+ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+ struct ice_rq_event_info *e, u16 *pending)
+{
+ u16 ntc = cq->rq.next_to_clean;
+ enum ice_status ret_code = ICE_SUCCESS;
+ struct ice_aq_desc *desc;
+ struct ice_dma_mem *bi;
+ u16 desc_idx;
+ u16 datalen;
+ u16 flags;
+ u16 ntu;
+
+ /* pre-clean the event info */
+ ice_memset(&e->desc, 0, sizeof(e->desc), ICE_NONDMA_MEM);
+
+ /* take the lock before we start messing with the ring */
+ ice_acquire_lock(&cq->rq_lock);
+
+ if (!cq->rq.count) {
+ ice_debug(hw, ICE_DBG_AQ_MSG,
+ "Control Receive queue not initialized.\n");
+ ret_code = ICE_ERR_AQ_EMPTY;
+ goto clean_rq_elem_err;
+ }
+
+ /* set next_to_use to head */
+ ntu = (u16)(rd32(hw, cq->rq.head) & cq->rq.head_mask);
+
+ if (ntu == ntc) {
+ /* nothing to do - shouldn't need to update ring's values */
+ ret_code = ICE_ERR_AQ_NO_WORK;
+ goto clean_rq_elem_out;
+ }
+
+ /* now clean the next descriptor */
+ desc = ICE_CTL_Q_DESC(cq->rq, ntc);
+ desc_idx = ntc;
+
+ cq->rq_last_status = (enum ice_aq_err)LE16_TO_CPU(desc->retval);
+ flags = LE16_TO_CPU(desc->flags);
+ if (flags & ICE_AQ_FLAG_ERR) {
+ ret_code = ICE_ERR_AQ_ERROR;
+ ice_debug(hw, ICE_DBG_AQ_MSG,
+ "Control Receive Queue Event received with error 0x%x\n",
+ cq->rq_last_status);
+ }
+ ice_memcpy(&e->desc, desc, sizeof(e->desc), ICE_DMA_TO_NONDMA);
+ datalen = LE16_TO_CPU(desc->datalen);
+ e->msg_len = min(datalen, e->buf_len);
+ if (e->msg_buf && e->msg_len)
+ ice_memcpy(e->msg_buf, cq->rq.r.rq_bi[desc_idx].va,
+ e->msg_len, ICE_DMA_TO_NONDMA);
+
+ ice_debug(hw, ICE_DBG_AQ_MSG, "ARQ: desc and buffer:\n");
+
+ ice_debug_cq(hw, ICE_DBG_AQ_CMD, (void *)desc, e->msg_buf,
+ cq->rq_buf_size);
+
+ /* Restore the original datalen and buffer address in the desc,
+ * FW updates datalen to indicate the event message size
+ */
+ bi = &cq->rq.r.rq_bi[ntc];
+ ice_memset(desc, 0, sizeof(*desc), ICE_DMA_MEM);
+
+ desc->flags = CPU_TO_LE16(ICE_AQ_FLAG_BUF);
+ if (cq->rq_buf_size > ICE_AQ_LG_BUF)
+ desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_LB);
+ desc->datalen = CPU_TO_LE16(bi->size);
+ desc->params.generic.addr_high = CPU_TO_LE32(ICE_HI_DWORD(bi->pa));
+ desc->params.generic.addr_low = CPU_TO_LE32(ICE_LO_DWORD(bi->pa));
+
+ /* set tail = the last cleaned desc index. */
+ wr32(hw, cq->rq.tail, ntc);
+ /* ntc is updated to tail + 1 */
+ ntc++;
+ if (ntc == cq->num_rq_entries)
+ ntc = 0;
+ cq->rq.next_to_clean = ntc;
+ cq->rq.next_to_use = ntu;
+
+#if 0
+ ice_nvmupd_check_wait_event(hw, LE16_TO_CPU(e->desc.opcode));
+#endif
+clean_rq_elem_out:
+ /* Set pending if needed, unlock and return */
+ if (pending) {
+ /* re-read HW head to calculate actual pending messages */
+ ntu = (u16)(rd32(hw, cq->rq.head) & cq->rq.head_mask);
+ *pending = (u16)((ntc > ntu ? cq->rq.count : 0) + (ntu - ntc));
+ }
+clean_rq_elem_err:
+ ice_release_lock(&cq->rq_lock);
+
+ return ret_code;
+}
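+/* Minimal polling sketch (illustrative only; 'buf' is a caller-allocated
+ * buffer of at least cq->rq_buf_size bytes):
+ *
+ *	struct ice_rq_event_info event;
+ *	u16 pending = 0;
+ *
+ *	event.buf_len = cq->rq_buf_size;
+ *	event.msg_buf = buf;
+ *	do {
+ *		if (ice_clean_rq_elem(hw, cq, &event, &pending))
+ *			break;
+ *	} while (pending);
+ *
+ * ICE_ERR_AQ_NO_WORK is returned once the ring has been drained.
+ */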
diff --git a/drivers/net/ice/base/ice_controlq.h b/drivers/net/ice/base/ice_controlq.h
new file mode 100644
index 0000000..db2db93
--- /dev/null
+++ b/drivers/net/ice/base/ice_controlq.h
@@ -0,0 +1,97 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_CONTROLQ_H_
+#define _ICE_CONTROLQ_H_
+
+#include "ice_adminq_cmd.h"
+
+/* Maximum buffer lengths for all control queue types */
+#define ICE_AQ_MAX_BUF_LEN 4096
+#define ICE_MBXQ_MAX_BUF_LEN 4096
+
+#define ICE_CTL_Q_DESC(R, i) \
+ (&(((struct ice_aq_desc *)((R).desc_buf.va))[i]))
+
+#define ICE_CTL_Q_DESC_UNUSED(R) \
+ (u16)((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->count) + \
+ (R)->next_to_clean - (R)->next_to_use - 1)
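+/* Example: with count = 64, next_to_clean = 5 and next_to_use = 9,
+ * ICE_CTL_Q_DESC_UNUSED yields 64 + 5 - 9 - 1 = 59; four descriptors are
+ * in flight and one slot is always kept empty so a full ring can be told
+ * apart from an empty one.
+ */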
+
+/* Defines that help manage the driver vs FW API checks.
+ * Take a look at ice_aq_ver_check in ice_controlq.c for actual usage.
+ */
+#define EXP_FW_API_VER_BRANCH 0x00
+#define EXP_FW_API_VER_MAJOR 0x01
+#define EXP_FW_API_VER_MINOR 0x03
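+/* ice_aq_ver_check tolerates a minor version skew of up to two in either
+ * direction: with EXP_FW_API_VER_MINOR = 0x03, reported minors 0x01-0x05
+ * load silently, anything outside that range logs an NVM update hint, and
+ * a newer major version refuses to load altogether.
+ */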
+
+/* Different control queue types: These are mainly for SW consumption. */
+enum ice_ctl_q {
+ ICE_CTL_Q_UNKNOWN = 0,
+ ICE_CTL_Q_ADMIN,
+ ICE_CTL_Q_MAILBOX,
+};
+
+/* Control Queue default settings */
+#define ICE_CTL_Q_SQ_CMD_TIMEOUT 250 /* msecs */
+
+struct ice_ctl_q_ring {
+ void *dma_head; /* Virtual address to dma head */
+ struct ice_dma_mem desc_buf; /* descriptor ring memory */
+ void *cmd_buf; /* command buffer memory */
+
+ union {
+ struct ice_dma_mem *sq_bi;
+ struct ice_dma_mem *rq_bi;
+ } r;
+
+ u16 count; /* Number of descriptors */
+
+ /* used for interrupt processing */
+ u16 next_to_use;
+ u16 next_to_clean;
+
+ /* used for queue tracking */
+ u32 head;
+ u32 tail;
+ u32 len;
+ u32 bah;
+ u32 bal;
+ u32 len_mask;
+ u32 len_ena_mask;
+ u32 head_mask;
+};
+
+/* sq transaction details */
+struct ice_sq_cd {
+ struct ice_aq_desc *wb_desc;
+};
+
+#define ICE_CTL_Q_DETAILS(R, i) (&(((struct ice_sq_cd *)((R).cmd_buf))[i]))
+
+/* rq event information */
+struct ice_rq_event_info {
+ struct ice_aq_desc desc;
+ u16 msg_len;
+ u16 buf_len;
+ u8 *msg_buf;
+};
+
+/* Control Queue information */
+struct ice_ctl_q_info {
+ enum ice_ctl_q qtype;
+ struct ice_ctl_q_ring rq; /* receive queue */
+ struct ice_ctl_q_ring sq; /* send queue */
+ u32 sq_cmd_timeout; /* send queue cmd write back timeout */
+ u16 num_rq_entries; /* receive queue depth */
+ u16 num_sq_entries; /* send queue depth */
+ u16 rq_buf_size; /* receive queue buffer size */
+ u16 sq_buf_size; /* send queue buffer size */
+ struct ice_lock sq_lock; /* Send queue lock */
+ struct ice_lock rq_lock; /* Receive queue lock */
+ enum ice_aq_err sq_last_status; /* last status on send queue */
+ enum ice_aq_err rq_last_status; /* last status on receive queue */
+};
+
+#endif /* _ICE_CONTROLQ_H_ */
diff --git a/drivers/net/ice/base/ice_devids.h b/drivers/net/ice/base/ice_devids.h
new file mode 100644
index 0000000..87f17ab
--- /dev/null
+++ b/drivers/net/ice/base/ice_devids.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_DEVIDS_H_
+#define _ICE_DEVIDS_H_
+
+/* Device IDs */
+/* Intel(R) Ethernet Controller E810-C for backplane */
+#define ICE_DEV_ID_E810C_BACKPLANE 0x1591
+/* Intel(R) Ethernet Controller E810-C for QSFP */
+#define ICE_DEV_ID_E810C_QSFP 0x1592
+/* Intel(R) Ethernet Controller E810-C for SFP */
+#define ICE_DEV_ID_E810C_SFP 0x1593
+
+#endif /* _ICE_DEVIDS_H_ */
diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
new file mode 100644
index 0000000..934cb26
--- /dev/null
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -0,0 +1,5 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_common.h"
diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h
new file mode 100644
index 0000000..f52f7a4
--- /dev/null
+++ b/drivers/net/ice/base/ice_flex_pipe.h
@@ -0,0 +1,83 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_FLEX_PIPE_H_
+#define _ICE_FLEX_PIPE_H_
+
+#include "ice_type.h"
+
+/* Package format version */
+#define ICE_PKG_FMT_VER_MAJ 1
+#define ICE_PKG_FMT_VER_MNR 0
+#define ICE_PKG_FMT_VER_UPD 0
+#define ICE_PKG_FMT_VER_DFT 0
+
+enum ice_status
+ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count);
+
+/* package buffer building routines */
+
+struct ice_buf_build *ice_pkg_buf_alloc(struct ice_hw *hw);
+enum ice_status
+ice_pkg_buf_reserve_section(struct ice_buf_build *bld, u16 count);
+void *ice_pkg_buf_alloc_section(struct ice_buf_build *bld, u32 type, u16 size);
+struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld);
+void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld);
+
+/* XLT1/PType group functions */
+enum ice_status ice_ptg_update_xlt1(struct ice_hw *hw, enum ice_block blk);
+enum ice_status
+ice_ptg_find_ptype(struct ice_hw *hw, enum ice_block blk, u16 ptype, u8 *ptg);
+u8 ice_ptg_alloc(struct ice_hw *hw, enum ice_block blk);
+void ice_ptg_free(struct ice_hw *hw, enum ice_block blk, u8 ptg);
+enum ice_status
+ice_ptg_add_mv_ptype(struct ice_hw *hw, enum ice_block blk, u16 ptype, u8 ptg);
+
+/* XLT2/Vsi group functions */
+enum ice_status ice_vsig_update_xlt2(struct ice_hw *hw, enum ice_block blk);
+enum ice_status
+ice_vsig_find_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 *vsig);
+enum ice_status
+ice_find_dup_props_vsig(struct ice_hw *hw, enum ice_block blk,
+ struct LIST_HEAD_TYPE *chs, u16 *vsig);
+
+enum ice_status
+ice_vsig_add_mv_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig);
+enum ice_status ice_vsig_free(struct ice_hw *hw, enum ice_block blk, u16 vsig);
+enum ice_status
+ice_vsig_remove_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig);
+enum ice_status
+ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id, u8 ptypes[],
+ struct ice_fv_word *es);
+struct ice_prof_map *
+ice_search_prof_id(struct ice_hw *hw, enum ice_block blk, u64 id);
+enum ice_status
+ice_add_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl);
+enum ice_status
+ice_rem_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl);
+struct ice_prof_map *
+ice_set_prof_context(struct ice_hw *hw, enum ice_block blk, u64 id, u64 cntxt);
+struct ice_prof_map *
+ice_get_prof_context(struct ice_hw *hw, enum ice_block blk, u64 id, u64 *cntxt);
+enum ice_status
+ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len);
+enum ice_status ice_init_hw_tbls(struct ice_hw *hw);
+void ice_free_seg(struct ice_hw *hw);
+void ice_free_hw_tbls(struct ice_hw *hw);
+enum ice_status
+ice_add_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi[], u8 count,
+ u64 id);
+enum ice_status
+ice_rem_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi[], u8 count,
+ u64 id);
+enum ice_status
+ice_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 id);
+
+enum ice_status
+ice_set_key(u8 *key, u16 size, u8 *val, u8 *upd, u8 *dc, u8 *nm, u16 off,
+ u16 len);
+#endif /* _ICE_FLEX_PIPE_H_ */
diff --git a/drivers/net/ice/base/ice_flex_type.h b/drivers/net/ice/base/ice_flex_type.h
new file mode 100644
index 0000000..84a38cb
--- /dev/null
+++ b/drivers/net/ice/base/ice_flex_type.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_FLEX_TYPE_H_
+#define _ICE_FLEX_TYPE_H_
+
+/* Extraction Sequence (Field Vector) Table */
+struct ice_fv_word {
+ u8 prot_id;
+ u8 off; /* Offset within the protocol header */
+};
+
+#define ICE_MAX_FV_WORDS 48
+struct ice_fv {
+ struct ice_fv_word ew[ICE_MAX_FV_WORDS];
+};
+
+#endif /* _ICE_FLEX_TYPE_H_ */
diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
new file mode 100644
index 0000000..49b22bc
--- /dev/null
+++ b/drivers/net/ice/base/ice_flow.c
@@ -0,0 +1,4 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
diff --git a/drivers/net/ice/base/ice_flow.h b/drivers/net/ice/base/ice_flow.h
new file mode 100644
index 0000000..228a2c0
--- /dev/null
+++ b/drivers/net/ice/base/ice_flow.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_FLOW_H_
+#define _ICE_FLOW_H_
+
+#endif /* _ICE_FLOW_H_ */
diff --git a/drivers/net/ice/base/ice_hw_autogen.h b/drivers/net/ice/base/ice_hw_autogen.h
new file mode 100644
index 0000000..8c79891
--- /dev/null
+++ b/drivers/net/ice/base/ice_hw_autogen.h
@@ -0,0 +1,9815 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+/* Machine-generated file; do not edit */
+#ifndef _ICE_HW_AUTOGEN_H_
+#define _ICE_HW_AUTOGEN_H_
+
+#define GL_RDPU_CNTRL 0x00052054 /* Reset Source: CORER */
+#define GL_RDPU_CNTRL_RX_PAD_EN_S 0
+#define GL_RDPU_CNTRL_RX_PAD_EN_M BIT(0)
+#define GL_RDPU_CNTRL_UDP_ZERO_EN_S 1
+#define GL_RDPU_CNTRL_UDP_ZERO_EN_M BIT(1)
+#define GL_RDPU_CNTRL_BLNC_EN_S 2
+#define GL_RDPU_CNTRL_BLNC_EN_M BIT(2)
+#define GL_RDPU_CNTRL_RECIPE_BYPASS_S 3
+#define GL_RDPU_CNTRL_RECIPE_BYPASS_M BIT(3)
+#define GL_RDPU_CNTRL_RLAN_ACK_REQ_PM_TH_S 4
+#define GL_RDPU_CNTRL_RLAN_ACK_REQ_PM_TH_M MAKEMASK(0x3F, 4)
+#define GL_RDPU_CNTRL_PE_ACK_REQ_PM_TH_S 10
+#define GL_RDPU_CNTRL_PE_ACK_REQ_PM_TH_M MAKEMASK(0x3F, 10)
+#define GL_RDPU_CNTRL_REQ_WB_PM_TH_S 16
+#define GL_RDPU_CNTRL_REQ_WB_PM_TH_M MAKEMASK(0x1F, 16)
+#define GL_RDPU_CNTRL_ECO_S 21
+#define GL_RDPU_CNTRL_ECO_M MAKEMASK(0x7FF, 21)
+#define MSIX_PBA(_i) (0x00008000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: FLR */
+#define MSIX_PBA_MAX_INDEX 2
+#define MSIX_PBA_PENBIT_S 0
+#define MSIX_PBA_PENBIT_M MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TADD(_i) (0x00000000 + ((_i) * 16)) /* _i=0...64 */ /* Reset Source: FLR */
+#define MSIX_TADD_MAX_INDEX 64
+#define MSIX_TADD_MSIXTADD10_S 0
+#define MSIX_TADD_MSIXTADD10_M MAKEMASK(0x3, 0)
+#define MSIX_TADD_MSIXTADD_S 2
+#define MSIX_TADD_MSIXTADD_M MAKEMASK(0x3FFFFFFF, 2)
+#define MSIX_TUADD(_i) (0x00000004 + ((_i) * 16)) /* _i=0...64 */ /* Reset Source: FLR */
+#define MSIX_TUADD_MAX_INDEX 64
+#define MSIX_TUADD_MSIXTUADD_S 0
+#define MSIX_TUADD_MSIXTUADD_M MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TVCTRL(_i) (0x0000000C + ((_i) * 16)) /* _i=0...64 */ /* Reset Source: FLR */
+#define MSIX_TVCTRL_MAX_INDEX 64
+#define MSIX_TVCTRL_MASK_S 0
+#define MSIX_TVCTRL_MASK_M BIT(0)
+#define PF0_FW_HLP_ARQBAH_PAGE 0x02D00180 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQBAH_PAGE_ARQBAH_S 0
+#define PF0_FW_HLP_ARQBAH_PAGE_ARQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_HLP_ARQBAL_PAGE 0x02D00080 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQBAL_PAGE_ARQBAL_LSB_S 0
+#define PF0_FW_HLP_ARQBAL_PAGE_ARQBAL_LSB_M MAKEMASK(0x3F, 0)
+#define PF0_FW_HLP_ARQBAL_PAGE_ARQBAL_S 6
+#define PF0_FW_HLP_ARQBAL_PAGE_ARQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_HLP_ARQH_PAGE 0x02D00380 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQH_PAGE_ARQH_S 0
+#define PF0_FW_HLP_ARQH_PAGE_ARQH_M MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ARQLEN_PAGE 0x02D00280 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQLEN_S 0
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQLEN_M MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQVFE_S 28
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQVFE_M BIT(28)
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQOVFL_S 29
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQOVFL_M BIT(29)
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQCRIT_S 30
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQCRIT_M BIT(30)
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQENABLE_S 31
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQENABLE_M BIT(31)
+#define PF0_FW_HLP_ARQT_PAGE 0x02D00480 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQT_PAGE_ARQT_S 0
+#define PF0_FW_HLP_ARQT_PAGE_ARQT_M MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQBAH_PAGE 0x02D00100 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQBAH_PAGE_ATQBAH_S 0
+#define PF0_FW_HLP_ATQBAH_PAGE_ATQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_HLP_ATQBAL_PAGE 0x02D00000 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQBAL_PAGE_ATQBAL_LSB_S 0
+#define PF0_FW_HLP_ATQBAL_PAGE_ATQBAL_LSB_M MAKEMASK(0x3F, 0)
+#define PF0_FW_HLP_ATQBAL_PAGE_ATQBAL_S 6
+#define PF0_FW_HLP_ATQBAL_PAGE_ATQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_HLP_ATQH_PAGE 0x02D00300 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQH_PAGE_ATQH_S 0
+#define PF0_FW_HLP_ATQH_PAGE_ATQH_M MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQLEN_PAGE 0x02D00200 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQLEN_S 0
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQLEN_M MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQVFE_S 28
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQVFE_M BIT(28)
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQOVFL_S 29
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQOVFL_M BIT(29)
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQCRIT_S 30
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQCRIT_M BIT(30)
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQENABLE_S 31
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQENABLE_M BIT(31)
+#define PF0_FW_HLP_ATQT_PAGE 0x02D00400 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQT_PAGE_ATQT_S 0
+#define PF0_FW_HLP_ATQT_PAGE_ATQT_M MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQBAH_PAGE 0x02D40180 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQBAH_PAGE_ARQBAH_S 0
+#define PF0_FW_PSM_ARQBAH_PAGE_ARQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_PSM_ARQBAL_PAGE 0x02D40080 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQBAL_PAGE_ARQBAL_LSB_S 0
+#define PF0_FW_PSM_ARQBAL_PAGE_ARQBAL_LSB_M MAKEMASK(0x3F, 0)
+#define PF0_FW_PSM_ARQBAL_PAGE_ARQBAL_S 6
+#define PF0_FW_PSM_ARQBAL_PAGE_ARQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_PSM_ARQH_PAGE 0x02D40380 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQH_PAGE_ARQH_S 0
+#define PF0_FW_PSM_ARQH_PAGE_ARQH_M MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQLEN_PAGE 0x02D40280 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQLEN_S 0
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQLEN_M MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQVFE_S 28
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQVFE_M BIT(28)
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQOVFL_S 29
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQOVFL_M BIT(29)
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQCRIT_S 30
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQCRIT_M BIT(30)
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQENABLE_S 31
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQENABLE_M BIT(31)
+#define PF0_FW_PSM_ARQT_PAGE 0x02D40480 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQT_PAGE_ARQT_S 0
+#define PF0_FW_PSM_ARQT_PAGE_ARQT_M MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQBAH_PAGE 0x02D40100 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQBAH_PAGE_ATQBAH_S 0
+#define PF0_FW_PSM_ATQBAH_PAGE_ATQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_PSM_ATQBAL_PAGE 0x02D40000 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQBAL_PAGE_ATQBAL_LSB_S 0
+#define PF0_FW_PSM_ATQBAL_PAGE_ATQBAL_LSB_M MAKEMASK(0x3F, 0)
+#define PF0_FW_PSM_ATQBAL_PAGE_ATQBAL_S 6
+#define PF0_FW_PSM_ATQBAL_PAGE_ATQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_PSM_ATQH_PAGE 0x02D40300 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQH_PAGE_ATQH_S 0
+#define PF0_FW_PSM_ATQH_PAGE_ATQH_M MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQLEN_PAGE 0x02D40200 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQLEN_S 0
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQLEN_M MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQVFE_S 28
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQVFE_M BIT(28)
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQOVFL_S 29
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQOVFL_M BIT(29)
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQCRIT_S 30
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQCRIT_M BIT(30)
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQENABLE_S 31
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQENABLE_M BIT(31)
+#define PF0_FW_PSM_ATQT_PAGE 0x02D40400 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQT_PAGE_ATQT_S 0
+#define PF0_FW_PSM_ATQT_PAGE_ATQT_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQBAH_PAGE 0x02D80190 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQBAH_PAGE_ARQBAH_S 0
+#define PF0_MBX_CPM_ARQBAH_PAGE_ARQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_CPM_ARQBAL_PAGE 0x02D80090 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQBAL_PAGE_ARQBAL_LSB_S 0
+#define PF0_MBX_CPM_ARQBAL_PAGE_ARQBAL_LSB_M MAKEMASK(0x3F, 0)
+#define PF0_MBX_CPM_ARQBAL_PAGE_ARQBAL_S 6
+#define PF0_MBX_CPM_ARQBAL_PAGE_ARQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_CPM_ARQH_PAGE 0x02D80390 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQH_PAGE_ARQH_S 0
+#define PF0_MBX_CPM_ARQH_PAGE_ARQH_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQLEN_PAGE 0x02D80290 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQLEN_S 0
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQLEN_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQVFE_S 28
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQVFE_M BIT(28)
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQOVFL_S 29
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQOVFL_M BIT(29)
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQCRIT_S 30
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQCRIT_M BIT(30)
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQENABLE_S 31
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQENABLE_M BIT(31)
+#define PF0_MBX_CPM_ARQT_PAGE 0x02D80490 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQT_PAGE_ARQT_S 0
+#define PF0_MBX_CPM_ARQT_PAGE_ARQT_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQBAH_PAGE 0x02D80110 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQBAH_PAGE_ATQBAH_S 0
+#define PF0_MBX_CPM_ATQBAH_PAGE_ATQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_CPM_ATQBAL_PAGE 0x02D80010 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQBAL_PAGE_ATQBAL_S 6
+#define PF0_MBX_CPM_ATQBAL_PAGE_ATQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_CPM_ATQH_PAGE 0x02D80310 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQH_PAGE_ATQH_S 0
+#define PF0_MBX_CPM_ATQH_PAGE_ATQH_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQLEN_PAGE 0x02D80210 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQLEN_S 0
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQLEN_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQVFE_S 28
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQVFE_M BIT(28)
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQOVFL_S 29
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQOVFL_M BIT(29)
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQCRIT_S 30
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQCRIT_M BIT(30)
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQENABLE_S 31
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQENABLE_M BIT(31)
+#define PF0_MBX_CPM_ATQT_PAGE 0x02D80410 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQT_PAGE_ATQT_S 0
+#define PF0_MBX_CPM_ATQT_PAGE_ATQT_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQBAH_PAGE 0x02D00190 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQBAH_PAGE_ARQBAH_S 0
+#define PF0_MBX_HLP_ARQBAH_PAGE_ARQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_HLP_ARQBAL_PAGE 0x02D00090 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQBAL_PAGE_ARQBAL_LSB_S 0
+#define PF0_MBX_HLP_ARQBAL_PAGE_ARQBAL_LSB_M MAKEMASK(0x3F, 0)
+#define PF0_MBX_HLP_ARQBAL_PAGE_ARQBAL_S 6
+#define PF0_MBX_HLP_ARQBAL_PAGE_ARQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_HLP_ARQH_PAGE 0x02D00390 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQH_PAGE_ARQH_S 0
+#define PF0_MBX_HLP_ARQH_PAGE_ARQH_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQLEN_PAGE 0x02D00290 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQLEN_S 0
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQLEN_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQVFE_S 28
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQVFE_M BIT(28)
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQOVFL_S 29
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQOVFL_M BIT(29)
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQCRIT_S 30
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQCRIT_M BIT(30)
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQENABLE_S 31
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQENABLE_M BIT(31)
+#define PF0_MBX_HLP_ARQT_PAGE 0x02D00490 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQT_PAGE_ARQT_S 0
+#define PF0_MBX_HLP_ARQT_PAGE_ARQT_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQBAH_PAGE 0x02D00110 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQBAH_PAGE_ATQBAH_S 0
+#define PF0_MBX_HLP_ATQBAH_PAGE_ATQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_HLP_ATQBAL_PAGE 0x02D00010 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQBAL_PAGE_ATQBAL_S 6
+#define PF0_MBX_HLP_ATQBAL_PAGE_ATQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_HLP_ATQH_PAGE 0x02D00310 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQH_PAGE_ATQH_S 0
+#define PF0_MBX_HLP_ATQH_PAGE_ATQH_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQLEN_PAGE 0x02D00210 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQLEN_S 0
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQLEN_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQVFE_S 28
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQVFE_M BIT(28)
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQOVFL_S 29
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQOVFL_M BIT(29)
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQCRIT_S 30
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQCRIT_M BIT(30)
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQENABLE_S 31
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQENABLE_M BIT(31)
+#define PF0_MBX_HLP_ATQT_PAGE 0x02D00410 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQT_PAGE_ATQT_S 0
+#define PF0_MBX_HLP_ATQT_PAGE_ATQT_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQBAH_PAGE 0x02D40190 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQBAH_PAGE_ARQBAH_S 0
+#define PF0_MBX_PSM_ARQBAH_PAGE_ARQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_PSM_ARQBAL_PAGE 0x02D40090 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQBAL_PAGE_ARQBAL_LSB_S 0
+#define PF0_MBX_PSM_ARQBAL_PAGE_ARQBAL_LSB_M MAKEMASK(0x3F, 0)
+#define PF0_MBX_PSM_ARQBAL_PAGE_ARQBAL_S 6
+#define PF0_MBX_PSM_ARQBAL_PAGE_ARQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_PSM_ARQH_PAGE 0x02D40390 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQH_PAGE_ARQH_S 0
+#define PF0_MBX_PSM_ARQH_PAGE_ARQH_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQLEN_PAGE 0x02D40290 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQLEN_S 0
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQLEN_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQVFE_S 28
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQVFE_M BIT(28)
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQOVFL_S 29
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQOVFL_M BIT(29)
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQCRIT_S 30
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQCRIT_M BIT(30)
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQENABLE_S 31
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQENABLE_M BIT(31)
+#define PF0_MBX_PSM_ARQT_PAGE 0x02D40490 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQT_PAGE_ARQT_S 0
+#define PF0_MBX_PSM_ARQT_PAGE_ARQT_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQBAH_PAGE 0x02D40110 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQBAH_PAGE_ATQBAH_S 0
+#define PF0_MBX_PSM_ATQBAH_PAGE_ATQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_PSM_ATQBAL_PAGE 0x02D40010 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQBAL_PAGE_ATQBAL_S 6
+#define PF0_MBX_PSM_ATQBAL_PAGE_ATQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_PSM_ATQH_PAGE 0x02D40310 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQH_PAGE_ATQH_S 0
+#define PF0_MBX_PSM_ATQH_PAGE_ATQH_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQLEN_PAGE 0x02D40210 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQLEN_S 0
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQLEN_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQVFE_S 28
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQVFE_M BIT(28)
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQOVFL_S 29
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQOVFL_M BIT(29)
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQCRIT_S 30
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQCRIT_M BIT(30)
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQENABLE_S 31
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQENABLE_M BIT(31)
+#define PF0_MBX_PSM_ATQT_PAGE 0x02D40410 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQT_PAGE_ATQT_S 0
+#define PF0_MBX_PSM_ATQT_PAGE_ATQT_M MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQBAH_PAGE 0x02D801A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQBAH_PAGE_ARQBAH_S 0
+#define PF0_SB_CPM_ARQBAH_PAGE_ARQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_CPM_ARQBAL_PAGE 0x02D800A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQBAL_PAGE_ARQBAL_LSB_S 0
+#define PF0_SB_CPM_ARQBAL_PAGE_ARQBAL_LSB_M MAKEMASK(0x3F, 0)
+#define PF0_SB_CPM_ARQBAL_PAGE_ARQBAL_S 6
+#define PF0_SB_CPM_ARQBAL_PAGE_ARQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_CPM_ARQH_PAGE 0x02D803A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQH_PAGE_ARQH_S 0
+#define PF0_SB_CPM_ARQH_PAGE_ARQH_M MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQLEN_PAGE 0x02D802A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQLEN_S 0
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQLEN_M MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQVFE_S 28
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQVFE_M BIT(28)
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQOVFL_S 29
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQOVFL_M BIT(29)
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQCRIT_S 30
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQCRIT_M BIT(30)
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQENABLE_S 31
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQENABLE_M BIT(31)
+#define PF0_SB_CPM_ARQT_PAGE 0x02D804A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQT_PAGE_ARQT_S 0
+#define PF0_SB_CPM_ARQT_PAGE_ARQT_M MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQBAH_PAGE 0x02D80120 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQBAH_PAGE_ATQBAH_S 0
+#define PF0_SB_CPM_ATQBAH_PAGE_ATQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_CPM_ATQBAL_PAGE 0x02D80020 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQBAL_PAGE_ATQBAL_S 6
+#define PF0_SB_CPM_ATQBAL_PAGE_ATQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_CPM_ATQH_PAGE 0x02D80320 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQH_PAGE_ATQH_S 0
+#define PF0_SB_CPM_ATQH_PAGE_ATQH_M MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQLEN_PAGE 0x02D80220 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQLEN_S 0
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQLEN_M MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQVFE_S 28
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQVFE_M BIT(28)
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQOVFL_S 29
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQOVFL_M BIT(29)
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQCRIT_S 30
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQCRIT_M BIT(30)
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQENABLE_S 31
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQENABLE_M BIT(31)
+#define PF0_SB_CPM_ATQT_PAGE 0x02D80420 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQT_PAGE_ATQT_S 0
+#define PF0_SB_CPM_ATQT_PAGE_ATQT_M MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQBAH_PAGE 0x02D001A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQBAH_PAGE_ARQBAH_S 0
+#define PF0_SB_HLP_ARQBAH_PAGE_ARQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_HLP_ARQBAL_PAGE 0x02D000A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQBAL_PAGE_ARQBAL_LSB_S 0
+#define PF0_SB_HLP_ARQBAL_PAGE_ARQBAL_LSB_M MAKEMASK(0x3F, 0)
+#define PF0_SB_HLP_ARQBAL_PAGE_ARQBAL_S 6
+#define PF0_SB_HLP_ARQBAL_PAGE_ARQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_HLP_ARQH_PAGE 0x02D003A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQH_PAGE_ARQH_S 0
+#define PF0_SB_HLP_ARQH_PAGE_ARQH_M MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQLEN_PAGE 0x02D002A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQLEN_S 0
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQLEN_M MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQVFE_S 28
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQVFE_M BIT(28)
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQOVFL_S 29
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQOVFL_M BIT(29)
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQCRIT_S 30
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQCRIT_M BIT(30)
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQENABLE_S 31
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQENABLE_M BIT(31)
+#define PF0_SB_HLP_ARQT_PAGE 0x02D004A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQT_PAGE_ARQT_S 0
+#define PF0_SB_HLP_ARQT_PAGE_ARQT_M MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQBAH_PAGE 0x02D00120 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQBAH_PAGE_ATQBAH_S 0
+#define PF0_SB_HLP_ATQBAH_PAGE_ATQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_HLP_ATQBAL_PAGE 0x02D00020 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQBAL_PAGE_ATQBAL_S 6
+#define PF0_SB_HLP_ATQBAL_PAGE_ATQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_HLP_ATQH_PAGE 0x02D00320 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQH_PAGE_ATQH_S 0
+#define PF0_SB_HLP_ATQH_PAGE_ATQH_M MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQLEN_PAGE 0x02D00220 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQLEN_S 0
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQLEN_M MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQVFE_S 28
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQVFE_M BIT(28)
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQOVFL_S 29
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQOVFL_M BIT(29)
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQCRIT_S 30
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQCRIT_M BIT(30)
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQENABLE_S 31
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQENABLE_M BIT(31)
+#define PF0_SB_HLP_ATQT_PAGE 0x02D00420 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQT_PAGE_ATQT_S 0
+#define PF0_SB_HLP_ATQT_PAGE_ATQT_M MAKEMASK(0x3FF, 0)
+#define PF0INT_DYN_CTL(_i) (0x03000000 + ((_i) * 4096)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define PF0INT_DYN_CTL_MAX_INDEX 2047
+#define PF0INT_DYN_CTL_INTENA_S 0
+#define PF0INT_DYN_CTL_INTENA_M BIT(0)
+#define PF0INT_DYN_CTL_CLEARPBA_S 1
+#define PF0INT_DYN_CTL_CLEARPBA_M BIT(1)
+#define PF0INT_DYN_CTL_SWINT_TRIG_S 2
+#define PF0INT_DYN_CTL_SWINT_TRIG_M BIT(2)
+#define PF0INT_DYN_CTL_ITR_INDX_S 3
+#define PF0INT_DYN_CTL_ITR_INDX_M MAKEMASK(0x3, 3)
+#define PF0INT_DYN_CTL_INTERVAL_S 5
+#define PF0INT_DYN_CTL_INTERVAL_M MAKEMASK(0xFFF, 5)
+#define PF0INT_DYN_CTL_SW_ITR_INDX_ENA_S 24
+#define PF0INT_DYN_CTL_SW_ITR_INDX_ENA_M BIT(24)
+#define PF0INT_DYN_CTL_SW_ITR_INDX_S 25
+#define PF0INT_DYN_CTL_SW_ITR_INDX_M MAKEMASK(0x3, 25)
+#define PF0INT_DYN_CTL_WB_ON_ITR_S 30
+#define PF0INT_DYN_CTL_WB_ON_ITR_M BIT(30)
+#define PF0INT_DYN_CTL_INTENA_MSK_S 31
+#define PF0INT_DYN_CTL_INTENA_MSK_M BIT(31)
+#define PF0INT_ITR_0(_i) (0x03000004 + ((_i) * 4096)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define PF0INT_ITR_0_MAX_INDEX 2047
+#define PF0INT_ITR_0_INTERVAL_S 0
+#define PF0INT_ITR_0_INTERVAL_M MAKEMASK(0xFFF, 0)
+#define PF0INT_ITR_1(_i) (0x03000008 + ((_i) * 4096)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define PF0INT_ITR_1_MAX_INDEX 2047
+#define PF0INT_ITR_1_INTERVAL_S 0
+#define PF0INT_ITR_1_INTERVAL_M MAKEMASK(0xFFF, 0)
+#define PF0INT_ITR_2(_i) (0x0300000C + ((_i) * 4096)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define PF0INT_ITR_2_MAX_INDEX 2047
+#define PF0INT_ITR_2_INTERVAL_S 0
+#define PF0INT_ITR_2_INTERVAL_M MAKEMASK(0xFFF, 0)
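+/*
+ * Usage sketch (hand-written illustration, not generator output): each
+ * register field comes as a _S shift / _M mask pair. Assuming the
+ * rd32()/wr32() MMIO accessors the osdep layer is expected to provide,
+ * re-enabling interrupt vector 0 with ITR index 1 could look like:
+ *
+ *	u32 val = PF0INT_DYN_CTL_INTENA_M | PF0INT_DYN_CTL_CLEARPBA_M |
+ *		  ((1 << PF0INT_DYN_CTL_ITR_INDX_S) &
+ *		   PF0INT_DYN_CTL_ITR_INDX_M);
+ *	wr32(hw, PF0INT_DYN_CTL(0), val);
+ */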
+#define PF0INT_OICR_CPM_PAGE 0x02D03000 /* Reset Source: CORER */
+#define PF0INT_OICR_CPM_PAGE_INTEVENT_S 0
+#define PF0INT_OICR_CPM_PAGE_INTEVENT_M BIT(0)
+#define PF0INT_OICR_CPM_PAGE_QUEUE_S 1
+#define PF0INT_OICR_CPM_PAGE_QUEUE_M BIT(1)
+#define PF0INT_OICR_CPM_PAGE_RSV1_S 2
+#define PF0INT_OICR_CPM_PAGE_RSV1_M MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_CPM_PAGE_HH_COMP_S 10
+#define PF0INT_OICR_CPM_PAGE_HH_COMP_M BIT(10)
+#define PF0INT_OICR_CPM_PAGE_TSYN_TX_S 11
+#define PF0INT_OICR_CPM_PAGE_TSYN_TX_M BIT(11)
+#define PF0INT_OICR_CPM_PAGE_TSYN_EVNT_S 12
+#define PF0INT_OICR_CPM_PAGE_TSYN_EVNT_M BIT(12)
+#define PF0INT_OICR_CPM_PAGE_TSYN_TGT_S 13
+#define PF0INT_OICR_CPM_PAGE_TSYN_TGT_M BIT(13)
+#define PF0INT_OICR_CPM_PAGE_HLP_RDY_S 14
+#define PF0INT_OICR_CPM_PAGE_HLP_RDY_M BIT(14)
+#define PF0INT_OICR_CPM_PAGE_CPM_RDY_S 15
+#define PF0INT_OICR_CPM_PAGE_CPM_RDY_M BIT(15)
+#define PF0INT_OICR_CPM_PAGE_ECC_ERR_S 16
+#define PF0INT_OICR_CPM_PAGE_ECC_ERR_M BIT(16)
+#define PF0INT_OICR_CPM_PAGE_RSV2_S 17
+#define PF0INT_OICR_CPM_PAGE_RSV2_M MAKEMASK(0x3, 17)
+#define PF0INT_OICR_CPM_PAGE_MAL_DETECT_S 19
+#define PF0INT_OICR_CPM_PAGE_MAL_DETECT_M BIT(19)
+#define PF0INT_OICR_CPM_PAGE_GRST_S 20
+#define PF0INT_OICR_CPM_PAGE_GRST_M BIT(20)
+#define PF0INT_OICR_CPM_PAGE_PCI_EXCEPTION_S 21
+#define PF0INT_OICR_CPM_PAGE_PCI_EXCEPTION_M BIT(21)
+#define PF0INT_OICR_CPM_PAGE_GPIO_S 22
+#define PF0INT_OICR_CPM_PAGE_GPIO_M BIT(22)
+#define PF0INT_OICR_CPM_PAGE_RSV3_S 23
+#define PF0INT_OICR_CPM_PAGE_RSV3_M BIT(23)
+#define PF0INT_OICR_CPM_PAGE_STORM_DETECT_S 24
+#define PF0INT_OICR_CPM_PAGE_STORM_DETECT_M BIT(24)
+#define PF0INT_OICR_CPM_PAGE_LINK_STAT_CHANGE_S 25
+#define PF0INT_OICR_CPM_PAGE_LINK_STAT_CHANGE_M BIT(25)
+#define PF0INT_OICR_CPM_PAGE_HMC_ERR_S 26
+#define PF0INT_OICR_CPM_PAGE_HMC_ERR_M BIT(26)
+#define PF0INT_OICR_CPM_PAGE_PE_PUSH_S 27
+#define PF0INT_OICR_CPM_PAGE_PE_PUSH_M BIT(27)
+#define PF0INT_OICR_CPM_PAGE_PE_CRITERR_S 28
+#define PF0INT_OICR_CPM_PAGE_PE_CRITERR_M BIT(28)
+#define PF0INT_OICR_CPM_PAGE_VFLR_S 29
+#define PF0INT_OICR_CPM_PAGE_VFLR_M BIT(29)
+#define PF0INT_OICR_CPM_PAGE_XLR_HW_DONE_S 30
+#define PF0INT_OICR_CPM_PAGE_XLR_HW_DONE_M BIT(30)
+#define PF0INT_OICR_CPM_PAGE_SWINT_S 31
+#define PF0INT_OICR_CPM_PAGE_SWINT_M BIT(31)
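+/*
+ * Usage sketch (hand-written, not generated): OICR is a cause register;
+ * after the "other interrupt" fires, read it once and test the cause bits.
+ * Assuming rd32() and a hypothetical handler:
+ *
+ *	u32 oicr = rd32(hw, PF0INT_OICR_CPM_PAGE);
+ *	if (oicr & PF0INT_OICR_CPM_PAGE_LINK_STAT_CHANGE_M)
+ *		ice_handle_link_event(hw);	/* hypothetical */
+ */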
+#define PF0INT_OICR_ENA_CPM_PAGE 0x02D03100 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_CPM_PAGE_RSV0_S 0
+#define PF0INT_OICR_ENA_CPM_PAGE_RSV0_M BIT(0)
+#define PF0INT_OICR_ENA_CPM_PAGE_INT_ENA_S 1
+#define PF0INT_OICR_ENA_CPM_PAGE_INT_ENA_M MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_ENA_HLP_PAGE 0x02D01100 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_HLP_PAGE_RSV0_S 0
+#define PF0INT_OICR_ENA_HLP_PAGE_RSV0_M BIT(0)
+#define PF0INT_OICR_ENA_HLP_PAGE_INT_ENA_S 1
+#define PF0INT_OICR_ENA_HLP_PAGE_INT_ENA_M MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_ENA_PSM_PAGE 0x02D02100 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_PSM_PAGE_RSV0_S 0
+#define PF0INT_OICR_ENA_PSM_PAGE_RSV0_M BIT(0)
+#define PF0INT_OICR_ENA_PSM_PAGE_INT_ENA_S 1
+#define PF0INT_OICR_ENA_PSM_PAGE_INT_ENA_M MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_HLP_PAGE 0x02D01000 /* Reset Source: CORER */
+#define PF0INT_OICR_HLP_PAGE_INTEVENT_S 0
+#define PF0INT_OICR_HLP_PAGE_INTEVENT_M BIT(0)
+#define PF0INT_OICR_HLP_PAGE_QUEUE_S 1
+#define PF0INT_OICR_HLP_PAGE_QUEUE_M BIT(1)
+#define PF0INT_OICR_HLP_PAGE_RSV1_S 2
+#define PF0INT_OICR_HLP_PAGE_RSV1_M MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_HLP_PAGE_HH_COMP_S 10
+#define PF0INT_OICR_HLP_PAGE_HH_COMP_M BIT(10)
+#define PF0INT_OICR_HLP_PAGE_TSYN_TX_S 11
+#define PF0INT_OICR_HLP_PAGE_TSYN_TX_M BIT(11)
+#define PF0INT_OICR_HLP_PAGE_TSYN_EVNT_S 12
+#define PF0INT_OICR_HLP_PAGE_TSYN_EVNT_M BIT(12)
+#define PF0INT_OICR_HLP_PAGE_TSYN_TGT_S 13
+#define PF0INT_OICR_HLP_PAGE_TSYN_TGT_M BIT(13)
+#define PF0INT_OICR_HLP_PAGE_HLP_RDY_S 14
+#define PF0INT_OICR_HLP_PAGE_HLP_RDY_M BIT(14)
+#define PF0INT_OICR_HLP_PAGE_CPM_RDY_S 15
+#define PF0INT_OICR_HLP_PAGE_CPM_RDY_M BIT(15)
+#define PF0INT_OICR_HLP_PAGE_ECC_ERR_S 16
+#define PF0INT_OICR_HLP_PAGE_ECC_ERR_M BIT(16)
+#define PF0INT_OICR_HLP_PAGE_RSV2_S 17
+#define PF0INT_OICR_HLP_PAGE_RSV2_M MAKEMASK(0x3, 17)
+#define PF0INT_OICR_HLP_PAGE_MAL_DETECT_S 19
+#define PF0INT_OICR_HLP_PAGE_MAL_DETECT_M BIT(19)
+#define PF0INT_OICR_HLP_PAGE_GRST_S 20
+#define PF0INT_OICR_HLP_PAGE_GRST_M BIT(20)
+#define PF0INT_OICR_HLP_PAGE_PCI_EXCEPTION_S 21
+#define PF0INT_OICR_HLP_PAGE_PCI_EXCEPTION_M BIT(21)
+#define PF0INT_OICR_HLP_PAGE_GPIO_S 22
+#define PF0INT_OICR_HLP_PAGE_GPIO_M BIT(22)
+#define PF0INT_OICR_HLP_PAGE_RSV3_S 23
+#define PF0INT_OICR_HLP_PAGE_RSV3_M BIT(23)
+#define PF0INT_OICR_HLP_PAGE_STORM_DETECT_S 24
+#define PF0INT_OICR_HLP_PAGE_STORM_DETECT_M BIT(24)
+#define PF0INT_OICR_HLP_PAGE_LINK_STAT_CHANGE_S 25
+#define PF0INT_OICR_HLP_PAGE_LINK_STAT_CHANGE_M BIT(25)
+#define PF0INT_OICR_HLP_PAGE_HMC_ERR_S 26
+#define PF0INT_OICR_HLP_PAGE_HMC_ERR_M BIT(26)
+#define PF0INT_OICR_HLP_PAGE_PE_PUSH_S 27
+#define PF0INT_OICR_HLP_PAGE_PE_PUSH_M BIT(27)
+#define PF0INT_OICR_HLP_PAGE_PE_CRITERR_S 28
+#define PF0INT_OICR_HLP_PAGE_PE_CRITERR_M BIT(28)
+#define PF0INT_OICR_HLP_PAGE_VFLR_S 29
+#define PF0INT_OICR_HLP_PAGE_VFLR_M BIT(29)
+#define PF0INT_OICR_HLP_PAGE_XLR_HW_DONE_S 30
+#define PF0INT_OICR_HLP_PAGE_XLR_HW_DONE_M BIT(30)
+#define PF0INT_OICR_HLP_PAGE_SWINT_S 31
+#define PF0INT_OICR_HLP_PAGE_SWINT_M BIT(31)
+#define PF0INT_OICR_PSM_PAGE 0x02D02000 /* Reset Source: CORER */
+#define PF0INT_OICR_PSM_PAGE_INTEVENT_S 0
+#define PF0INT_OICR_PSM_PAGE_INTEVENT_M BIT(0)
+#define PF0INT_OICR_PSM_PAGE_QUEUE_S 1
+#define PF0INT_OICR_PSM_PAGE_QUEUE_M BIT(1)
+#define PF0INT_OICR_PSM_PAGE_RSV1_S 2
+#define PF0INT_OICR_PSM_PAGE_RSV1_M MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_PSM_PAGE_HH_COMP_S 10
+#define PF0INT_OICR_PSM_PAGE_HH_COMP_M BIT(10)
+#define PF0INT_OICR_PSM_PAGE_TSYN_TX_S 11
+#define PF0INT_OICR_PSM_PAGE_TSYN_TX_M BIT(11)
+#define PF0INT_OICR_PSM_PAGE_TSYN_EVNT_S 12
+#define PF0INT_OICR_PSM_PAGE_TSYN_EVNT_M BIT(12)
+#define PF0INT_OICR_PSM_PAGE_TSYN_TGT_S 13
+#define PF0INT_OICR_PSM_PAGE_TSYN_TGT_M BIT(13)
+#define PF0INT_OICR_PSM_PAGE_HLP_RDY_S 14
+#define PF0INT_OICR_PSM_PAGE_HLP_RDY_M BIT(14)
+#define PF0INT_OICR_PSM_PAGE_CPM_RDY_S 15
+#define PF0INT_OICR_PSM_PAGE_CPM_RDY_M BIT(15)
+#define PF0INT_OICR_PSM_PAGE_ECC_ERR_S 16
+#define PF0INT_OICR_PSM_PAGE_ECC_ERR_M BIT(16)
+#define PF0INT_OICR_PSM_PAGE_RSV2_S 17
+#define PF0INT_OICR_PSM_PAGE_RSV2_M MAKEMASK(0x3, 17)
+#define PF0INT_OICR_PSM_PAGE_MAL_DETECT_S 19
+#define PF0INT_OICR_PSM_PAGE_MAL_DETECT_M BIT(19)
+#define PF0INT_OICR_PSM_PAGE_GRST_S 20
+#define PF0INT_OICR_PSM_PAGE_GRST_M BIT(20)
+#define PF0INT_OICR_PSM_PAGE_PCI_EXCEPTION_S 21
+#define PF0INT_OICR_PSM_PAGE_PCI_EXCEPTION_M BIT(21)
+#define PF0INT_OICR_PSM_PAGE_GPIO_S 22
+#define PF0INT_OICR_PSM_PAGE_GPIO_M BIT(22)
+#define PF0INT_OICR_PSM_PAGE_RSV3_S 23
+#define PF0INT_OICR_PSM_PAGE_RSV3_M BIT(23)
+#define PF0INT_OICR_PSM_PAGE_STORM_DETECT_S 24
+#define PF0INT_OICR_PSM_PAGE_STORM_DETECT_M BIT(24)
+#define PF0INT_OICR_PSM_PAGE_LINK_STAT_CHANGE_S 25
+#define PF0INT_OICR_PSM_PAGE_LINK_STAT_CHANGE_M BIT(25)
+#define PF0INT_OICR_PSM_PAGE_HMC_ERR_S 26
+#define PF0INT_OICR_PSM_PAGE_HMC_ERR_M BIT(26)
+#define PF0INT_OICR_PSM_PAGE_PE_PUSH_S 27
+#define PF0INT_OICR_PSM_PAGE_PE_PUSH_M BIT(27)
+#define PF0INT_OICR_PSM_PAGE_PE_CRITERR_S 28
+#define PF0INT_OICR_PSM_PAGE_PE_CRITERR_M BIT(28)
+#define PF0INT_OICR_PSM_PAGE_VFLR_S 29
+#define PF0INT_OICR_PSM_PAGE_VFLR_M BIT(29)
+#define PF0INT_OICR_PSM_PAGE_XLR_HW_DONE_S 30
+#define PF0INT_OICR_PSM_PAGE_XLR_HW_DONE_M BIT(30)
+#define PF0INT_OICR_PSM_PAGE_SWINT_S 31
+#define PF0INT_OICR_PSM_PAGE_SWINT_M BIT(31)
+#define QRX_TAIL_PAGE(_QRX) (0x03800000 + ((_QRX) * 4096)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define QRX_TAIL_PAGE_MAX_INDEX 2047
+#define QRX_TAIL_PAGE_TAIL_S 0
+#define QRX_TAIL_PAGE_TAIL_M MAKEMASK(0x1FFF, 0)
+#define QTX_COMM_DBELL_PAGE(_DBQM) (0x04000000 + ((_DBQM) * 4096)) /* _i=0...16383 */ /* Reset Source: CORER */
+#define QTX_COMM_DBELL_PAGE_MAX_INDEX 16383
+#define QTX_COMM_DBELL_PAGE_QTX_COMM_DBELL_S 0
+#define QTX_COMM_DBELL_PAGE_QTX_COMM_DBELL_M MAKEMASK(0xFFFFFFFF, 0)
+#define QTX_COMM_DBLQ_DBELL_PAGE(_DBLQ) (0x02F00000 + ((_DBLQ) * 8)) /* _i=0...255 */ /* Reset Source: CORER */
+#define QTX_COMM_DBLQ_DBELL_PAGE_MAX_INDEX 255
+#define QTX_COMM_DBLQ_DBELL_PAGE_TAIL_S 0
+#define QTX_COMM_DBLQ_DBELL_PAGE_TAIL_M MAKEMASK(0x1FFF, 0)
+#define VSI_MBX_ARQBAH(_VSI) (0x02000018 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQBAH_MAX_INDEX 767
+#define VSI_MBX_ARQBAH_ARQBAH_S 0
+#define VSI_MBX_ARQBAH_ARQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define VSI_MBX_ARQBAL(_VSI) (0x02000014 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQBAL_MAX_INDEX 767
+#define VSI_MBX_ARQBAL_ARQBAL_LSB_S 0
+#define VSI_MBX_ARQBAL_ARQBAL_LSB_M MAKEMASK(0x3F, 0)
+#define VSI_MBX_ARQBAL_ARQBAL_S 6
+#define VSI_MBX_ARQBAL_ARQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define VSI_MBX_ARQH(_VSI) (0x02000020 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQH_MAX_INDEX 767
+#define VSI_MBX_ARQH_ARQH_S 0
+#define VSI_MBX_ARQH_ARQH_M MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ARQLEN(_VSI) (0x0200001C + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQLEN_MAX_INDEX 767
+#define VSI_MBX_ARQLEN_ARQLEN_S 0
+#define VSI_MBX_ARQLEN_ARQLEN_M MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ARQLEN_ARQVFE_S 28
+#define VSI_MBX_ARQLEN_ARQVFE_M BIT(28)
+#define VSI_MBX_ARQLEN_ARQOVFL_S 29
+#define VSI_MBX_ARQLEN_ARQOVFL_M BIT(29)
+#define VSI_MBX_ARQLEN_ARQCRIT_S 30
+#define VSI_MBX_ARQLEN_ARQCRIT_M BIT(30)
+#define VSI_MBX_ARQLEN_ARQENABLE_S 31
+#define VSI_MBX_ARQLEN_ARQENABLE_M BIT(31)
+#define VSI_MBX_ARQT(_VSI) (0x02000024 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQT_MAX_INDEX 767
+#define VSI_MBX_ARQT_ARQT_S 0
+#define VSI_MBX_ARQT_ARQT_M MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ATQBAH(_VSI) (0x02000004 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQBAH_MAX_INDEX 767
+#define VSI_MBX_ATQBAH_ATQBAH_S 0
+#define VSI_MBX_ATQBAH_ATQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define VSI_MBX_ATQBAL(_VSI) (0x02000000 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQBAL_MAX_INDEX 767
+#define VSI_MBX_ATQBAL_ATQBAL_S 6
+#define VSI_MBX_ATQBAL_ATQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define VSI_MBX_ATQH(_VSI) (0x0200000C + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQH_MAX_INDEX 767
+#define VSI_MBX_ATQH_ATQH_S 0
+#define VSI_MBX_ATQH_ATQH_M MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ATQLEN(_VSI) (0x02000008 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQLEN_MAX_INDEX 767
+#define VSI_MBX_ATQLEN_ATQLEN_S 0
+#define VSI_MBX_ATQLEN_ATQLEN_M MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ATQLEN_ATQVFE_S 28
+#define VSI_MBX_ATQLEN_ATQVFE_M BIT(28)
+#define VSI_MBX_ATQLEN_ATQOVFL_S 29
+#define VSI_MBX_ATQLEN_ATQOVFL_M BIT(29)
+#define VSI_MBX_ATQLEN_ATQCRIT_S 30
+#define VSI_MBX_ATQLEN_ATQCRIT_M BIT(30)
+#define VSI_MBX_ATQLEN_ATQENABLE_S 31
+#define VSI_MBX_ATQLEN_ATQENABLE_M BIT(31)
+#define VSI_MBX_ATQT(_VSI) (0x02000010 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQT_MAX_INDEX 767
+#define VSI_MBX_ATQT_ATQT_S 0
+#define VSI_MBX_ATQT_ATQT_M MAKEMASK(0x3FF, 0)
+#define GL_ACL_ACCESS_CMD 0x00391000 /* Reset Source: CORER */
+#define GL_ACL_ACCESS_CMD_TABLE_ID_S 0
+#define GL_ACL_ACCESS_CMD_TABLE_ID_M MAKEMASK(0xFF, 0)
+#define GL_ACL_ACCESS_CMD_ENTRY_INDEX_S 8
+#define GL_ACL_ACCESS_CMD_ENTRY_INDEX_M MAKEMASK(0xFFF, 8)
+#define GL_ACL_ACCESS_CMD_OPERATION_S 20
+#define GL_ACL_ACCESS_CMD_OPERATION_M BIT(20)
+#define GL_ACL_ACCESS_CMD_OBJ_TYPE_S 24
+#define GL_ACL_ACCESS_CMD_OBJ_TYPE_M MAKEMASK(0xF, 24)
+#define GL_ACL_ACCESS_CMD_EXECUTE_S 31
+#define GL_ACL_ACCESS_CMD_EXECUTE_M BIT(31)
+#define GL_ACL_ACCESS_STATUS 0x00391004 /* Reset Source: CORER */
+#define GL_ACL_ACCESS_STATUS_BUSY_S 0
+#define GL_ACL_ACCESS_STATUS_BUSY_M BIT(0)
+#define GL_ACL_ACCESS_STATUS_DONE_S 1
+#define GL_ACL_ACCESS_STATUS_DONE_M BIT(1)
+#define GL_ACL_ACCESS_STATUS_ERROR_S 2
+#define GL_ACL_ACCESS_STATUS_ERROR_M BIT(2)
+#define GL_ACL_ACCESS_STATUS_OPERATION_S 3
+#define GL_ACL_ACCESS_STATUS_OPERATION_M BIT(3)
+#define GL_ACL_ACCESS_STATUS_ERROR_CODE_S 4
+#define GL_ACL_ACCESS_STATUS_ERROR_CODE_M MAKEMASK(0xF, 4)
+#define GL_ACL_ACCESS_STATUS_TABLE_ID_S 8
+#define GL_ACL_ACCESS_STATUS_TABLE_ID_M MAKEMASK(0xFF, 8)
+#define GL_ACL_ACCESS_STATUS_ENTRY_INDEX_S 16
+#define GL_ACL_ACCESS_STATUS_ENTRY_INDEX_M MAKEMASK(0xFFF, 16)
+#define GL_ACL_ACCESS_STATUS_OBJ_TYPE_S 28
+#define GL_ACL_ACCESS_STATUS_OBJ_TYPE_M MAKEMASK(0xF, 28)
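+/*
+ * Usage sketch (hand-written; the command/status handshake is inferred
+ * from the register names, not confirmed): program table, entry and
+ * operation in GL_ACL_ACCESS_CMD, set EXECUTE, then poll the status:
+ *
+ *	wr32(hw, GL_ACL_ACCESS_CMD,
+ *	     ((tbl << GL_ACL_ACCESS_CMD_TABLE_ID_S) &
+ *	      GL_ACL_ACCESS_CMD_TABLE_ID_M) |
+ *	     ((idx << GL_ACL_ACCESS_CMD_ENTRY_INDEX_S) &
+ *	      GL_ACL_ACCESS_CMD_ENTRY_INDEX_M) |
+ *	     GL_ACL_ACCESS_CMD_EXECUTE_M);
+ *	while (rd32(hw, GL_ACL_ACCESS_STATUS) & GL_ACL_ACCESS_STATUS_BUSY_M)
+ *		;	/* poll; bound with a timeout in real code */
+ */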
+#define GL_ACL_ACTMEM_ACT(_i) (0x00393824 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GL_ACL_ACTMEM_ACT_MAX_INDEX 1
+#define GL_ACL_ACTMEM_ACT_VALUE_S 0
+#define GL_ACL_ACTMEM_ACT_VALUE_M MAKEMASK(0xFFFF, 0)
+#define GL_ACL_ACTMEM_ACT_MDID_S 20
+#define GL_ACL_ACTMEM_ACT_MDID_M MAKEMASK(0x3F, 20)
+#define GL_ACL_ACTMEM_ACT_PRIORITY_S 28
+#define GL_ACL_ACTMEM_ACT_PRIORITY_M MAKEMASK(0x7, 28)
+#define GL_ACL_CHICKEN_REGISTER 0x00393810 /* Reset Source: CORER */
+#define GL_ACL_CHICKEN_REGISTER_TCAM_DATA_POL_CH_S 0
+#define GL_ACL_CHICKEN_REGISTER_TCAM_DATA_POL_CH_M BIT(0)
+#define GL_ACL_CHICKEN_REGISTER_TCAM_ADDR_POL_CH_S 1
+#define GL_ACL_CHICKEN_REGISTER_TCAM_ADDR_POL_CH_M BIT(1)
+#define GL_ACL_DEFAULT_ACT(_i) (0x00391168 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GL_ACL_DEFAULT_ACT_MAX_INDEX 15
+#define GL_ACL_DEFAULT_ACT_VALUE_S 0
+#define GL_ACL_DEFAULT_ACT_VALUE_M MAKEMASK(0xFFFF, 0)
+#define GL_ACL_DEFAULT_ACT_MDID_S 20
+#define GL_ACL_DEFAULT_ACT_MDID_M MAKEMASK(0x3F, 20)
+#define GL_ACL_DEFAULT_ACT_PRIORITY_S 28
+#define GL_ACL_DEFAULT_ACT_PRIORITY_M MAKEMASK(0x7, 28)
+#define GL_ACL_PROFILE_BWSB_SEL(_i) (0x00391008 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_BWSB_SEL_MAX_INDEX 31
+#define GL_ACL_PROFILE_BWSB_SEL_BSB_SRC_OFF_S 0
+#define GL_ACL_PROFILE_BWSB_SEL_BSB_SRC_OFF_M MAKEMASK(0x3F, 0)
+#define GL_ACL_PROFILE_BWSB_SEL_WSB_SRC_OFF_S 8
+#define GL_ACL_PROFILE_BWSB_SEL_WSB_SRC_OFF_M MAKEMASK(0x1F, 8)
+#define GL_ACL_PROFILE_DWSB_SEL(_i) (0x00391088 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_DWSB_SEL_MAX_INDEX 15
+#define GL_ACL_PROFILE_DWSB_SEL_DWORD_SEL_OFF_S 0
+#define GL_ACL_PROFILE_DWSB_SEL_DWORD_SEL_OFF_M MAKEMASK(0xF, 0)
+#define GL_ACL_PROFILE_PF_CFG(_i) (0x003910C8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_PF_CFG_MAX_INDEX 7
+#define GL_ACL_PROFILE_PF_CFG_SCEN_SEL_S 0
+#define GL_ACL_PROFILE_PF_CFG_SCEN_SEL_M MAKEMASK(0x3F, 0)
+#define GL_ACL_PROFILE_RC_CFG(_i) (0x003910E8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_RC_CFG_MAX_INDEX 7
+#define GL_ACL_PROFILE_RC_CFG_LOW_BOUND_S 0
+#define GL_ACL_PROFILE_RC_CFG_LOW_BOUND_M MAKEMASK(0xFFFF, 0)
+#define GL_ACL_PROFILE_RC_CFG_HIGH_BOUND_S 16
+#define GL_ACL_PROFILE_RC_CFG_HIGH_BOUND_M MAKEMASK(0xFFFF, 16)
+#define GL_ACL_PROFILE_RCF_MASK(_i) (0x00391108 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_RCF_MASK_MAX_INDEX 7
+#define GL_ACL_PROFILE_RCF_MASK_MASK_S 0
+#define GL_ACL_PROFILE_RCF_MASK_MASK_M MAKEMASK(0xFFFF, 0)
+#define GL_ACL_SCENARIO_ACT_CFG(_i) (0x003938AC + ((_i) * 4)) /* _i=0...19 */ /* Reset Source: CORER */
+#define GL_ACL_SCENARIO_ACT_CFG_MAX_INDEX 19
+#define GL_ACL_SCENARIO_ACT_CFG_ACTMEM_SEL_S 0
+#define GL_ACL_SCENARIO_ACT_CFG_ACTMEM_SEL_M MAKEMASK(0xF, 0)
+#define GL_ACL_SCENARIO_ACT_CFG_ACTMEM_EN_S 8
+#define GL_ACL_SCENARIO_ACT_CFG_ACTMEM_EN_M BIT(8)
+#define GL_ACL_SCENARIO_CFG_H(_i) (0x0039386C + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GL_ACL_SCENARIO_CFG_H_MAX_INDEX 15
+#define GL_ACL_SCENARIO_CFG_H_SELECT4_S 0
+#define GL_ACL_SCENARIO_CFG_H_SELECT4_M MAKEMASK(0x1F, 0)
+#define GL_ACL_SCENARIO_CFG_H_CHUNKMASK_S 8
+#define GL_ACL_SCENARIO_CFG_H_CHUNKMASK_M MAKEMASK(0xFF, 8)
+#define GL_ACL_SCENARIO_CFG_H_START_COMPARE_S 24
+#define GL_ACL_SCENARIO_CFG_H_START_COMPARE_M BIT(24)
+#define GL_ACL_SCENARIO_CFG_H_START_SET_S 28
+#define GL_ACL_SCENARIO_CFG_H_START_SET_M BIT(28)
+#define GL_ACL_SCENARIO_CFG_L(_i) (0x0039382C + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GL_ACL_SCENARIO_CFG_L_MAX_INDEX 15
+#define GL_ACL_SCENARIO_CFG_L_SELECT0_S 0
+#define GL_ACL_SCENARIO_CFG_L_SELECT0_M MAKEMASK(0x7F, 0)
+#define GL_ACL_SCENARIO_CFG_L_SELECT1_S 8
+#define GL_ACL_SCENARIO_CFG_L_SELECT1_M MAKEMASK(0x7F, 8)
+#define GL_ACL_SCENARIO_CFG_L_SELECT2_S 16
+#define GL_ACL_SCENARIO_CFG_L_SELECT2_M MAKEMASK(0x7F, 16)
+#define GL_ACL_SCENARIO_CFG_L_SELECT3_S 24
+#define GL_ACL_SCENARIO_CFG_L_SELECT3_M MAKEMASK(0x7F, 24)
+#define GL_ACL_TCAM_KEY_H 0x00393818 /* Reset Source: CORER */
+#define GL_ACL_TCAM_KEY_H_GL_ACL_FFU_TCAM_KEY_H_S 0
+#define GL_ACL_TCAM_KEY_H_GL_ACL_FFU_TCAM_KEY_H_M MAKEMASK(0xFF, 0)
+#define GL_ACL_TCAM_KEY_INV_H 0x00393820 /* Reset Source: CORER */
+#define GL_ACL_TCAM_KEY_INV_H_GL_ACL_FFU_TCAM_KEY_INV_H_S 0
+#define GL_ACL_TCAM_KEY_INV_H_GL_ACL_FFU_TCAM_KEY_INV_H_M MAKEMASK(0xFF, 0)
+#define GL_ACL_TCAM_KEY_INV_L 0x0039381C /* Reset Source: CORER */
+#define GL_ACL_TCAM_KEY_INV_L_GL_ACL_FFU_TCAM_KEY_INV_L_S 0
+#define GL_ACL_TCAM_KEY_INV_L_GL_ACL_FFU_TCAM_KEY_INV_L_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACL_TCAM_KEY_L 0x00393814 /* Reset Source: CORER */
+#define GL_ACL_TCAM_KEY_L_GL_ACL_FFU_TCAM_KEY_L_S 0
+#define GL_ACL_TCAM_KEY_L_GL_ACL_FFU_TCAM_KEY_L_M MAKEMASK(0xFFFFFFFF, 0)
+#define VSI_ACL_DEF_SEL(_VSI) (0x00391800 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_ACL_DEF_SEL_MAX_INDEX 767
+#define VSI_ACL_DEF_SEL_RX_PROFILE_MISS_SEL_S 0
+#define VSI_ACL_DEF_SEL_RX_PROFILE_MISS_SEL_M MAKEMASK(0x3, 0)
+#define VSI_ACL_DEF_SEL_RX_TABLES_MISS_SEL_S 4
+#define VSI_ACL_DEF_SEL_RX_TABLES_MISS_SEL_M MAKEMASK(0x3, 4)
+#define VSI_ACL_DEF_SEL_TX_PROFILE_MISS_SEL_S 8
+#define VSI_ACL_DEF_SEL_TX_PROFILE_MISS_SEL_M MAKEMASK(0x3, 8)
+#define VSI_ACL_DEF_SEL_TX_TABLES_MISS_SEL_S 12
+#define VSI_ACL_DEF_SEL_TX_TABLES_MISS_SEL_M MAKEMASK(0x3, 12)
+#define GL_SWT_L2TAG0(_i) (0x000492A8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAG0_MAX_INDEX 7
+#define GL_SWT_L2TAG0_DATA_S 0
+#define GL_SWT_L2TAG0_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_L2TAG1(_i) (0x000492C8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAG1_MAX_INDEX 7
+#define GL_SWT_L2TAG1_DATA_S 0
+#define GL_SWT_L2TAG1_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_L2TAGCTRL(_i) (0x001D2660 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAGCTRL_MAX_INDEX 7
+#define GL_SWT_L2TAGCTRL_LENGTH_S 0
+#define GL_SWT_L2TAGCTRL_LENGTH_M MAKEMASK(0x7F, 0)
+#define GL_SWT_L2TAGCTRL_HAS_UP_S 7
+#define GL_SWT_L2TAGCTRL_HAS_UP_M BIT(7)
+#define GL_SWT_L2TAGCTRL_ISVLAN_S 9
+#define GL_SWT_L2TAGCTRL_ISVLAN_M BIT(9)
+#define GL_SWT_L2TAGCTRL_INNERUP_S 10
+#define GL_SWT_L2TAGCTRL_INNERUP_M BIT(10)
+#define GL_SWT_L2TAGCTRL_OUTERUP_S 11
+#define GL_SWT_L2TAGCTRL_OUTERUP_M BIT(11)
+#define GL_SWT_L2TAGCTRL_LONG_S 12
+#define GL_SWT_L2TAGCTRL_LONG_M BIT(12)
+#define GL_SWT_L2TAGCTRL_ISMPLS_S 13
+#define GL_SWT_L2TAGCTRL_ISMPLS_M BIT(13)
+#define GL_SWT_L2TAGCTRL_ISNSH_S 14
+#define GL_SWT_L2TAGCTRL_ISNSH_M BIT(14)
+#define GL_SWT_L2TAGCTRL_ETHERTYPE_S 16
+#define GL_SWT_L2TAGCTRL_ETHERTYPE_M MAKEMASK(0xFFFF, 16)
+#define GL_SWT_L2TAGRXEB(_i) (0x00052000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAGRXEB_MAX_INDEX 7
+#define GL_SWT_L2TAGRXEB_OFFSET_S 0
+#define GL_SWT_L2TAGRXEB_OFFSET_M MAKEMASK(0xFF, 0)
+#define GL_SWT_L2TAGRXEB_LENGTH_S 8
+#define GL_SWT_L2TAGRXEB_LENGTH_M MAKEMASK(0x3, 8)
+#define GL_SWT_L2TAGTXIB(_i) (0x000492E8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAGTXIB_MAX_INDEX 7
+#define GL_SWT_L2TAGTXIB_OFFSET_S 0
+#define GL_SWT_L2TAGTXIB_OFFSET_M MAKEMASK(0xFF, 0)
+#define GL_SWT_L2TAGTXIB_LENGTH_S 8
+#define GL_SWT_L2TAGTXIB_LENGTH_M MAKEMASK(0x3, 8)
+#define PRT_TDPUL2TAGSEN 0x00040BA0 /* Reset Source: CORER */
+#define PRT_TDPUL2TAGSEN_ENABLE_S 0
+#define PRT_TDPUL2TAGSEN_ENABLE_M MAKEMASK(0xFF, 0)
+#define PRT_TDPUL2TAGSEN_NONLAST_TAG_S 8
+#define PRT_TDPUL2TAGSEN_NONLAST_TAG_M MAKEMASK(0xFF, 8)
+#define GLCM_PE_CACHESIZE 0x005046B4 /* Reset Source: CORER */
+#define GLCM_PE_CACHESIZE_WORD_SIZE_S 0
+#define GLCM_PE_CACHESIZE_WORD_SIZE_M MAKEMASK(0xFFF, 0)
+#define GLCM_PE_CACHESIZE_SETS_S 12
+#define GLCM_PE_CACHESIZE_SETS_M MAKEMASK(0xF, 12)
+#define GLCM_PE_CACHESIZE_WAYS_S 16
+#define GLCM_PE_CACHESIZE_WAYS_M MAKEMASK(0x1FF, 16)
+#define GLCOMM_CQ_CTL(_CQ) (0x000F0000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLCOMM_CQ_CTL_MAX_INDEX 511
+#define GLCOMM_CQ_CTL_COMP_TYPE_S 0
+#define GLCOMM_CQ_CTL_COMP_TYPE_M MAKEMASK(0x7, 0)
+#define GLCOMM_CQ_CTL_CMD_S 4
+#define GLCOMM_CQ_CTL_CMD_M MAKEMASK(0x7, 4)
+#define GLCOMM_CQ_CTL_ID_S 16
+#define GLCOMM_CQ_CTL_ID_M MAKEMASK(0x3FFF, 16)
+#define GLCOMM_MIN_MAX_PKT 0x000FC064 /* Reset Source: CORER */
+#define GLCOMM_MIN_MAX_PKT_MAHDL_S 0
+#define GLCOMM_MIN_MAX_PKT_MAHDL_M MAKEMASK(0x3FFF, 0)
+#define GLCOMM_MIN_MAX_PKT_MIHDL_S 16
+#define GLCOMM_MIN_MAX_PKT_MIHDL_M MAKEMASK(0x3F, 16)
+#define GLCOMM_MIN_MAX_PKT_LSO_COMS_MIHDL_S 22
+#define GLCOMM_MIN_MAX_PKT_LSO_COMS_MIHDL_M MAKEMASK(0x3FF, 22)
+#define GLCOMM_PKT_SHAPER_PROF(_i) (0x002D2DA8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLCOMM_PKT_SHAPER_PROF_MAX_INDEX 7
+#define GLCOMM_PKT_SHAPER_PROF_PKTCNT_S 0
+#define GLCOMM_PKT_SHAPER_PROF_PKTCNT_M MAKEMASK(0x3F, 0)
+#define GLCOMM_QTX_CNTX_CTL 0x002D2DC8 /* Reset Source: CORER */
+#define GLCOMM_QTX_CNTX_CTL_QUEUE_ID_S 0
+#define GLCOMM_QTX_CNTX_CTL_QUEUE_ID_M MAKEMASK(0x3FFF, 0)
+#define GLCOMM_QTX_CNTX_CTL_CMD_S 16
+#define GLCOMM_QTX_CNTX_CTL_CMD_M MAKEMASK(0x7, 16)
+#define GLCOMM_QTX_CNTX_CTL_CMD_EXEC_S 19
+#define GLCOMM_QTX_CNTX_CTL_CMD_EXEC_M BIT(19)
+#define GLCOMM_QTX_CNTX_DATA(_i) (0x002D2D40 + ((_i) * 4)) /* _i=0...9 */ /* Reset Source: CORER */
+#define GLCOMM_QTX_CNTX_DATA_MAX_INDEX 9
+#define GLCOMM_QTX_CNTX_DATA_DATA_S 0
+#define GLCOMM_QTX_CNTX_DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLCOMM_QTX_CNTX_STAT 0x002D2DCC /* Reset Source: CORER */
+#define GLCOMM_QTX_CNTX_STAT_CMD_IN_PROG_S 0
+#define GLCOMM_QTX_CNTX_STAT_CMD_IN_PROG_M BIT(0)
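+/*
+ * Usage sketch (hand-written; flow inferred from the register names): the
+ * _CTL/_DATA/_STAT trio reads like an indirect queue-context window --
+ * select queue and command, kick CMD_EXEC, wait for CMD_IN_PROG to clear,
+ * then read the context dwords. Assuming rd32()/wr32():
+ *
+ *	wr32(hw, GLCOMM_QTX_CNTX_CTL,
+ *	     ((qid << GLCOMM_QTX_CNTX_CTL_QUEUE_ID_S) &
+ *	      GLCOMM_QTX_CNTX_CTL_QUEUE_ID_M) |
+ *	     ((cmd << GLCOMM_QTX_CNTX_CTL_CMD_S) &
+ *	      GLCOMM_QTX_CNTX_CTL_CMD_M) |
+ *	     GLCOMM_QTX_CNTX_CTL_CMD_EXEC_M);
+ *	while (rd32(hw, GLCOMM_QTX_CNTX_STAT) &
+ *	       GLCOMM_QTX_CNTX_STAT_CMD_IN_PROG_M)
+ *		;	/* poll with a timeout in real code */
+ *	for (i = 0; i <= GLCOMM_QTX_CNTX_DATA_MAX_INDEX; i++)
+ *		ctx[i] = rd32(hw, GLCOMM_QTX_CNTX_DATA(i));
+ */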
+#define GLCOMM_QUANTA_PROF(_i) (0x002D2D68 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLCOMM_QUANTA_PROF_MAX_INDEX 15
+#define GLCOMM_QUANTA_PROF_QUANTA_SIZE_S 0
+#define GLCOMM_QUANTA_PROF_QUANTA_SIZE_M MAKEMASK(0x3FFF, 0)
+#define GLCOMM_QUANTA_PROF_MAX_CMD_S 16
+#define GLCOMM_QUANTA_PROF_MAX_CMD_M MAKEMASK(0xFF, 16)
+#define GLCOMM_QUANTA_PROF_MAX_DESC_S 24
+#define GLCOMM_QUANTA_PROF_MAX_DESC_M MAKEMASK(0x3F, 24)
+#define GLLAN_TCLAN_CACHE_CTL 0x000FC0B8 /* Reset Source: CORER */
+#define GLLAN_TCLAN_CACHE_CTL_MIN_FETCH_THRESH_S 0
+#define GLLAN_TCLAN_CACHE_CTL_MIN_FETCH_THRESH_M MAKEMASK(0x3F, 0)
+#define GLLAN_TCLAN_CACHE_CTL_FETCH_CL_ALIGN_S 6
+#define GLLAN_TCLAN_CACHE_CTL_FETCH_CL_ALIGN_M BIT(6)
+#define GLLAN_TCLAN_CACHE_CTL_MIN_ALLOC_THRESH_S 7
+#define GLLAN_TCLAN_CACHE_CTL_MIN_ALLOC_THRESH_M MAKEMASK(0x7F, 7)
+#define GLLAN_TCLAN_CACHE_CTL_CACHE_ENTRY_CNT_S 14
+#define GLLAN_TCLAN_CACHE_CTL_CACHE_ENTRY_CNT_M MAKEMASK(0xFF, 14)
+#define GLLAN_TCLAN_CACHE_CTL_CACHE_DESC_LIM_S 22
+#define GLLAN_TCLAN_CACHE_CTL_CACHE_DESC_LIM_M MAKEMASK(0x3FF, 22)
+#define GLTCLAN_CQ_CNTX0(_CQ) (0x000F0800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX0_MAX_INDEX 511
+#define GLTCLAN_CQ_CNTX0_RING_ADDR_LSB_S 0
+#define GLTCLAN_CQ_CNTX0_RING_ADDR_LSB_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX1(_CQ) (0x000F1000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX1_MAX_INDEX 511
+#define GLTCLAN_CQ_CNTX1_RING_ADDR_MSB_S 0
+#define GLTCLAN_CQ_CNTX1_RING_ADDR_MSB_M MAKEMASK(0x1FFFFFF, 0)
+#define GLTCLAN_CQ_CNTX10(_CQ) (0x000F5800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX10_MAX_INDEX 511
+#define GLTCLAN_CQ_CNTX10_CQ_CACHLINE_S 0
+#define GLTCLAN_CQ_CNTX10_CQ_CACHLINE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX11(_CQ) (0x000F6000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX11_MAX_INDEX 511
+#define GLTCLAN_CQ_CNTX11_CQ_CACHLINE_S 0
+#define GLTCLAN_CQ_CNTX11_CQ_CACHLINE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX12(_CQ) (0x000F6800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX12_MAX_INDEX 511
+#define GLTCLAN_CQ_CNTX12_CQ_CACHLINE_S 0
+#define GLTCLAN_CQ_CNTX12_CQ_CACHLINE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX13(_CQ) (0x000F7000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX13_MAX_INDEX 511
+#define GLTCLAN_CQ_CNTX13_CQ_CACHLINE_S 0
+#define GLTCLAN_CQ_CNTX13_CQ_CACHLINE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX14(_CQ) (0x000F7800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX14_MAX_INDEX 511
+#define GLTCLAN_CQ_CNTX14_CQ_CACHLINE_S 0
+#define GLTCLAN_CQ_CNTX14_CQ_CACHLINE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX15(_CQ) (0x000F8000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX15_MAX_INDEX 511
+#define GLTCLAN_CQ_CNTX15_CQ_CACHLINE_S 0
+#define GLTCLAN_CQ_CNTX15_CQ_CACHLINE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX16(_CQ) (0x000F8800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX16_MAX_INDEX 511
+#define GLTCLAN_CQ_CNTX16_CQ_CACHLINE_S 0
+#define GLTCLAN_CQ_CNTX16_CQ_CACHLINE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX17(_CQ) (0x000F9000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX17_MAX_INDEX 511
+#define GLTCLAN_CQ_CNTX17_CQ_CACHLINE_S 0
+#define GLTCLAN_CQ_CNTX17_CQ_CACHLINE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX18(_CQ) (0x000F9800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX18_MAX_INDEX 511
+#define GLTCLAN_CQ_CNTX18_CQ_CACHLINE_S 0
+#define GLTCLAN_CQ_CNTX18_CQ_CACHLINE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX19(_CQ) (0x000FA000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX19_MAX_INDEX 511
+#define GLTCLAN_CQ_CNTX19_CQ_CACHLINE_S 0
+#define GLTCLAN_CQ_CNTX19_CQ_CACHLINE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX2(_CQ) (0x000F1800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX2_MAX_INDEX 511
+#define GLTCLAN_CQ_CNTX2_RING_LEN_S 0
+#define GLTCLAN_CQ_CNTX2_RING_LEN_M MAKEMASK(0x3FFFF, 0)
+#define GLTCLAN_CQ_CNTX20(_CQ) (0x000FA800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX20_MAX_INDEX 511
+#define GLTCLAN_CQ_CNTX20_CQ_CACHLINE_S 0
+#define GLTCLAN_CQ_CNTX20_CQ_CACHLINE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX21(_CQ) (0x000FB000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX21_MAX_INDEX 511
+#define GLTCLAN_CQ_CNTX21_CQ_CACHLINE_S 0
+#define GLTCLAN_CQ_CNTX21_CQ_CACHLINE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX3(_CQ) (0x000F2000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX3_MAX_INDEX 511
+#define GLTCLAN_CQ_CNTX3_GENERATION_S 0
+#define GLTCLAN_CQ_CNTX3_GENERATION_M BIT(0)
+#define GLTCLAN_CQ_CNTX3_CQ_WR_PTR_S 1
+#define GLTCLAN_CQ_CNTX3_CQ_WR_PTR_M MAKEMASK(0x3FFFFF, 1)
+#define GLTCLAN_CQ_CNTX4(_CQ) (0x000F2800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX4_MAX_INDEX 511
+#define GLTCLAN_CQ_CNTX4_PF_NUM_S 0
+#define GLTCLAN_CQ_CNTX4_PF_NUM_M MAKEMASK(0x7, 0)
+#define GLTCLAN_CQ_CNTX4_VMVF_NUM_S 3
+#define GLTCLAN_CQ_CNTX4_VMVF_NUM_M MAKEMASK(0x3FF, 3)
+#define GLTCLAN_CQ_CNTX4_VMVF_TYPE_S 13
+#define GLTCLAN_CQ_CNTX4_VMVF_TYPE_M MAKEMASK(0x3, 13)
+#define GLTCLAN_CQ_CNTX5(_CQ) (0x000F3000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX5_MAX_INDEX 511
+#define GLTCLAN_CQ_CNTX5_TPH_EN_S 0
+#define GLTCLAN_CQ_CNTX5_TPH_EN_M BIT(0)
+#define GLTCLAN_CQ_CNTX5_CPU_ID_S 1
+#define GLTCLAN_CQ_CNTX5_CPU_ID_M MAKEMASK(0xFF, 1)
+#define GLTCLAN_CQ_CNTX5_FLUSH_ON_ITR_DIS_S 9
+#define GLTCLAN_CQ_CNTX5_FLUSH_ON_ITR_DIS_M BIT(9)
+#define GLTCLAN_CQ_CNTX6(_CQ) (0x000F3800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX6_MAX_INDEX 511
+#define GLTCLAN_CQ_CNTX6_CQ_CACHLINE_S 0
+#define GLTCLAN_CQ_CNTX6_CQ_CACHLINE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX7(_CQ) (0x000F4000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX7_MAX_INDEX 511
+#define GLTCLAN_CQ_CNTX7_CQ_CACHLINE_S 0
+#define GLTCLAN_CQ_CNTX7_CQ_CACHLINE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX8(_CQ) (0x000F4800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX8_MAX_INDEX 511
+#define GLTCLAN_CQ_CNTX8_CQ_CACHLINE_S 0
+#define GLTCLAN_CQ_CNTX8_CQ_CACHLINE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX9(_CQ) (0x000F5000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX9_MAX_INDEX 511
+#define GLTCLAN_CQ_CNTX9_CQ_CACHLINE_S 0
+#define GLTCLAN_CQ_CNTX9_CQ_CACHLINE_M MAKEMASK(0xFFFFFFFF, 0)
+#define QTX_COMM_DBELL(_DBQM) (0x002C0000 + ((_DBQM) * 4)) /* _i=0...16383 */ /* Reset Source: CORER */
+#define QTX_COMM_DBELL_MAX_INDEX 16383
+#define QTX_COMM_DBELL_QTX_COMM_DBELL_S 0
+#define QTX_COMM_DBELL_QTX_COMM_DBELL_M MAKEMASK(0xFFFFFFFF, 0)
+#define QTX_COMM_DBLQ_CNTX(_i, _DBLQ) (0x002D0000 + ((_i) * 1024 + (_DBLQ) * 4)) /* _i=0...4, _DBLQ=0...255 */ /* Reset Source: CORER */
+#define QTX_COMM_DBLQ_CNTX_MAX_INDEX 4
+#define QTX_COMM_DBLQ_CNTX_DATA_S 0
+#define QTX_COMM_DBLQ_CNTX_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define QTX_COMM_DBLQ_DBELL(_DBLQ) (0x002D1400 + ((_DBLQ) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define QTX_COMM_DBLQ_DBELL_MAX_INDEX 255
+#define QTX_COMM_DBLQ_DBELL_TAIL_S 0
+#define QTX_COMM_DBLQ_DBELL_TAIL_M MAKEMASK(0x1FFF, 0)
+#define QTX_COMM_HEAD(_DBQM) (0x000E0000 + ((_DBQM) * 4)) /* _i=0...16383 */ /* Reset Source: CORER */
+#define QTX_COMM_HEAD_MAX_INDEX 16383
+#define QTX_COMM_HEAD_HEAD_S 0
+#define QTX_COMM_HEAD_HEAD_M MAKEMASK(0x1FFF, 0)
+#define QTX_COMM_HEAD_RS_PENDING_S 16
+#define QTX_COMM_HEAD_RS_PENDING_M BIT(16)
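+/*
+ * Usage sketch (hand-written): the doorbell/head pair behaves like a
+ * classic descriptor ring -- software advances the tail through
+ * QTX_COMM_DBELL(q) and reads hardware progress back from
+ * QTX_COMM_HEAD(q). Assuming rd32()/wr32():
+ *
+ *	wr32(hw, QTX_COMM_DBELL(q), tail);
+ *	head = rd32(hw, QTX_COMM_HEAD(q)) & QTX_COMM_HEAD_HEAD_M;
+ */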
+#define GL_FW_TOOL_ARQBAH 0x000801C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQBAH_ARQBAH_S 0
+#define GL_FW_TOOL_ARQBAH_ARQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_FW_TOOL_ARQBAL 0x000800C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQBAL_ARQBAL_LSB_S 0
+#define GL_FW_TOOL_ARQBAL_ARQBAL_LSB_M MAKEMASK(0x3F, 0)
+#define GL_FW_TOOL_ARQBAL_ARQBAL_S 6
+#define GL_FW_TOOL_ARQBAL_ARQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define GL_FW_TOOL_ARQH 0x000803C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQH_ARQH_S 0
+#define GL_FW_TOOL_ARQH_ARQH_M MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ARQLEN 0x000802C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQLEN_ARQLEN_S 0
+#define GL_FW_TOOL_ARQLEN_ARQLEN_M MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ARQLEN_ARQVFE_S 28
+#define GL_FW_TOOL_ARQLEN_ARQVFE_M BIT(28)
+#define GL_FW_TOOL_ARQLEN_ARQOVFL_S 29
+#define GL_FW_TOOL_ARQLEN_ARQOVFL_M BIT(29)
+#define GL_FW_TOOL_ARQLEN_ARQCRIT_S 30
+#define GL_FW_TOOL_ARQLEN_ARQCRIT_M BIT(30)
+#define GL_FW_TOOL_ARQLEN_ARQENABLE_S 31
+#define GL_FW_TOOL_ARQLEN_ARQENABLE_M BIT(31)
+#define GL_FW_TOOL_ARQT 0x000804C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQT_ARQT_S 0
+#define GL_FW_TOOL_ARQT_ARQT_M MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ATQBAH 0x00080140 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQBAH_ATQBAH_S 0
+#define GL_FW_TOOL_ATQBAH_ATQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_FW_TOOL_ATQBAL 0x00080040 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQBAL_ATQBAL_LSB_S 0
+#define GL_FW_TOOL_ATQBAL_ATQBAL_LSB_M MAKEMASK(0x3F, 0)
+#define GL_FW_TOOL_ATQBAL_ATQBAL_S 6
+#define GL_FW_TOOL_ATQBAL_ATQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define GL_FW_TOOL_ATQH 0x00080340 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQH_ATQH_S 0
+#define GL_FW_TOOL_ATQH_ATQH_M MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ATQLEN 0x00080240 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQLEN_ATQLEN_S 0
+#define GL_FW_TOOL_ATQLEN_ATQLEN_M MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ATQLEN_ATQVFE_S 28
+#define GL_FW_TOOL_ATQLEN_ATQVFE_M BIT(28)
+#define GL_FW_TOOL_ATQLEN_ATQOVFL_S 29
+#define GL_FW_TOOL_ATQLEN_ATQOVFL_M BIT(29)
+#define GL_FW_TOOL_ATQLEN_ATQCRIT_S 30
+#define GL_FW_TOOL_ATQLEN_ATQCRIT_M BIT(30)
+#define GL_FW_TOOL_ATQLEN_ATQENABLE_S 31
+#define GL_FW_TOOL_ATQLEN_ATQENABLE_M BIT(31)
+#define GL_FW_TOOL_ATQT 0x00080440 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQT_ATQT_S 0
+#define GL_FW_TOOL_ATQT_ATQT_M MAKEMASK(0x3FF, 0)
+#define GL_MBX_PASID 0x00231EC0 /* Reset Source: CORER */
+#define GL_MBX_PASID_PASID_MODE_S 0
+#define GL_MBX_PASID_PASID_MODE_M BIT(0)
+#define GL_MBX_PASID_PASID_MODE_VALID_S 1
+#define GL_MBX_PASID_PASID_MODE_VALID_M BIT(1)
+#define PF_FW_ARQBAH 0x00080180 /* Reset Source: EMPR */
+#define PF_FW_ARQBAH_ARQBAH_S 0
+#define PF_FW_ARQBAH_ARQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF_FW_ARQBAL 0x00080080 /* Reset Source: EMPR */
+#define PF_FW_ARQBAL_ARQBAL_LSB_S 0
+#define PF_FW_ARQBAL_ARQBAL_LSB_M MAKEMASK(0x3F, 0)
+#define PF_FW_ARQBAL_ARQBAL_S 6
+#define PF_FW_ARQBAL_ARQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF_FW_ARQH 0x00080380 /* Reset Source: EMPR */
+#define PF_FW_ARQH_ARQH_S 0
+#define PF_FW_ARQH_ARQH_M MAKEMASK(0x3FF, 0)
+#define PF_FW_ARQLEN 0x00080280 /* Reset Source: EMPR */
+#define PF_FW_ARQLEN_ARQLEN_S 0
+#define PF_FW_ARQLEN_ARQLEN_M MAKEMASK(0x3FF, 0)
+#define PF_FW_ARQLEN_ARQVFE_S 28
+#define PF_FW_ARQLEN_ARQVFE_M BIT(28)
+#define PF_FW_ARQLEN_ARQOVFL_S 29
+#define PF_FW_ARQLEN_ARQOVFL_M BIT(29)
+#define PF_FW_ARQLEN_ARQCRIT_S 30
+#define PF_FW_ARQLEN_ARQCRIT_M BIT(30)
+#define PF_FW_ARQLEN_ARQENABLE_S 31
+#define PF_FW_ARQLEN_ARQENABLE_M BIT(31)
+#define PF_FW_ARQT 0x00080480 /* Reset Source: EMPR */
+#define PF_FW_ARQT_ARQT_S 0
+#define PF_FW_ARQT_ARQT_M MAKEMASK(0x3FF, 0)
+#define PF_FW_ATQBAH 0x00080100 /* Reset Source: EMPR */
+#define PF_FW_ATQBAH_ATQBAH_S 0
+#define PF_FW_ATQBAH_ATQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF_FW_ATQBAL 0x00080000 /* Reset Source: EMPR */
+#define PF_FW_ATQBAL_ATQBAL_LSB_S 0
+#define PF_FW_ATQBAL_ATQBAL_LSB_M MAKEMASK(0x3F, 0)
+#define PF_FW_ATQBAL_ATQBAL_S 6
+#define PF_FW_ATQBAL_ATQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF_FW_ATQH 0x00080300 /* Reset Source: EMPR */
+#define PF_FW_ATQH_ATQH_S 0
+#define PF_FW_ATQH_ATQH_M MAKEMASK(0x3FF, 0)
+#define PF_FW_ATQLEN 0x00080200 /* Reset Source: EMPR */
+#define PF_FW_ATQLEN_ATQLEN_S 0
+#define PF_FW_ATQLEN_ATQLEN_M MAKEMASK(0x3FF, 0)
+#define PF_FW_ATQLEN_ATQVFE_S 28
+#define PF_FW_ATQLEN_ATQVFE_M BIT(28)
+#define PF_FW_ATQLEN_ATQOVFL_S 29
+#define PF_FW_ATQLEN_ATQOVFL_M BIT(29)
+#define PF_FW_ATQLEN_ATQCRIT_S 30
+#define PF_FW_ATQLEN_ATQCRIT_M BIT(30)
+#define PF_FW_ATQLEN_ATQENABLE_S 31
+#define PF_FW_ATQLEN_ATQENABLE_M BIT(31)
+#define PF_FW_ATQT 0x00080400 /* Reset Source: EMPR */
+#define PF_FW_ATQT_ATQT_S 0
+#define PF_FW_ATQT_ATQT_M MAKEMASK(0x3FF, 0)
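+/*
+ * Usage sketch (hand-written): every control queue in this file (FW admin,
+ * mailbox, sideband) exposes the same BAL/BAH/LEN/H/T layout, so one
+ * pattern covers them all. Assuming rd32(), reading the PF admin receive
+ * queue length and enable flag:
+ *
+ *	u32 v = rd32(hw, PF_FW_ARQLEN);
+ *	u16 len = (v & PF_FW_ARQLEN_ARQLEN_M) >> PF_FW_ARQLEN_ARQLEN_S;
+ *	bool enabled = !!(v & PF_FW_ARQLEN_ARQENABLE_M);
+ */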
+#define PF_MBX_ARQBAH 0x0022E400 /* Reset Source: CORER */
+#define PF_MBX_ARQBAH_ARQBAH_S 0
+#define PF_MBX_ARQBAH_ARQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF_MBX_ARQBAL 0x0022E380 /* Reset Source: CORER */
+#define PF_MBX_ARQBAL_ARQBAL_LSB_S 0
+#define PF_MBX_ARQBAL_ARQBAL_LSB_M MAKEMASK(0x3F, 0)
+#define PF_MBX_ARQBAL_ARQBAL_S 6
+#define PF_MBX_ARQBAL_ARQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF_MBX_ARQH 0x0022E500 /* Reset Source: CORER */
+#define PF_MBX_ARQH_ARQH_S 0
+#define PF_MBX_ARQH_ARQH_M MAKEMASK(0x3FF, 0)
+#define PF_MBX_ARQLEN 0x0022E480 /* Reset Source: CORER */
+#define PF_MBX_ARQLEN_ARQLEN_S 0
+#define PF_MBX_ARQLEN_ARQLEN_M MAKEMASK(0x3FF, 0)
+#define PF_MBX_ARQLEN_ARQVFE_S 28
+#define PF_MBX_ARQLEN_ARQVFE_M BIT(28)
+#define PF_MBX_ARQLEN_ARQOVFL_S 29
+#define PF_MBX_ARQLEN_ARQOVFL_M BIT(29)
+#define PF_MBX_ARQLEN_ARQCRIT_S 30
+#define PF_MBX_ARQLEN_ARQCRIT_M BIT(30)
+#define PF_MBX_ARQLEN_ARQENABLE_S 31
+#define PF_MBX_ARQLEN_ARQENABLE_M BIT(31)
+#define PF_MBX_ARQT 0x0022E580 /* Reset Source: CORER */
+#define PF_MBX_ARQT_ARQT_S 0
+#define PF_MBX_ARQT_ARQT_M MAKEMASK(0x3FF, 0)
+#define PF_MBX_ATQBAH 0x0022E180 /* Reset Source: CORER */
+#define PF_MBX_ATQBAH_ATQBAH_S 0
+#define PF_MBX_ATQBAH_ATQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF_MBX_ATQBAL 0x0022E100 /* Reset Source: CORER */
+#define PF_MBX_ATQBAL_ATQBAL_S 6
+#define PF_MBX_ATQBAL_ATQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF_MBX_ATQH 0x0022E280 /* Reset Source: CORER */
+#define PF_MBX_ATQH_ATQH_S 0
+#define PF_MBX_ATQH_ATQH_M MAKEMASK(0x3FF, 0)
+#define PF_MBX_ATQLEN 0x0022E200 /* Reset Source: CORER */
+#define PF_MBX_ATQLEN_ATQLEN_S 0
+#define PF_MBX_ATQLEN_ATQLEN_M MAKEMASK(0x3FF, 0)
+#define PF_MBX_ATQLEN_ATQVFE_S 28
+#define PF_MBX_ATQLEN_ATQVFE_M BIT(28)
+#define PF_MBX_ATQLEN_ATQOVFL_S 29
+#define PF_MBX_ATQLEN_ATQOVFL_M BIT(29)
+#define PF_MBX_ATQLEN_ATQCRIT_S 30
+#define PF_MBX_ATQLEN_ATQCRIT_M BIT(30)
+#define PF_MBX_ATQLEN_ATQENABLE_S 31
+#define PF_MBX_ATQLEN_ATQENABLE_M BIT(31)
+#define PF_MBX_ATQT 0x0022E300 /* Reset Source: CORER */
+#define PF_MBX_ATQT_ATQT_S 0
+#define PF_MBX_ATQT_ATQT_M MAKEMASK(0x3FF, 0)
+#define PF_SB_ARQBAH 0x0022FF00 /* Reset Source: CORER */
+#define PF_SB_ARQBAH_ARQBAH_S 0
+#define PF_SB_ARQBAH_ARQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF_SB_ARQBAL 0x0022FE80 /* Reset Source: CORER */
+#define PF_SB_ARQBAL_ARQBAL_LSB_S 0
+#define PF_SB_ARQBAL_ARQBAL_LSB_M MAKEMASK(0x3F, 0)
+#define PF_SB_ARQBAL_ARQBAL_S 6
+#define PF_SB_ARQBAL_ARQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF_SB_ARQH 0x00230000 /* Reset Source: CORER */
+#define PF_SB_ARQH_ARQH_S 0
+#define PF_SB_ARQH_ARQH_M MAKEMASK(0x3FF, 0)
+#define PF_SB_ARQLEN 0x0022FF80 /* Reset Source: CORER */
+#define PF_SB_ARQLEN_ARQLEN_S 0
+#define PF_SB_ARQLEN_ARQLEN_M MAKEMASK(0x3FF, 0)
+#define PF_SB_ARQLEN_ARQVFE_S 28
+#define PF_SB_ARQLEN_ARQVFE_M BIT(28)
+#define PF_SB_ARQLEN_ARQOVFL_S 29
+#define PF_SB_ARQLEN_ARQOVFL_M BIT(29)
+#define PF_SB_ARQLEN_ARQCRIT_S 30
+#define PF_SB_ARQLEN_ARQCRIT_M BIT(30)
+#define PF_SB_ARQLEN_ARQENABLE_S 31
+#define PF_SB_ARQLEN_ARQENABLE_M BIT(31)
+#define PF_SB_ARQT 0x00230080 /* Reset Source: CORER */
+#define PF_SB_ARQT_ARQT_S 0
+#define PF_SB_ARQT_ARQT_M MAKEMASK(0x3FF, 0)
+#define PF_SB_ATQBAH 0x0022FC80 /* Reset Source: CORER */
+#define PF_SB_ATQBAH_ATQBAH_S 0
+#define PF_SB_ATQBAH_ATQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF_SB_ATQBAL 0x0022FC00 /* Reset Source: CORER */
+#define PF_SB_ATQBAL_ATQBAL_S 6
+#define PF_SB_ATQBAL_ATQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF_SB_ATQH 0x0022FD80 /* Reset Source: CORER */
+#define PF_SB_ATQH_ATQH_S 0
+#define PF_SB_ATQH_ATQH_M MAKEMASK(0x3FF, 0)
+#define PF_SB_ATQLEN 0x0022FD00 /* Reset Source: CORER */
+#define PF_SB_ATQLEN_ATQLEN_S 0
+#define PF_SB_ATQLEN_ATQLEN_M MAKEMASK(0x3FF, 0)
+#define PF_SB_ATQLEN_ATQVFE_S 28
+#define PF_SB_ATQLEN_ATQVFE_M BIT(28)
+#define PF_SB_ATQLEN_ATQOVFL_S 29
+#define PF_SB_ATQLEN_ATQOVFL_M BIT(29)
+#define PF_SB_ATQLEN_ATQCRIT_S 30
+#define PF_SB_ATQLEN_ATQCRIT_M BIT(30)
+#define PF_SB_ATQLEN_ATQENABLE_S 31
+#define PF_SB_ATQLEN_ATQENABLE_M BIT(31)
+#define PF_SB_ATQT 0x0022FE00 /* Reset Source: CORER */
+#define PF_SB_ATQT_ATQT_S 0
+#define PF_SB_ATQT_ATQT_M MAKEMASK(0x3FF, 0)
+#define PF_SB_REM_DEV_CTL 0x002300F0 /* Reset Source: CORER */
+#define PF_SB_REM_DEV_CTL_DEST_EN_S 0
+#define PF_SB_REM_DEV_CTL_DEST_EN_M MAKEMASK(0xFFFF, 0)
+#define PF0_FW_HLP_ARQBAH 0x000801C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQBAH_ARQBAH_S 0
+#define PF0_FW_HLP_ARQBAH_ARQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_HLP_ARQBAL 0x000800C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQBAL_ARQBAL_LSB_S 0
+#define PF0_FW_HLP_ARQBAL_ARQBAL_LSB_M MAKEMASK(0x3F, 0)
+#define PF0_FW_HLP_ARQBAL_ARQBAL_S 6
+#define PF0_FW_HLP_ARQBAL_ARQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_HLP_ARQH 0x000803C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQH_ARQH_S 0
+#define PF0_FW_HLP_ARQH_ARQH_M MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ARQLEN 0x000802C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQLEN_ARQLEN_S 0
+#define PF0_FW_HLP_ARQLEN_ARQLEN_M MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ARQLEN_ARQVFE_S 28
+#define PF0_FW_HLP_ARQLEN_ARQVFE_M BIT(28)
+#define PF0_FW_HLP_ARQLEN_ARQOVFL_S 29
+#define PF0_FW_HLP_ARQLEN_ARQOVFL_M BIT(29)
+#define PF0_FW_HLP_ARQLEN_ARQCRIT_S 30
+#define PF0_FW_HLP_ARQLEN_ARQCRIT_M BIT(30)
+#define PF0_FW_HLP_ARQLEN_ARQENABLE_S 31
+#define PF0_FW_HLP_ARQLEN_ARQENABLE_M BIT(31)
+#define PF0_FW_HLP_ARQT 0x000804C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQT_ARQT_S 0
+#define PF0_FW_HLP_ARQT_ARQT_M MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQBAH 0x00080148 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQBAH_ATQBAH_S 0
+#define PF0_FW_HLP_ATQBAH_ATQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_HLP_ATQBAL 0x00080048 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQBAL_ATQBAL_LSB_S 0
+#define PF0_FW_HLP_ATQBAL_ATQBAL_LSB_M MAKEMASK(0x3F, 0)
+#define PF0_FW_HLP_ATQBAL_ATQBAL_S 6
+#define PF0_FW_HLP_ATQBAL_ATQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_HLP_ATQH 0x00080348 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQH_ATQH_S 0
+#define PF0_FW_HLP_ATQH_ATQH_M MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQLEN 0x00080248 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQLEN_ATQLEN_S 0
+#define PF0_FW_HLP_ATQLEN_ATQLEN_M MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQLEN_ATQVFE_S 28
+#define PF0_FW_HLP_ATQLEN_ATQVFE_M BIT(28)
+#define PF0_FW_HLP_ATQLEN_ATQOVFL_S 29
+#define PF0_FW_HLP_ATQLEN_ATQOVFL_M BIT(29)
+#define PF0_FW_HLP_ATQLEN_ATQCRIT_S 30
+#define PF0_FW_HLP_ATQLEN_ATQCRIT_M BIT(30)
+#define PF0_FW_HLP_ATQLEN_ATQENABLE_S 31
+#define PF0_FW_HLP_ATQLEN_ATQENABLE_M BIT(31)
+#define PF0_FW_HLP_ATQT 0x00080448 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQT_ATQT_S 0
+#define PF0_FW_HLP_ATQT_ATQT_M MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQBAH 0x000801C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQBAH_ARQBAH_S 0
+#define PF0_FW_PSM_ARQBAH_ARQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_PSM_ARQBAL 0x000800C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQBAL_ARQBAL_LSB_S 0
+#define PF0_FW_PSM_ARQBAL_ARQBAL_LSB_M MAKEMASK(0x3F, 0)
+#define PF0_FW_PSM_ARQBAL_ARQBAL_S 6
+#define PF0_FW_PSM_ARQBAL_ARQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_PSM_ARQH 0x000803C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQH_ARQH_S 0
+#define PF0_FW_PSM_ARQH_ARQH_M MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQLEN 0x000802C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQLEN_ARQLEN_S 0
+#define PF0_FW_PSM_ARQLEN_ARQLEN_M MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQLEN_ARQVFE_S 28
+#define PF0_FW_PSM_ARQLEN_ARQVFE_M BIT(28)
+#define PF0_FW_PSM_ARQLEN_ARQOVFL_S 29
+#define PF0_FW_PSM_ARQLEN_ARQOVFL_M BIT(29)
+#define PF0_FW_PSM_ARQLEN_ARQCRIT_S 30
+#define PF0_FW_PSM_ARQLEN_ARQCRIT_M BIT(30)
+#define PF0_FW_PSM_ARQLEN_ARQENABLE_S 31
+#define PF0_FW_PSM_ARQLEN_ARQENABLE_M BIT(31)
+#define PF0_FW_PSM_ARQT 0x000804C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQT_ARQT_S 0
+#define PF0_FW_PSM_ARQT_ARQT_M MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQBAH 0x00080144 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQBAH_ATQBAH_S 0
+#define PF0_FW_PSM_ATQBAH_ATQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_PSM_ATQBAL 0x00080044 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQBAL_ATQBAL_LSB_S 0
+#define PF0_FW_PSM_ATQBAL_ATQBAL_LSB_M MAKEMASK(0x3F, 0)
+#define PF0_FW_PSM_ATQBAL_ATQBAL_S 6
+#define PF0_FW_PSM_ATQBAL_ATQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_PSM_ATQH 0x00080344 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQH_ATQH_S 0
+#define PF0_FW_PSM_ATQH_ATQH_M MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQLEN 0x00080244 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQLEN_ATQLEN_S 0
+#define PF0_FW_PSM_ATQLEN_ATQLEN_M MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQLEN_ATQVFE_S 28
+#define PF0_FW_PSM_ATQLEN_ATQVFE_M BIT(28)
+#define PF0_FW_PSM_ATQLEN_ATQOVFL_S 29
+#define PF0_FW_PSM_ATQLEN_ATQOVFL_M BIT(29)
+#define PF0_FW_PSM_ATQLEN_ATQCRIT_S 30
+#define PF0_FW_PSM_ATQLEN_ATQCRIT_M BIT(30)
+#define PF0_FW_PSM_ATQLEN_ATQENABLE_S 31
+#define PF0_FW_PSM_ATQLEN_ATQENABLE_M BIT(31)
+#define PF0_FW_PSM_ATQT 0x00080444 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQT_ATQT_S 0
+#define PF0_FW_PSM_ATQT_ATQT_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQBAH 0x0022E5D8 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQBAH_ARQBAH_S 0
+#define PF0_MBX_CPM_ARQBAH_ARQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_CPM_ARQBAL 0x0022E5D4 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQBAL_ARQBAL_LSB_S 0
+#define PF0_MBX_CPM_ARQBAL_ARQBAL_LSB_M MAKEMASK(0x3F, 0)
+#define PF0_MBX_CPM_ARQBAL_ARQBAL_S 6
+#define PF0_MBX_CPM_ARQBAL_ARQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_CPM_ARQH 0x0022E5E0 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQH_ARQH_S 0
+#define PF0_MBX_CPM_ARQH_ARQH_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQLEN 0x0022E5DC /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQLEN_ARQLEN_S 0
+#define PF0_MBX_CPM_ARQLEN_ARQLEN_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQLEN_ARQVFE_S 28
+#define PF0_MBX_CPM_ARQLEN_ARQVFE_M BIT(28)
+#define PF0_MBX_CPM_ARQLEN_ARQOVFL_S 29
+#define PF0_MBX_CPM_ARQLEN_ARQOVFL_M BIT(29)
+#define PF0_MBX_CPM_ARQLEN_ARQCRIT_S 30
+#define PF0_MBX_CPM_ARQLEN_ARQCRIT_M BIT(30)
+#define PF0_MBX_CPM_ARQLEN_ARQENABLE_S 31
+#define PF0_MBX_CPM_ARQLEN_ARQENABLE_M BIT(31)
+#define PF0_MBX_CPM_ARQT 0x0022E5E4 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQT_ARQT_S 0
+#define PF0_MBX_CPM_ARQT_ARQT_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQBAH 0x0022E5C4 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQBAH_ATQBAH_S 0
+#define PF0_MBX_CPM_ATQBAH_ATQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_CPM_ATQBAL 0x0022E5C0 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQBAL_ATQBAL_S 6
+#define PF0_MBX_CPM_ATQBAL_ATQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_CPM_ATQH 0x0022E5CC /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQH_ATQH_S 0
+#define PF0_MBX_CPM_ATQH_ATQH_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQLEN 0x0022E5C8 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQLEN_ATQLEN_S 0
+#define PF0_MBX_CPM_ATQLEN_ATQLEN_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQLEN_ATQVFE_S 28
+#define PF0_MBX_CPM_ATQLEN_ATQVFE_M BIT(28)
+#define PF0_MBX_CPM_ATQLEN_ATQOVFL_S 29
+#define PF0_MBX_CPM_ATQLEN_ATQOVFL_M BIT(29)
+#define PF0_MBX_CPM_ATQLEN_ATQCRIT_S 30
+#define PF0_MBX_CPM_ATQLEN_ATQCRIT_M BIT(30)
+#define PF0_MBX_CPM_ATQLEN_ATQENABLE_S 31
+#define PF0_MBX_CPM_ATQLEN_ATQENABLE_M BIT(31)
+#define PF0_MBX_CPM_ATQT 0x0022E5D0 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQT_ATQT_S 0
+#define PF0_MBX_CPM_ATQT_ATQT_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQBAH 0x0022E600 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQBAH_ARQBAH_S 0
+#define PF0_MBX_HLP_ARQBAH_ARQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_HLP_ARQBAL 0x0022E5FC /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQBAL_ARQBAL_LSB_S 0
+#define PF0_MBX_HLP_ARQBAL_ARQBAL_LSB_M MAKEMASK(0x3F, 0)
+#define PF0_MBX_HLP_ARQBAL_ARQBAL_S 6
+#define PF0_MBX_HLP_ARQBAL_ARQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_HLP_ARQH 0x0022E608 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQH_ARQH_S 0
+#define PF0_MBX_HLP_ARQH_ARQH_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQLEN 0x0022E604 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQLEN_ARQLEN_S 0
+#define PF0_MBX_HLP_ARQLEN_ARQLEN_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQLEN_ARQVFE_S 28
+#define PF0_MBX_HLP_ARQLEN_ARQVFE_M BIT(28)
+#define PF0_MBX_HLP_ARQLEN_ARQOVFL_S 29
+#define PF0_MBX_HLP_ARQLEN_ARQOVFL_M BIT(29)
+#define PF0_MBX_HLP_ARQLEN_ARQCRIT_S 30
+#define PF0_MBX_HLP_ARQLEN_ARQCRIT_M BIT(30)
+#define PF0_MBX_HLP_ARQLEN_ARQENABLE_S 31
+#define PF0_MBX_HLP_ARQLEN_ARQENABLE_M BIT(31)
+#define PF0_MBX_HLP_ARQT 0x0022E60C /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQT_ARQT_S 0
+#define PF0_MBX_HLP_ARQT_ARQT_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQBAH 0x0022E5EC /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQBAH_ATQBAH_S 0
+#define PF0_MBX_HLP_ATQBAH_ATQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_HLP_ATQBAL 0x0022E5E8 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQBAL_ATQBAL_S 6
+#define PF0_MBX_HLP_ATQBAL_ATQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_HLP_ATQH 0x0022E5F4 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQH_ATQH_S 0
+#define PF0_MBX_HLP_ATQH_ATQH_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQLEN 0x0022E5F0 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQLEN_ATQLEN_S 0
+#define PF0_MBX_HLP_ATQLEN_ATQLEN_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQLEN_ATQVFE_S 28
+#define PF0_MBX_HLP_ATQLEN_ATQVFE_M BIT(28)
+#define PF0_MBX_HLP_ATQLEN_ATQOVFL_S 29
+#define PF0_MBX_HLP_ATQLEN_ATQOVFL_M BIT(29)
+#define PF0_MBX_HLP_ATQLEN_ATQCRIT_S 30
+#define PF0_MBX_HLP_ATQLEN_ATQCRIT_M BIT(30)
+#define PF0_MBX_HLP_ATQLEN_ATQENABLE_S 31
+#define PF0_MBX_HLP_ATQLEN_ATQENABLE_M BIT(31)
+#define PF0_MBX_HLP_ATQT 0x0022E5F8 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQT_ATQT_S 0
+#define PF0_MBX_HLP_ATQT_ATQT_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQBAH 0x0022E628 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQBAH_ARQBAH_S 0
+#define PF0_MBX_PSM_ARQBAH_ARQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_PSM_ARQBAL 0x0022E624 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQBAL_ARQBAL_LSB_S 0
+#define PF0_MBX_PSM_ARQBAL_ARQBAL_LSB_M MAKEMASK(0x3F, 0)
+#define PF0_MBX_PSM_ARQBAL_ARQBAL_S 6
+#define PF0_MBX_PSM_ARQBAL_ARQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_PSM_ARQH 0x0022E630 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQH_ARQH_S 0
+#define PF0_MBX_PSM_ARQH_ARQH_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQLEN 0x0022E62C /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQLEN_ARQLEN_S 0
+#define PF0_MBX_PSM_ARQLEN_ARQLEN_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQLEN_ARQVFE_S 28
+#define PF0_MBX_PSM_ARQLEN_ARQVFE_M BIT(28)
+#define PF0_MBX_PSM_ARQLEN_ARQOVFL_S 29
+#define PF0_MBX_PSM_ARQLEN_ARQOVFL_M BIT(29)
+#define PF0_MBX_PSM_ARQLEN_ARQCRIT_S 30
+#define PF0_MBX_PSM_ARQLEN_ARQCRIT_M BIT(30)
+#define PF0_MBX_PSM_ARQLEN_ARQENABLE_S 31
+#define PF0_MBX_PSM_ARQLEN_ARQENABLE_M BIT(31)
+#define PF0_MBX_PSM_ARQT 0x0022E634 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQT_ARQT_S 0
+#define PF0_MBX_PSM_ARQT_ARQT_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQBAH 0x0022E614 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQBAH_ATQBAH_S 0
+#define PF0_MBX_PSM_ATQBAH_ATQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_PSM_ATQBAL 0x0022E610 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQBAL_ATQBAL_S 6
+#define PF0_MBX_PSM_ATQBAL_ATQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_PSM_ATQH 0x0022E61C /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQH_ATQH_S 0
+#define PF0_MBX_PSM_ATQH_ATQH_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQLEN 0x0022E618 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQLEN_ATQLEN_S 0
+#define PF0_MBX_PSM_ATQLEN_ATQLEN_M MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQLEN_ATQVFE_S 28
+#define PF0_MBX_PSM_ATQLEN_ATQVFE_M BIT(28)
+#define PF0_MBX_PSM_ATQLEN_ATQOVFL_S 29
+#define PF0_MBX_PSM_ATQLEN_ATQOVFL_M BIT(29)
+#define PF0_MBX_PSM_ATQLEN_ATQCRIT_S 30
+#define PF0_MBX_PSM_ATQLEN_ATQCRIT_M BIT(30)
+#define PF0_MBX_PSM_ATQLEN_ATQENABLE_S 31
+#define PF0_MBX_PSM_ATQLEN_ATQENABLE_M BIT(31)
+#define PF0_MBX_PSM_ATQT 0x0022E620 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQT_ATQT_S 0
+#define PF0_MBX_PSM_ATQT_ATQT_M MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQBAH 0x0022E650 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQBAH_ARQBAH_S 0
+#define PF0_SB_CPM_ARQBAH_ARQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_CPM_ARQBAL 0x0022E64C /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQBAL_ARQBAL_LSB_S 0
+#define PF0_SB_CPM_ARQBAL_ARQBAL_LSB_M MAKEMASK(0x3F, 0)
+#define PF0_SB_CPM_ARQBAL_ARQBAL_S 6
+#define PF0_SB_CPM_ARQBAL_ARQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_CPM_ARQH 0x0022E658 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQH_ARQH_S 0
+#define PF0_SB_CPM_ARQH_ARQH_M MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQLEN 0x0022E654 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQLEN_ARQLEN_S 0
+#define PF0_SB_CPM_ARQLEN_ARQLEN_M MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQLEN_ARQVFE_S 28
+#define PF0_SB_CPM_ARQLEN_ARQVFE_M BIT(28)
+#define PF0_SB_CPM_ARQLEN_ARQOVFL_S 29
+#define PF0_SB_CPM_ARQLEN_ARQOVFL_M BIT(29)
+#define PF0_SB_CPM_ARQLEN_ARQCRIT_S 30
+#define PF0_SB_CPM_ARQLEN_ARQCRIT_M BIT(30)
+#define PF0_SB_CPM_ARQLEN_ARQENABLE_S 31
+#define PF0_SB_CPM_ARQLEN_ARQENABLE_M BIT(31)
+#define PF0_SB_CPM_ARQT 0x0022E65C /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQT_ARQT_S 0
+#define PF0_SB_CPM_ARQT_ARQT_M MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQBAH 0x0022E63C /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQBAH_ATQBAH_S 0
+#define PF0_SB_CPM_ATQBAH_ATQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_CPM_ATQBAL 0x0022E638 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQBAL_ATQBAL_S 6
+#define PF0_SB_CPM_ATQBAL_ATQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_CPM_ATQH 0x0022E644 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQH_ATQH_S 0
+#define PF0_SB_CPM_ATQH_ATQH_M MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQLEN 0x0022E640 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQLEN_ATQLEN_S 0
+#define PF0_SB_CPM_ATQLEN_ATQLEN_M MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQLEN_ATQVFE_S 28
+#define PF0_SB_CPM_ATQLEN_ATQVFE_M BIT(28)
+#define PF0_SB_CPM_ATQLEN_ATQOVFL_S 29
+#define PF0_SB_CPM_ATQLEN_ATQOVFL_M BIT(29)
+#define PF0_SB_CPM_ATQLEN_ATQCRIT_S 30
+#define PF0_SB_CPM_ATQLEN_ATQCRIT_M BIT(30)
+#define PF0_SB_CPM_ATQLEN_ATQENABLE_S 31
+#define PF0_SB_CPM_ATQLEN_ATQENABLE_M BIT(31)
+#define PF0_SB_CPM_ATQT 0x0022E648 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQT_ATQT_S 0
+#define PF0_SB_CPM_ATQT_ATQT_M MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_REM_DEV_CTL 0x002300F4 /* Reset Source: CORER */
+#define PF0_SB_CPM_REM_DEV_CTL_DEST_EN_S 0
+#define PF0_SB_CPM_REM_DEV_CTL_DEST_EN_M MAKEMASK(0xFFFF, 0)
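+/* Illustrative sketch, not part of the autogenerated register list: a 64-bit,
+ * 64-byte-aligned ring base address is split across the BAL/BAH pair, and the
+ * low register only carries bits 6..31 of the address (hence the _S of 6).
+ * wr32() and the dma address variable are assumed here.
+ *
+ *	wr32(hw, PF0_SB_CPM_ATQBAL, (u32)dma & PF0_SB_CPM_ATQBAL_ATQBAL_M);
+ *	wr32(hw, PF0_SB_CPM_ATQBAH, (u32)(dma >> 32));
+ */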
+#define PF0_SB_HLP_ARQBAH 0x002300D8 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQBAH_ARQBAH_S 0
+#define PF0_SB_HLP_ARQBAH_ARQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_HLP_ARQBAL 0x002300D4 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQBAL_ARQBAL_LSB_S 0
+#define PF0_SB_HLP_ARQBAL_ARQBAL_LSB_M MAKEMASK(0x3F, 0)
+#define PF0_SB_HLP_ARQBAL_ARQBAL_S 6
+#define PF0_SB_HLP_ARQBAL_ARQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_HLP_ARQH 0x002300E0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQH_ARQH_S 0
+#define PF0_SB_HLP_ARQH_ARQH_M MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQLEN 0x002300DC /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQLEN_ARQLEN_S 0
+#define PF0_SB_HLP_ARQLEN_ARQLEN_M MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQLEN_ARQVFE_S 28
+#define PF0_SB_HLP_ARQLEN_ARQVFE_M BIT(28)
+#define PF0_SB_HLP_ARQLEN_ARQOVFL_S 29
+#define PF0_SB_HLP_ARQLEN_ARQOVFL_M BIT(29)
+#define PF0_SB_HLP_ARQLEN_ARQCRIT_S 30
+#define PF0_SB_HLP_ARQLEN_ARQCRIT_M BIT(30)
+#define PF0_SB_HLP_ARQLEN_ARQENABLE_S 31
+#define PF0_SB_HLP_ARQLEN_ARQENABLE_M BIT(31)
+#define PF0_SB_HLP_ARQT 0x002300E4 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQT_ARQT_S 0
+#define PF0_SB_HLP_ARQT_ARQT_M MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQBAH 0x002300C4 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQBAH_ATQBAH_S 0
+#define PF0_SB_HLP_ATQBAH_ATQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_HLP_ATQBAL 0x002300C0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQBAL_ATQBAL_S 6
+#define PF0_SB_HLP_ATQBAL_ATQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_HLP_ATQH 0x002300CC /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQH_ATQH_S 0
+#define PF0_SB_HLP_ATQH_ATQH_M MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQLEN 0x002300C8 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQLEN_ATQLEN_S 0
+#define PF0_SB_HLP_ATQLEN_ATQLEN_M MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQLEN_ATQVFE_S 28
+#define PF0_SB_HLP_ATQLEN_ATQVFE_M BIT(28)
+#define PF0_SB_HLP_ATQLEN_ATQOVFL_S 29
+#define PF0_SB_HLP_ATQLEN_ATQOVFL_M BIT(29)
+#define PF0_SB_HLP_ATQLEN_ATQCRIT_S 30
+#define PF0_SB_HLP_ATQLEN_ATQCRIT_M BIT(30)
+#define PF0_SB_HLP_ATQLEN_ATQENABLE_S 31
+#define PF0_SB_HLP_ATQLEN_ATQENABLE_M BIT(31)
+#define PF0_SB_HLP_ATQT 0x002300D0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQT_ATQT_S 0
+#define PF0_SB_HLP_ATQT_ATQT_M MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_REM_DEV_CTL 0x002300E8 /* Reset Source: CORER */
+#define PF0_SB_HLP_REM_DEV_CTL_DEST_EN_S 0
+#define PF0_SB_HLP_REM_DEV_CTL_DEST_EN_M MAKEMASK(0xFFFF, 0)
+#define SB_REM_DEV_DEST(_i) (0x002300F8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define SB_REM_DEV_DEST_MAX_INDEX 7
+#define SB_REM_DEV_DEST_DEST_S 0
+#define SB_REM_DEV_DEST_DEST_M MAKEMASK(0xF, 0)
+#define SB_REM_DEV_DEST_DEST_VALID_S 31
+#define SB_REM_DEV_DEST_DEST_VALID_M BIT(31)
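+/* Illustrative sketch, not part of the autogenerated register list: indexed
+ * registers expand to a base address plus a 4-byte stride, and the matching
+ * _MAX_INDEX define gives the last valid index, e.g. clearing the table:
+ *
+ *	for (i = 0; i <= SB_REM_DEV_DEST_MAX_INDEX; i++)
+ *		wr32(hw, SB_REM_DEV_DEST(i), 0);
+ */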
+#define VF_MBX_ARQBAH(_VF) (0x0022B800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQBAH_MAX_INDEX 255
+#define VF_MBX_ARQBAH_ARQBAH_S 0
+#define VF_MBX_ARQBAH_ARQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_ARQBAL(_VF) (0x0022B400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQBAL_MAX_INDEX 255
+#define VF_MBX_ARQBAL_ARQBAL_LSB_S 0
+#define VF_MBX_ARQBAL_ARQBAL_LSB_M MAKEMASK(0x3F, 0)
+#define VF_MBX_ARQBAL_ARQBAL_S 6
+#define VF_MBX_ARQBAL_ARQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_ARQH(_VF) (0x0022C000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQH_MAX_INDEX 255
+#define VF_MBX_ARQH_ARQH_S 0
+#define VF_MBX_ARQH_ARQH_M MAKEMASK(0x3FF, 0)
+#define VF_MBX_ARQLEN(_VF) (0x0022BC00 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQLEN_MAX_INDEX 255
+#define VF_MBX_ARQLEN_ARQLEN_S 0
+#define VF_MBX_ARQLEN_ARQLEN_M MAKEMASK(0x3FF, 0)
+#define VF_MBX_ARQLEN_ARQVFE_S 28
+#define VF_MBX_ARQLEN_ARQVFE_M BIT(28)
+#define VF_MBX_ARQLEN_ARQOVFL_S 29
+#define VF_MBX_ARQLEN_ARQOVFL_M BIT(29)
+#define VF_MBX_ARQLEN_ARQCRIT_S 30
+#define VF_MBX_ARQLEN_ARQCRIT_M BIT(30)
+#define VF_MBX_ARQLEN_ARQENABLE_S 31
+#define VF_MBX_ARQLEN_ARQENABLE_M BIT(31)
+#define VF_MBX_ARQT(_VF) (0x0022C400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQT_MAX_INDEX 255
+#define VF_MBX_ARQT_ARQT_S 0
+#define VF_MBX_ARQT_ARQT_M MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQBAH(_VF) (0x0022A400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQBAH_MAX_INDEX 255
+#define VF_MBX_ATQBAH_ATQBAH_S 0
+#define VF_MBX_ATQBAH_ATQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_ATQBAL(_VF) (0x0022A000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQBAL_MAX_INDEX 255
+#define VF_MBX_ATQBAL_ATQBAL_S 6
+#define VF_MBX_ATQBAL_ATQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_ATQH(_VF) (0x0022AC00 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQH_MAX_INDEX 255
+#define VF_MBX_ATQH_ATQH_S 0
+#define VF_MBX_ATQH_ATQH_M MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQLEN(_VF) (0x0022A800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQLEN_MAX_INDEX 255
+#define VF_MBX_ATQLEN_ATQLEN_S 0
+#define VF_MBX_ATQLEN_ATQLEN_M MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQLEN_ATQVFE_S 28
+#define VF_MBX_ATQLEN_ATQVFE_M BIT(28)
+#define VF_MBX_ATQLEN_ATQOVFL_S 29
+#define VF_MBX_ATQLEN_ATQOVFL_M BIT(29)
+#define VF_MBX_ATQLEN_ATQCRIT_S 30
+#define VF_MBX_ATQLEN_ATQCRIT_M BIT(30)
+#define VF_MBX_ATQLEN_ATQENABLE_S 31
+#define VF_MBX_ATQLEN_ATQENABLE_M BIT(31)
+#define VF_MBX_ATQT(_VF) (0x0022B000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQT_MAX_INDEX 255
+#define VF_MBX_ATQT_ATQT_S 0
+#define VF_MBX_ATQT_ATQT_M MAKEMASK(0x3FF, 0)
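+/* Illustrative sketch, not part of the autogenerated register list: arming a
+ * VF's mailbox receive queue by writing its ring length with the enable bit
+ * set. vf_id must not exceed VF_MBX_ARQLEN_MAX_INDEX; wr32() and num_desc
+ * are assumed here.
+ *
+ *	wr32(hw, VF_MBX_ARQLEN(vf_id),
+ *	     ((num_desc << VF_MBX_ARQLEN_ARQLEN_S) & VF_MBX_ARQLEN_ARQLEN_M) |
+ *	     VF_MBX_ARQLEN_ARQENABLE_M);
+ */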
+#define VF_MBX_CPM_ARQBAH(_VF128) (0x0022D400 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQBAH_MAX_INDEX 127
+#define VF_MBX_CPM_ARQBAH_ARQBAH_S 0
+#define VF_MBX_CPM_ARQBAH_ARQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_CPM_ARQBAL(_VF128) (0x0022D200 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQBAL_MAX_INDEX 127
+#define VF_MBX_CPM_ARQBAL_ARQBAL_LSB_S 0
+#define VF_MBX_CPM_ARQBAL_ARQBAL_LSB_M MAKEMASK(0x3F, 0)
+#define VF_MBX_CPM_ARQBAL_ARQBAL_S 6
+#define VF_MBX_CPM_ARQBAL_ARQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_CPM_ARQH(_VF128) (0x0022D800 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQH_MAX_INDEX 127
+#define VF_MBX_CPM_ARQH_ARQH_S 0
+#define VF_MBX_CPM_ARQH_ARQH_M MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ARQLEN(_VF128) (0x0022D600 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQLEN_MAX_INDEX 127
+#define VF_MBX_CPM_ARQLEN_ARQLEN_S 0
+#define VF_MBX_CPM_ARQLEN_ARQLEN_M MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ARQLEN_ARQVFE_S 28
+#define VF_MBX_CPM_ARQLEN_ARQVFE_M BIT(28)
+#define VF_MBX_CPM_ARQLEN_ARQOVFL_S 29
+#define VF_MBX_CPM_ARQLEN_ARQOVFL_M BIT(29)
+#define VF_MBX_CPM_ARQLEN_ARQCRIT_S 30
+#define VF_MBX_CPM_ARQLEN_ARQCRIT_M BIT(30)
+#define VF_MBX_CPM_ARQLEN_ARQENABLE_S 31
+#define VF_MBX_CPM_ARQLEN_ARQENABLE_M BIT(31)
+#define VF_MBX_CPM_ARQT(_VF128) (0x0022DA00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQT_MAX_INDEX 127
+#define VF_MBX_CPM_ARQT_ARQT_S 0
+#define VF_MBX_CPM_ARQT_ARQT_M MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQBAH(_VF128) (0x0022CA00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQBAH_MAX_INDEX 127
+#define VF_MBX_CPM_ATQBAH_ATQBAH_S 0
+#define VF_MBX_CPM_ATQBAH_ATQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_CPM_ATQBAL(_VF128) (0x0022C800 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQBAL_MAX_INDEX 127
+#define VF_MBX_CPM_ATQBAL_ATQBAL_S 6
+#define VF_MBX_CPM_ATQBAL_ATQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_CPM_ATQH(_VF128) (0x0022CE00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQH_MAX_INDEX 127
+#define VF_MBX_CPM_ATQH_ATQH_S 0
+#define VF_MBX_CPM_ATQH_ATQH_M MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQLEN(_VF128) (0x0022CC00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQLEN_MAX_INDEX 127
+#define VF_MBX_CPM_ATQLEN_ATQLEN_S 0
+#define VF_MBX_CPM_ATQLEN_ATQLEN_M MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQLEN_ATQVFE_S 28
+#define VF_MBX_CPM_ATQLEN_ATQVFE_M BIT(28)
+#define VF_MBX_CPM_ATQLEN_ATQOVFL_S 29
+#define VF_MBX_CPM_ATQLEN_ATQOVFL_M BIT(29)
+#define VF_MBX_CPM_ATQLEN_ATQCRIT_S 30
+#define VF_MBX_CPM_ATQLEN_ATQCRIT_M BIT(30)
+#define VF_MBX_CPM_ATQLEN_ATQENABLE_S 31
+#define VF_MBX_CPM_ATQLEN_ATQENABLE_M BIT(31)
+#define VF_MBX_CPM_ATQT(_VF128) (0x0022D000 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQT_MAX_INDEX 127
+#define VF_MBX_CPM_ATQT_ATQT_S 0
+#define VF_MBX_CPM_ATQT_ATQT_M MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQBAH(_VF16) (0x0022DD80 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQBAH_MAX_INDEX 15
+#define VF_MBX_HLP_ARQBAH_ARQBAH_S 0
+#define VF_MBX_HLP_ARQBAH_ARQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_HLP_ARQBAL(_VF16) (0x0022DD40 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQBAL_MAX_INDEX 15
+#define VF_MBX_HLP_ARQBAL_ARQBAL_LSB_S 0
+#define VF_MBX_HLP_ARQBAL_ARQBAL_LSB_M MAKEMASK(0x3F, 0)
+#define VF_MBX_HLP_ARQBAL_ARQBAL_S 6
+#define VF_MBX_HLP_ARQBAL_ARQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_HLP_ARQH(_VF16) (0x0022DE00 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQH_MAX_INDEX 15
+#define VF_MBX_HLP_ARQH_ARQH_S 0
+#define VF_MBX_HLP_ARQH_ARQH_M MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQLEN(_VF16) (0x0022DDC0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQLEN_MAX_INDEX 15
+#define VF_MBX_HLP_ARQLEN_ARQLEN_S 0
+#define VF_MBX_HLP_ARQLEN_ARQLEN_M MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQLEN_ARQVFE_S 28
+#define VF_MBX_HLP_ARQLEN_ARQVFE_M BIT(28)
+#define VF_MBX_HLP_ARQLEN_ARQOVFL_S 29
+#define VF_MBX_HLP_ARQLEN_ARQOVFL_M BIT(29)
+#define VF_MBX_HLP_ARQLEN_ARQCRIT_S 30
+#define VF_MBX_HLP_ARQLEN_ARQCRIT_M BIT(30)
+#define VF_MBX_HLP_ARQLEN_ARQENABLE_S 31
+#define VF_MBX_HLP_ARQLEN_ARQENABLE_M BIT(31)
+#define VF_MBX_HLP_ARQT(_VF16) (0x0022DE40 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQT_MAX_INDEX 15
+#define VF_MBX_HLP_ARQT_ARQT_S 0
+#define VF_MBX_HLP_ARQT_ARQT_M MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQBAH(_VF16) (0x0022DC40 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQBAH_MAX_INDEX 15
+#define VF_MBX_HLP_ATQBAH_ATQBAH_S 0
+#define VF_MBX_HLP_ATQBAH_ATQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_HLP_ATQBAL(_VF16) (0x0022DC00 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQBAL_MAX_INDEX 15
+#define VF_MBX_HLP_ATQBAL_ATQBAL_S 6
+#define VF_MBX_HLP_ATQBAL_ATQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_HLP_ATQH(_VF16) (0x0022DCC0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQH_MAX_INDEX 15
+#define VF_MBX_HLP_ATQH_ATQH_S 0
+#define VF_MBX_HLP_ATQH_ATQH_M MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQLEN(_VF16) (0x0022DC80 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQLEN_MAX_INDEX 15
+#define VF_MBX_HLP_ATQLEN_ATQLEN_S 0
+#define VF_MBX_HLP_ATQLEN_ATQLEN_M MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQLEN_ATQVFE_S 28
+#define VF_MBX_HLP_ATQLEN_ATQVFE_M BIT(28)
+#define VF_MBX_HLP_ATQLEN_ATQOVFL_S 29
+#define VF_MBX_HLP_ATQLEN_ATQOVFL_M BIT(29)
+#define VF_MBX_HLP_ATQLEN_ATQCRIT_S 30
+#define VF_MBX_HLP_ATQLEN_ATQCRIT_M BIT(30)
+#define VF_MBX_HLP_ATQLEN_ATQENABLE_S 31
+#define VF_MBX_HLP_ATQLEN_ATQENABLE_M BIT(31)
+#define VF_MBX_HLP_ATQT(_VF16) (0x0022DD00 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQT_MAX_INDEX 15
+#define VF_MBX_HLP_ATQT_ATQT_S 0
+#define VF_MBX_HLP_ATQT_ATQT_M MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQBAH(_VF16) (0x0022E000 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQBAH_MAX_INDEX 15
+#define VF_MBX_PSM_ARQBAH_ARQBAH_S 0
+#define VF_MBX_PSM_ARQBAH_ARQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_PSM_ARQBAL(_VF16) (0x0022DFC0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQBAL_MAX_INDEX 15
+#define VF_MBX_PSM_ARQBAL_ARQBAL_LSB_S 0
+#define VF_MBX_PSM_ARQBAL_ARQBAL_LSB_M MAKEMASK(0x3F, 0)
+#define VF_MBX_PSM_ARQBAL_ARQBAL_S 6
+#define VF_MBX_PSM_ARQBAL_ARQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_PSM_ARQH(_VF16) (0x0022E080 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQH_MAX_INDEX 15
+#define VF_MBX_PSM_ARQH_ARQH_S 0
+#define VF_MBX_PSM_ARQH_ARQH_M MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQLEN(_VF16) (0x0022E040 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQLEN_MAX_INDEX 15
+#define VF_MBX_PSM_ARQLEN_ARQLEN_S 0
+#define VF_MBX_PSM_ARQLEN_ARQLEN_M MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQLEN_ARQVFE_S 28
+#define VF_MBX_PSM_ARQLEN_ARQVFE_M BIT(28)
+#define VF_MBX_PSM_ARQLEN_ARQOVFL_S 29
+#define VF_MBX_PSM_ARQLEN_ARQOVFL_M BIT(29)
+#define VF_MBX_PSM_ARQLEN_ARQCRIT_S 30
+#define VF_MBX_PSM_ARQLEN_ARQCRIT_M BIT(30)
+#define VF_MBX_PSM_ARQLEN_ARQENABLE_S 31
+#define VF_MBX_PSM_ARQLEN_ARQENABLE_M BIT(31)
+#define VF_MBX_PSM_ARQT(_VF16) (0x0022E0C0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQT_MAX_INDEX 15
+#define VF_MBX_PSM_ARQT_ARQT_S 0
+#define VF_MBX_PSM_ARQT_ARQT_M MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQBAH(_VF16) (0x0022DEC0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQBAH_MAX_INDEX 15
+#define VF_MBX_PSM_ATQBAH_ATQBAH_S 0
+#define VF_MBX_PSM_ATQBAH_ATQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_PSM_ATQBAL(_VF16) (0x0022DE80 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQBAL_MAX_INDEX 15
+#define VF_MBX_PSM_ATQBAL_ATQBAL_S 6
+#define VF_MBX_PSM_ATQBAL_ATQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_PSM_ATQH(_VF16) (0x0022DF40 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQH_MAX_INDEX 15
+#define VF_MBX_PSM_ATQH_ATQH_S 0
+#define VF_MBX_PSM_ATQH_ATQH_M MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQLEN(_VF16) (0x0022DF00 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQLEN_MAX_INDEX 15
+#define VF_MBX_PSM_ATQLEN_ATQLEN_S 0
+#define VF_MBX_PSM_ATQLEN_ATQLEN_M MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQLEN_ATQVFE_S 28
+#define VF_MBX_PSM_ATQLEN_ATQVFE_M BIT(28)
+#define VF_MBX_PSM_ATQLEN_ATQOVFL_S 29
+#define VF_MBX_PSM_ATQLEN_ATQOVFL_M BIT(29)
+#define VF_MBX_PSM_ATQLEN_ATQCRIT_S 30
+#define VF_MBX_PSM_ATQLEN_ATQCRIT_M BIT(30)
+#define VF_MBX_PSM_ATQLEN_ATQENABLE_S 31
+#define VF_MBX_PSM_ATQLEN_ATQENABLE_M BIT(31)
+#define VF_MBX_PSM_ATQT(_VF16) (0x0022DF80 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQT_MAX_INDEX 15
+#define VF_MBX_PSM_ATQT_ATQT_S 0
+#define VF_MBX_PSM_ATQT_ATQT_M MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQBAH(_VF128) (0x0022F400 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQBAH_MAX_INDEX 127
+#define VF_SB_CPM_ARQBAH_ARQBAH_S 0
+#define VF_SB_CPM_ARQBAH_ARQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define VF_SB_CPM_ARQBAL(_VF128) (0x0022F200 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQBAL_MAX_INDEX 127
+#define VF_SB_CPM_ARQBAL_ARQBAL_LSB_S 0
+#define VF_SB_CPM_ARQBAL_ARQBAL_LSB_M MAKEMASK(0x3F, 0)
+#define VF_SB_CPM_ARQBAL_ARQBAL_S 6
+#define VF_SB_CPM_ARQBAL_ARQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define VF_SB_CPM_ARQH(_VF128) (0x0022F800 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQH_MAX_INDEX 127
+#define VF_SB_CPM_ARQH_ARQH_S 0
+#define VF_SB_CPM_ARQH_ARQH_M MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQLEN(_VF128) (0x0022F600 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQLEN_MAX_INDEX 127
+#define VF_SB_CPM_ARQLEN_ARQLEN_S 0
+#define VF_SB_CPM_ARQLEN_ARQLEN_M MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQLEN_ARQVFE_S 28
+#define VF_SB_CPM_ARQLEN_ARQVFE_M BIT(28)
+#define VF_SB_CPM_ARQLEN_ARQOVFL_S 29
+#define VF_SB_CPM_ARQLEN_ARQOVFL_M BIT(29)
+#define VF_SB_CPM_ARQLEN_ARQCRIT_S 30
+#define VF_SB_CPM_ARQLEN_ARQCRIT_M BIT(30)
+#define VF_SB_CPM_ARQLEN_ARQENABLE_S 31
+#define VF_SB_CPM_ARQLEN_ARQENABLE_M BIT(31)
+#define VF_SB_CPM_ARQT(_VF128) (0x0022FA00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQT_MAX_INDEX 127
+#define VF_SB_CPM_ARQT_ARQT_S 0
+#define VF_SB_CPM_ARQT_ARQT_M MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQBAH(_VF128) (0x0022EA00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQBAH_MAX_INDEX 127
+#define VF_SB_CPM_ATQBAH_ATQBAH_S 0
+#define VF_SB_CPM_ATQBAH_ATQBAH_M MAKEMASK(0xFFFFFFFF, 0)
+#define VF_SB_CPM_ATQBAL(_VF128) (0x0022E800 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQBAL_MAX_INDEX 127
+#define VF_SB_CPM_ATQBAL_ATQBAL_S 6
+#define VF_SB_CPM_ATQBAL_ATQBAL_M MAKEMASK(0x3FFFFFF, 6)
+#define VF_SB_CPM_ATQH(_VF128) (0x0022EE00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQH_MAX_INDEX 127
+#define VF_SB_CPM_ATQH_ATQH_S 0
+#define VF_SB_CPM_ATQH_ATQH_M MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQLEN(_VF128) (0x0022EC00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQLEN_MAX_INDEX 127
+#define VF_SB_CPM_ATQLEN_ATQLEN_S 0
+#define VF_SB_CPM_ATQLEN_ATQLEN_M MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQLEN_ATQVFE_S 28
+#define VF_SB_CPM_ATQLEN_ATQVFE_M BIT(28)
+#define VF_SB_CPM_ATQLEN_ATQOVFL_S 29
+#define VF_SB_CPM_ATQLEN_ATQOVFL_M BIT(29)
+#define VF_SB_CPM_ATQLEN_ATQCRIT_S 30
+#define VF_SB_CPM_ATQLEN_ATQCRIT_M BIT(30)
+#define VF_SB_CPM_ATQLEN_ATQENABLE_S 31
+#define VF_SB_CPM_ATQLEN_ATQENABLE_M BIT(31)
+#define VF_SB_CPM_ATQT(_VF128) (0x0022F000 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQT_MAX_INDEX 127
+#define VF_SB_CPM_ATQT_ATQT_S 0
+#define VF_SB_CPM_ATQT_ATQT_M MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_REM_DEV_CTL 0x002300EC /* Reset Source: CORER */
+#define VF_SB_CPM_REM_DEV_CTL_DEST_EN_S 0
+#define VF_SB_CPM_REM_DEV_CTL_DEST_EN_M MAKEMASK(0xFFFF, 0)
+#define VP_MBX_CPM_PF_VF_CTRL(_VP128) (0x00231800 + ((_VP128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VP_MBX_CPM_PF_VF_CTRL_MAX_INDEX 127
+#define VP_MBX_CPM_PF_VF_CTRL_QUEUE_EN_S 0
+#define VP_MBX_CPM_PF_VF_CTRL_QUEUE_EN_M BIT(0)
+#define VP_MBX_HLP_PF_VF_CTRL(_VP16) (0x00231A00 + ((_VP16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VP_MBX_HLP_PF_VF_CTRL_MAX_INDEX 15
+#define VP_MBX_HLP_PF_VF_CTRL_QUEUE_EN_S 0
+#define VP_MBX_HLP_PF_VF_CTRL_QUEUE_EN_M BIT(0)
+#define VP_MBX_PF_VF_CTRL(_VSI) (0x00230800 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VP_MBX_PF_VF_CTRL_MAX_INDEX 767
+#define VP_MBX_PF_VF_CTRL_QUEUE_EN_S 0
+#define VP_MBX_PF_VF_CTRL_QUEUE_EN_M BIT(0)
+#define VP_MBX_PSM_PF_VF_CTRL(_VP16) (0x00231A40 + ((_VP16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VP_MBX_PSM_PF_VF_CTRL_MAX_INDEX 15
+#define VP_MBX_PSM_PF_VF_CTRL_QUEUE_EN_S 0
+#define VP_MBX_PSM_PF_VF_CTRL_QUEUE_EN_M BIT(0)
+#define VP_SB_CPM_PF_VF_CTRL(_VP128) (0x00231C00 + ((_VP128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VP_SB_CPM_PF_VF_CTRL_MAX_INDEX 127
+#define VP_SB_CPM_PF_VF_CTRL_QUEUE_EN_S 0
+#define VP_SB_CPM_PF_VF_CTRL_QUEUE_EN_M BIT(0)
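+/* Illustrative sketch, not part of the autogenerated register list: the PF
+ * gates each VF mailbox through a per-VSI control register with a single
+ * QUEUE_EN bit; rd32()/wr32() and vsi_idx are assumed here.
+ *
+ *	u32 ctrl = rd32(hw, VP_MBX_PF_VF_CTRL(vsi_idx));
+ *	wr32(hw, VP_MBX_PF_VF_CTRL(vsi_idx), ctrl | VP_MBX_PF_VF_CTRL_QUEUE_EN_M);
+ */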
+#define GL_DCB_TDSCP2TC_BLOCK_DIS 0x00049218 /* Reset Source: CORER */
+#define GL_DCB_TDSCP2TC_BLOCK_DIS_DSCP2TC_BLOCK_DIS_S 0
+#define GL_DCB_TDSCP2TC_BLOCK_DIS_DSCP2TC_BLOCK_DIS_M BIT(0)
+#define GL_DCB_TDSCP2TC_BLOCK_IPV4(_i) (0x00049018 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_DCB_TDSCP2TC_BLOCK_IPV4_MAX_INDEX 63
+#define GL_DCB_TDSCP2TC_BLOCK_IPV4_TC_BLOCK_LUT_S 0
+#define GL_DCB_TDSCP2TC_BLOCK_IPV4_TC_BLOCK_LUT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_DCB_TDSCP2TC_BLOCK_IPV6(_i) (0x00049118 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_DCB_TDSCP2TC_BLOCK_IPV6_MAX_INDEX 63
+#define GL_DCB_TDSCP2TC_BLOCK_IPV6_TC_BLOCK_LUT_S 0
+#define GL_DCB_TDSCP2TC_BLOCK_IPV6_TC_BLOCK_LUT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_GENC 0x00083044 /* Reset Source: CORER */
+#define GLDCB_GENC_PCIRTT_S 0
+#define GLDCB_GENC_PCIRTT_M MAKEMASK(0xFFFF, 0)
+#define GLDCB_PRS_RETSTCC(_i) (0x002000B0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_PRS_RETSTCC_MAX_INDEX 31
+#define GLDCB_PRS_RETSTCC_BWSHARE_S 0
+#define GLDCB_PRS_RETSTCC_BWSHARE_M MAKEMASK(0x7F, 0)
+#define GLDCB_PRS_RETSTCC_ETSTC_S 31
+#define GLDCB_PRS_RETSTCC_ETSTC_M BIT(31)
+#define GLDCB_PRS_RSPMC 0x00200160 /* Reset Source: CORER */
+#define GLDCB_PRS_RSPMC_RSPM_S 0
+#define GLDCB_PRS_RSPMC_RSPM_M MAKEMASK(0xFF, 0)
+#define GLDCB_PRS_RSPMC_RPM_MODE_S 8
+#define GLDCB_PRS_RSPMC_RPM_MODE_M MAKEMASK(0x3, 8)
+#define GLDCB_PRS_RSPMC_PRR_MAX_EXP_S 10
+#define GLDCB_PRS_RSPMC_PRR_MAX_EXP_M MAKEMASK(0xF, 10)
+#define GLDCB_PRS_RSPMC_PFCTIMER_S 14
+#define GLDCB_PRS_RSPMC_PFCTIMER_M MAKEMASK(0x3FFF, 14)
+#define GLDCB_PRS_RSPMC_RPM_DIS_S 31
+#define GLDCB_PRS_RSPMC_RPM_DIS_M BIT(31)
+#define GLDCB_RETSTCC(_i) (0x00122140 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_RETSTCC_MAX_INDEX 31
+#define GLDCB_RETSTCC_BWSHARE_S 0
+#define GLDCB_RETSTCC_BWSHARE_M MAKEMASK(0x7F, 0)
+#define GLDCB_RETSTCC_ETSTC_S 31
+#define GLDCB_RETSTCC_ETSTC_M BIT(31)
+#define GLDCB_RETSTCS(_i) (0x001221C0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_RETSTCS_MAX_INDEX 31
+#define GLDCB_RETSTCS_CREDITS_S 0
+#define GLDCB_RETSTCS_CREDITS_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_RTC2PFC_RCB 0x00122100 /* Reset Source: CORER */
+#define GLDCB_RTC2PFC_RCB_TC2PFC_S 0
+#define GLDCB_RTC2PFC_RCB_TC2PFC_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_SWT_RETSTCC(_i) (0x0020A040 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_SWT_RETSTCC_MAX_INDEX 31
+#define GLDCB_SWT_RETSTCC_BWSHARE_S 0
+#define GLDCB_SWT_RETSTCC_BWSHARE_M MAKEMASK(0x7F, 0)
+#define GLDCB_SWT_RETSTCC_ETSTC_S 31
+#define GLDCB_SWT_RETSTCC_ETSTC_M BIT(31)
+#define GLDCB_TC2PFC 0x001D2694 /* Reset Source: CORER */
+#define GLDCB_TC2PFC_TC2PFC_S 0
+#define GLDCB_TC2PFC_TC2PFC_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TCB_MNG_SP 0x000AE12C /* Reset Source: CORER */
+#define GLDCB_TCB_MNG_SP_MNG_SP_S 0
+#define GLDCB_TCB_MNG_SP_MNG_SP_M BIT(0)
+#define GLDCB_TCB_TCLL_CFG 0x000AE134 /* Reset Source: CORER */
+#define GLDCB_TCB_TCLL_CFG_LLTC_S 0
+#define GLDCB_TCB_TCLL_CFG_LLTC_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TCB_WB_SP 0x000AE310 /* Reset Source: CORER */
+#define GLDCB_TCB_WB_SP_WB_SP_S 0
+#define GLDCB_TCB_WB_SP_WB_SP_M BIT(0)
+#define GLDCB_TCUPM_IMM_EN 0x000BC824 /* Reset Source: CORER */
+#define GLDCB_TCUPM_IMM_EN_IMM_EN_S 0
+#define GLDCB_TCUPM_IMM_EN_IMM_EN_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TCUPM_LEGACY_TC 0x000BC828 /* Reset Source: CORER */
+#define GLDCB_TCUPM_LEGACY_TC_LEGTC_S 0
+#define GLDCB_TCUPM_LEGACY_TC_LEGTC_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TCUPM_NO_EXCEED_DIS 0x000BC830 /* Reset Source: CORER */
+#define GLDCB_TCUPM_NO_EXCEED_DIS_NON_EXCEED_DIS_S 0
+#define GLDCB_TCUPM_NO_EXCEED_DIS_NON_EXCEED_DIS_M BIT(0)
+#define GLDCB_TCUPM_WB_DIS 0x000BC834 /* Reset Source: CORER */
+#define GLDCB_TCUPM_WB_DIS_PORT_DISABLE_S 0
+#define GLDCB_TCUPM_WB_DIS_PORT_DISABLE_M BIT(0)
+#define GLDCB_TCUPM_WB_DIS_TC_DISABLE_S 1
+#define GLDCB_TCUPM_WB_DIS_TC_DISABLE_M BIT(1)
+#define GLDCB_TFPFCI 0x0009949C /* Reset Source: CORER */
+#define GLDCB_TFPFCI_GLDCB_TFPFCI_S 0
+#define GLDCB_TFPFCI_GLDCB_TFPFCI_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TLPM_IMM_TCB 0x000A0190 /* Reset Source: CORER */
+#define GLDCB_TLPM_IMM_TCB_IMM_EN_S 0
+#define GLDCB_TLPM_IMM_TCB_IMM_EN_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TLPM_IMM_TCUPM 0x000A018C /* Reset Source: CORER */
+#define GLDCB_TLPM_IMM_TCUPM_IMM_EN_S 0
+#define GLDCB_TLPM_IMM_TCUPM_IMM_EN_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TLPM_PCI_DM 0x000A0180 /* Reset Source: CORER */
+#define GLDCB_TLPM_PCI_DM_MONITOR_S 0
+#define GLDCB_TLPM_PCI_DM_MONITOR_M MAKEMASK(0x7FFFF, 0)
+#define GLDCB_TLPM_PCI_DTHR 0x000A0184 /* Reset Source: CORER */
+#define GLDCB_TLPM_PCI_DTHR_PCI_TDATA_S 0
+#define GLDCB_TLPM_PCI_DTHR_PCI_TDATA_M MAKEMASK(0xFFF, 0)
+#define GLDCB_TPB_IMM_TLPM 0x00099468 /* Reset Source: CORER */
+#define GLDCB_TPB_IMM_TLPM_IMM_EN_S 0
+#define GLDCB_TPB_IMM_TLPM_IMM_EN_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TPB_IMM_TPB 0x0009946C /* Reset Source: CORER */
+#define GLDCB_TPB_IMM_TPB_IMM_EN_S 0
+#define GLDCB_TPB_IMM_TPB_IMM_EN_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TPB_TCLL_CFG 0x00099464 /* Reset Source: CORER */
+#define GLDCB_TPB_TCLL_CFG_LLTC_S 0
+#define GLDCB_TPB_TCLL_CFG_LLTC_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCB_BULK_DWRR_REG_QUANTA 0x000AE0E0 /* Reset Source: CORER */
+#define GLTCB_BULK_DWRR_REG_QUANTA_QUANTA_S 0
+#define GLTCB_BULK_DWRR_REG_QUANTA_QUANTA_M MAKEMASK(0x7FF, 0)
+#define GLTCB_BULK_DWRR_REG_SAT 0x000AE0F0 /* Reset Source: CORER */
+#define GLTCB_BULK_DWRR_REG_SAT_SATURATION_S 0
+#define GLTCB_BULK_DWRR_REG_SAT_SATURATION_M MAKEMASK(0x1FFFF, 0)
+#define GLTCB_BULK_DWRR_WB_QUANTA 0x000AE0E4 /* Reset Source: CORER */
+#define GLTCB_BULK_DWRR_WB_QUANTA_QUANTA_S 0
+#define GLTCB_BULK_DWRR_WB_QUANTA_QUANTA_M MAKEMASK(0x7FF, 0)
+#define GLTCB_BULK_DWRR_WB_SAT 0x000AE0F4 /* Reset Source: CORER */
+#define GLTCB_BULK_DWRR_WB_SAT_SATURATION_S 0
+#define GLTCB_BULK_DWRR_WB_SAT_SATURATION_M MAKEMASK(0x1FFFF, 0)
+#define GLTCB_CREDIT_EXP_CTL 0x000AE120 /* Reset Source: CORER */
+#define GLTCB_CREDIT_EXP_CTL_EN_S 0
+#define GLTCB_CREDIT_EXP_CTL_EN_M BIT(0)
+#define GLTCB_CREDIT_EXP_CTL_MIN_PKT_S 1
+#define GLTCB_CREDIT_EXP_CTL_MIN_PKT_M MAKEMASK(0x1FF, 1)
+#define GLTCB_LL_DWRR_REG_QUANTA 0x000AE0E8 /* Reset Source: CORER */
+#define GLTCB_LL_DWRR_REG_QUANTA_QUANTA_S 0
+#define GLTCB_LL_DWRR_REG_QUANTA_QUANTA_M MAKEMASK(0x7FF, 0)
+#define GLTCB_LL_DWRR_REG_SAT 0x000AE0F8 /* Reset Source: CORER */
+#define GLTCB_LL_DWRR_REG_SAT_SATURATION_S 0
+#define GLTCB_LL_DWRR_REG_SAT_SATURATION_M MAKEMASK(0x1FFFF, 0)
+#define GLTCB_LL_DWRR_WB_QUANTA 0x000AE0EC /* Reset Source: CORER */
+#define GLTCB_LL_DWRR_WB_QUANTA_QUANTA_S 0
+#define GLTCB_LL_DWRR_WB_QUANTA_QUANTA_M MAKEMASK(0x7FF, 0)
+#define GLTCB_LL_DWRR_WB_SAT 0x000AE0FC /* Reset Source: CORER */
+#define GLTCB_LL_DWRR_WB_SAT_SATURATION_S 0
+#define GLTCB_LL_DWRR_WB_SAT_SATURATION_M MAKEMASK(0x1FFFF, 0)
+#define GLTCB_WB_RL 0x000AE238 /* Reset Source: CORER */
+#define GLTCB_WB_RL_PERIOD_S 0
+#define GLTCB_WB_RL_PERIOD_M MAKEMASK(0xFFFF, 0)
+#define GLTCB_WB_RL_EN_S 16
+#define GLTCB_WB_RL_EN_M BIT(16)
+#define GLTPB_WB_RL 0x00099460 /* Reset Source: CORER */
+#define GLTPB_WB_RL_PERIOD_S 0
+#define GLTPB_WB_RL_PERIOD_M MAKEMASK(0xFFFF, 0)
+#define GLTPB_WB_RL_EN_S 16
+#define GLTPB_WB_RL_EN_M BIT(16)
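+/* Illustrative sketch, not part of the autogenerated register list: the WB
+ * rate-limiter registers pack a 16-bit period and an enable bit into one
+ * word; the period value and wr32() are assumed here.
+ *
+ *	wr32(hw, GLTCB_WB_RL,
+ *	     ((period << GLTCB_WB_RL_PERIOD_S) & GLTCB_WB_RL_PERIOD_M) |
+ *	     GLTCB_WB_RL_EN_M);
+ */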
+#define PRTDCB_FCCFG 0x001E4640 /* Reset Source: GLOBR */
+#define PRTDCB_FCCFG_TFCE_S 3
+#define PRTDCB_FCCFG_TFCE_M MAKEMASK(0x3, 3)
+#define PRTDCB_FCRTV 0x001E4600 /* Reset Source: GLOBR */
+#define PRTDCB_FCRTV_FC_REFRESH_TH_S 0
+#define PRTDCB_FCRTV_FC_REFRESH_TH_M MAKEMASK(0xFFFF, 0)
+#define PRTDCB_FCTTVN(_i) (0x001E4580 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: GLOBR */
+#define PRTDCB_FCTTVN_MAX_INDEX 3
+#define PRTDCB_FCTTVN_TTV_2N_S 0
+#define PRTDCB_FCTTVN_TTV_2N_M MAKEMASK(0xFFFF, 0)
+#define PRTDCB_FCTTVN_TTV_2N_P1_S 16
+#define PRTDCB_FCTTVN_TTV_2N_P1_M MAKEMASK(0xFFFF, 16)
+#define PRTDCB_GENC 0x00083000 /* Reset Source: CORER */
+#define PRTDCB_GENC_NUMTC_S 2
+#define PRTDCB_GENC_NUMTC_M MAKEMASK(0xF, 2)
+#define PRTDCB_GENC_FCOEUP_S 6
+#define PRTDCB_GENC_FCOEUP_M MAKEMASK(0x7, 6)
+#define PRTDCB_GENC_FCOEUP_VALID_S 9
+#define PRTDCB_GENC_FCOEUP_VALID_M BIT(9)
+#define PRTDCB_GENC_PFCLDA_S 16
+#define PRTDCB_GENC_PFCLDA_M MAKEMASK(0xFFFF, 16)
+#define PRTDCB_GENS 0x00083020 /* Reset Source: CORER */
+#define PRTDCB_GENS_DCBX_STATUS_S 0
+#define PRTDCB_GENS_DCBX_STATUS_M MAKEMASK(0x7, 0)
+#define PRTDCB_PRS_RETSC 0x002001A0 /* Reset Source: CORER */
+#define PRTDCB_PRS_RETSC_ETS_MODE_S 0
+#define PRTDCB_PRS_RETSC_ETS_MODE_M BIT(0)
+#define PRTDCB_PRS_RETSC_NON_ETS_MODE_S 1
+#define PRTDCB_PRS_RETSC_NON_ETS_MODE_M BIT(1)
+#define PRTDCB_PRS_RETSC_ETS_MAX_EXP_S 2
+#define PRTDCB_PRS_RETSC_ETS_MAX_EXP_M MAKEMASK(0xF, 2)
+#define PRTDCB_PRS_RPRRC 0x00200180 /* Reset Source: CORER */
+#define PRTDCB_PRS_RPRRC_BWSHARE_S 0
+#define PRTDCB_PRS_RPRRC_BWSHARE_M MAKEMASK(0x3FF, 0)
+#define PRTDCB_PRS_RPRRC_BWSHARE_DIS_S 31
+#define PRTDCB_PRS_RPRRC_BWSHARE_DIS_M BIT(31)
+#define PRTDCB_RETSC 0x001222A0 /* Reset Source: CORER */
+#define PRTDCB_RETSC_ETS_MODE_S 0
+#define PRTDCB_RETSC_ETS_MODE_M BIT(0)
+#define PRTDCB_RETSC_NON_ETS_MODE_S 1
+#define PRTDCB_RETSC_NON_ETS_MODE_M BIT(1)
+#define PRTDCB_RETSC_ETS_MAX_EXP_S 2
+#define PRTDCB_RETSC_ETS_MAX_EXP_M MAKEMASK(0xF, 2)
+#define PRTDCB_RPRRC 0x001220C0 /* Reset Source: CORER */
+#define PRTDCB_RPRRC_BWSHARE_S 0
+#define PRTDCB_RPRRC_BWSHARE_M MAKEMASK(0x3FF, 0)
+#define PRTDCB_RPRRC_BWSHARE_DIS_S 31
+#define PRTDCB_RPRRC_BWSHARE_DIS_M BIT(31)
+#define PRTDCB_RPRRS 0x001220E0 /* Reset Source: CORER */
+#define PRTDCB_RPRRS_CREDITS_S 0
+#define PRTDCB_RPRRS_CREDITS_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTDCB_RUP_TDPU 0x00040960 /* Reset Source: CORER */
+#define PRTDCB_RUP_TDPU_NOVLANUP_S 0
+#define PRTDCB_RUP_TDPU_NOVLANUP_M MAKEMASK(0x7, 0)
+#define PRTDCB_RUP2TC 0x001D2640 /* Reset Source: CORER */
+#define PRTDCB_RUP2TC_UP0TC_S 0
+#define PRTDCB_RUP2TC_UP0TC_M MAKEMASK(0x7, 0)
+#define PRTDCB_RUP2TC_UP1TC_S 3
+#define PRTDCB_RUP2TC_UP1TC_M MAKEMASK(0x7, 3)
+#define PRTDCB_RUP2TC_UP2TC_S 6
+#define PRTDCB_RUP2TC_UP2TC_M MAKEMASK(0x7, 6)
+#define PRTDCB_RUP2TC_UP3TC_S 9
+#define PRTDCB_RUP2TC_UP3TC_M MAKEMASK(0x7, 9)
+#define PRTDCB_RUP2TC_UP4TC_S 12
+#define PRTDCB_RUP2TC_UP4TC_M MAKEMASK(0x7, 12)
+#define PRTDCB_RUP2TC_UP5TC_S 15
+#define PRTDCB_RUP2TC_UP5TC_M MAKEMASK(0x7, 15)
+#define PRTDCB_RUP2TC_UP6TC_S 18
+#define PRTDCB_RUP2TC_UP6TC_M MAKEMASK(0x7, 18)
+#define PRTDCB_RUP2TC_UP7TC_S 21
+#define PRTDCB_RUP2TC_UP7TC_M MAKEMASK(0x7, 21)
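+/* Illustrative sketch, not part of the autogenerated register list: each user
+ * priority's traffic class is a 3-bit field at bit offset UP*3 (as the eight
+ * field defines above confirm), so the RX UP-to-TC mapping can be computed
+ * for any up in 0..7 rather than switched over the defines; rd32() is
+ * assumed here.
+ *
+ *	u8 tc = (rd32(hw, PRTDCB_RUP2TC) >> (up * 3)) & 0x7;
+ */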
+#define PRTDCB_SWT_RETSC 0x0020A140 /* Reset Source: CORER */
+#define PRTDCB_SWT_RETSC_ETS_MODE_S 0
+#define PRTDCB_SWT_RETSC_ETS_MODE_M BIT(0)
+#define PRTDCB_SWT_RETSC_NON_ETS_MODE_S 1
+#define PRTDCB_SWT_RETSC_NON_ETS_MODE_M BIT(1)
+#define PRTDCB_SWT_RETSC_ETS_MAX_EXP_S 2
+#define PRTDCB_SWT_RETSC_ETS_MAX_EXP_M MAKEMASK(0xF, 2)
+#define PRTDCB_TCB_DWRR_CREDITS 0x000AE000 /* Reset Source: CORER */
+#define PRTDCB_TCB_DWRR_CREDITS_CREDITS_S 0
+#define PRTDCB_TCB_DWRR_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define PRTDCB_TCB_DWRR_QUANTA 0x000AE020 /* Reset Source: CORER */
+#define PRTDCB_TCB_DWRR_QUANTA_QUANTA_S 0
+#define PRTDCB_TCB_DWRR_QUANTA_QUANTA_M MAKEMASK(0x7FF, 0)
+#define PRTDCB_TCB_DWRR_SAT 0x000AE040 /* Reset Source: CORER */
+#define PRTDCB_TCB_DWRR_SAT_SATURATION_S 0
+#define PRTDCB_TCB_DWRR_SAT_SATURATION_M MAKEMASK(0x1FFFF, 0)
+#define PRTDCB_TCUPM_NO_EXCEED_DM 0x000BC3C0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_NO_EXCEED_DM_MONITOR_S 0
+#define PRTDCB_TCUPM_NO_EXCEED_DM_MONITOR_M MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TCUPM_REG_CM 0x000BC360 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_CM_MONITOR_S 0
+#define PRTDCB_TCUPM_REG_CM_MONITOR_M MAKEMASK(0x7FFF, 0)
+#define PRTDCB_TCUPM_REG_CTHR 0x000BC380 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_CTHR_PORTOFFTH_H_S 0
+#define PRTDCB_TCUPM_REG_CTHR_PORTOFFTH_H_M MAKEMASK(0x7FFF, 0)
+#define PRTDCB_TCUPM_REG_CTHR_PORTOFFTH_L_S 15
+#define PRTDCB_TCUPM_REG_CTHR_PORTOFFTH_L_M MAKEMASK(0x7FFF, 15)
+#define PRTDCB_TCUPM_REG_DM 0x000BC3A0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_DM_MONITOR_S 0
+#define PRTDCB_TCUPM_REG_DM_MONITOR_M MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TCUPM_REG_DTHR 0x000BC3E0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_DTHR_PORTOFFTH_H_S 0
+#define PRTDCB_TCUPM_REG_DTHR_PORTOFFTH_H_M MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_REG_DTHR_PORTOFFTH_L_S 12
+#define PRTDCB_TCUPM_REG_DTHR_PORTOFFTH_L_M MAKEMASK(0xFFF, 12)
+#define PRTDCB_TCUPM_REG_PE_HB_DM 0x000BC400 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_PE_HB_DM_MONITOR_S 0
+#define PRTDCB_TCUPM_REG_PE_HB_DM_MONITOR_M MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR 0x000BC420 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR_PORTOFFTH_H_S 0
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR_PORTOFFTH_H_M MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR_PORTOFFTH_L_S 12
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR_PORTOFFTH_L_M MAKEMASK(0xFFF, 12)
+#define PRTDCB_TCUPM_WAIT_PFC_CM 0x000BC440 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_CM_MONITOR_S 0
+#define PRTDCB_TCUPM_WAIT_PFC_CM_MONITOR_M MAKEMASK(0x7FFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_CTHR 0x000BC460 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_CTHR_PORTOFFTH_S 0
+#define PRTDCB_TCUPM_WAIT_PFC_CTHR_PORTOFFTH_M MAKEMASK(0x7FFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_DM 0x000BC480 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_DM_MONITOR_S 0
+#define PRTDCB_TCUPM_WAIT_PFC_DM_MONITOR_M MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_DTHR 0x000BC4A0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_DTHR_PORTOFFTH_S 0
+#define PRTDCB_TCUPM_WAIT_PFC_DTHR_PORTOFFTH_M MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DM 0x000BC4C0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DM_MONITOR_S 0
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DM_MONITOR_M MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DTHR 0x000BC4E0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DTHR_PORTOFFTH_S 0
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DTHR_PORTOFFTH_M MAKEMASK(0xFFF, 0)
+#define PRTDCB_TDPUC 0x00040940 /* Reset Source: CORER */
+#define PRTDCB_TDPUC_MAX_TXFRAME_S 0
+#define PRTDCB_TDPUC_MAX_TXFRAME_M MAKEMASK(0xFFFF, 0)
+#define PRTDCB_TDPUC_MAL_LENGTH_S 16
+#define PRTDCB_TDPUC_MAL_LENGTH_M BIT(16)
+#define PRTDCB_TDPUC_MAL_CMD_S 17
+#define PRTDCB_TDPUC_MAL_CMD_M BIT(17)
+#define PRTDCB_TDPUC_TTL_DROP_S 18
+#define PRTDCB_TDPUC_TTL_DROP_M BIT(18)
+#define PRTDCB_TDPUC_UR_DROP_S 19
+#define PRTDCB_TDPUC_UR_DROP_M BIT(19)
+#define PRTDCB_TDPUC_DUMMY_S 20
+#define PRTDCB_TDPUC_DUMMY_M BIT(20)
+#define PRTDCB_TDPUC_BIG_PKT_SIZE_S 21
+#define PRTDCB_TDPUC_BIG_PKT_SIZE_M BIT(21)
+#define PRTDCB_TDPUC_L2_ACCEPT_FAIL_S 22
+#define PRTDCB_TDPUC_L2_ACCEPT_FAIL_M BIT(22)
+#define PRTDCB_TDPUC_DSCP_CHECK_FAIL_S 23
+#define PRTDCB_TDPUC_DSCP_CHECK_FAIL_M BIT(23)
+#define PRTDCB_TDPUC_RCU_ANTISPOOF_S 24
+#define PRTDCB_TDPUC_RCU_ANTISPOOF_M BIT(24)
+#define PRTDCB_TDPUC_NIC_DSI_S 25
+#define PRTDCB_TDPUC_NIC_DSI_M BIT(25)
+#define PRTDCB_TDPUC_NIC_IPSEC_S 26
+#define PRTDCB_TDPUC_NIC_IPSEC_M BIT(26)
+#define PRTDCB_TDPUC_CLEAR_DROP_S 31
+#define PRTDCB_TDPUC_CLEAR_DROP_M BIT(31)
+#define PRTDCB_TFCS 0x001E4560 /* Reset Source: GLOBR */
+#define PRTDCB_TFCS_TXOFF_S 0
+#define PRTDCB_TFCS_TXOFF_M BIT(0)
+#define PRTDCB_TFCS_TXOFF0_S 8
+#define PRTDCB_TFCS_TXOFF0_M BIT(8)
+#define PRTDCB_TFCS_TXOFF1_S 9
+#define PRTDCB_TFCS_TXOFF1_M BIT(9)
+#define PRTDCB_TFCS_TXOFF2_S 10
+#define PRTDCB_TFCS_TXOFF2_M BIT(10)
+#define PRTDCB_TFCS_TXOFF3_S 11
+#define PRTDCB_TFCS_TXOFF3_M BIT(11)
+#define PRTDCB_TFCS_TXOFF4_S 12
+#define PRTDCB_TFCS_TXOFF4_M BIT(12)
+#define PRTDCB_TFCS_TXOFF5_S 13
+#define PRTDCB_TFCS_TXOFF5_M BIT(13)
+#define PRTDCB_TFCS_TXOFF6_S 14
+#define PRTDCB_TFCS_TXOFF6_M BIT(14)
+#define PRTDCB_TFCS_TXOFF7_S 15
+#define PRTDCB_TFCS_TXOFF7_M BIT(15)
+#define PRTDCB_TLPM_REG_DM 0x000A0000 /* Reset Source: CORER */
+#define PRTDCB_TLPM_REG_DM_MONITOR_S 0
+#define PRTDCB_TLPM_REG_DM_MONITOR_M MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TLPM_REG_DTHR 0x000A0020 /* Reset Source: CORER */
+#define PRTDCB_TLPM_REG_DTHR_PORTOFFTH_H_S 0
+#define PRTDCB_TLPM_REG_DTHR_PORTOFFTH_H_M MAKEMASK(0xFFF, 0)
+#define PRTDCB_TLPM_REG_DTHR_PORTOFFTH_L_S 12
+#define PRTDCB_TLPM_REG_DTHR_PORTOFFTH_L_M MAKEMASK(0xFFF, 12)
+#define PRTDCB_TLPM_WAIT_PFC_DM 0x000A0040 /* Reset Source: CORER */
+#define PRTDCB_TLPM_WAIT_PFC_DM_MONITOR_S 0
+#define PRTDCB_TLPM_WAIT_PFC_DM_MONITOR_M MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TLPM_WAIT_PFC_DTHR 0x000A0060 /* Reset Source: CORER */
+#define PRTDCB_TLPM_WAIT_PFC_DTHR_PORTOFFTH_S 0
+#define PRTDCB_TLPM_WAIT_PFC_DTHR_PORTOFFTH_M MAKEMASK(0xFFF, 0)
+#define PRTDCB_TPFCTS(_i) (0x001E4660 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: GLOBR */
+#define PRTDCB_TPFCTS_MAX_INDEX 7
+#define PRTDCB_TPFCTS_PFCTIMER_S 0
+#define PRTDCB_TPFCTS_PFCTIMER_M MAKEMASK(0x3FFF, 0)
+#define PRTDCB_TUP2TC 0x001D26C0 /* Reset Source: CORER */
+#define PRTDCB_TUP2TC_UP0TC_S 0
+#define PRTDCB_TUP2TC_UP0TC_M MAKEMASK(0x7, 0)
+#define PRTDCB_TUP2TC_UP1TC_S 3
+#define PRTDCB_TUP2TC_UP1TC_M MAKEMASK(0x7, 3)
+#define PRTDCB_TUP2TC_UP2TC_S 6
+#define PRTDCB_TUP2TC_UP2TC_M MAKEMASK(0x7, 6)
+#define PRTDCB_TUP2TC_UP3TC_S 9
+#define PRTDCB_TUP2TC_UP3TC_M MAKEMASK(0x7, 9)
+#define PRTDCB_TUP2TC_UP4TC_S 12
+#define PRTDCB_TUP2TC_UP4TC_M MAKEMASK(0x7, 12)
+#define PRTDCB_TUP2TC_UP5TC_S 15
+#define PRTDCB_TUP2TC_UP5TC_M MAKEMASK(0x7, 15)
+#define PRTDCB_TUP2TC_UP6TC_S 18
+#define PRTDCB_TUP2TC_UP6TC_M MAKEMASK(0x7, 18)
+#define PRTDCB_TUP2TC_UP7TC_S 21
+#define PRTDCB_TUP2TC_UP7TC_M MAKEMASK(0x7, 21)
+#define PRTDCB_TX_DSCP2UP_CTL 0x00040980 /* Reset Source: CORER */
+#define PRTDCB_TX_DSCP2UP_CTL_DSCP2UP_ENA_S 0
+#define PRTDCB_TX_DSCP2UP_CTL_DSCP2UP_ENA_M BIT(0)
+#define PRTDCB_TX_DSCP2UP_CTL_DSCP_DEFAULT_UP_S 1
+#define PRTDCB_TX_DSCP2UP_CTL_DSCP_DEFAULT_UP_M MAKEMASK(0x7, 1)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT(_i) (0x000409A0 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: CORER */
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_MAX_INDEX 7
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_0_S 0
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_0_M MAKEMASK(0x7, 0)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_1_S 4
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_1_M MAKEMASK(0x7, 4)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_2_S 8
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_2_M MAKEMASK(0x7, 8)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_3_S 12
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_3_M MAKEMASK(0x7, 12)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_4_S 16
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_4_M MAKEMASK(0x7, 16)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_5_S 20
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_5_M MAKEMASK(0x7, 20)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_6_S 24
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_6_M MAKEMASK(0x7, 24)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_7_S 28
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_7_M MAKEMASK(0x7, 28)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT(_i) (0x00040AA0 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: CORER */
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_MAX_INDEX 7
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_0_S 0
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_0_M MAKEMASK(0x7, 0)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_1_S 4
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_1_M MAKEMASK(0x7, 4)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_2_S 8
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_2_M MAKEMASK(0x7, 8)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_3_S 12
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_3_M MAKEMASK(0x7, 12)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_4_S 16
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_4_M MAKEMASK(0x7, 16)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_5_S 20
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_5_M MAKEMASK(0x7, 20)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_6_S 24
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_6_M MAKEMASK(0x7, 24)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_7_S 28
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_7_M MAKEMASK(0x7, 28)
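+/* Illustrative sketch, not part of the autogenerated register list: each
+ * DSCP-to-UP LUT register holds eight 3-bit UP values on 4-bit boundaries,
+ * so the eight registers per family cover all 64 DSCP codepoints, assuming
+ * entries are laid out in ascending DSCP order; rd32() is assumed here.
+ *
+ *	u8 up = (rd32(hw, PRTDCB_TX_DSCP2UP_IPV4_LUT(dscp / 8)) >>
+ *		 ((dscp % 8) * 4)) & 0x7;
+ */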
+#define PRTTCB_BULK_DWRR_REG_CREDITS 0x000AE060 /* Reset Source: CORER */
+#define PRTTCB_BULK_DWRR_REG_CREDITS_CREDITS_S 0
+#define PRTTCB_BULK_DWRR_REG_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define PRTTCB_BULK_DWRR_WB_CREDITS 0x000AE080 /* Reset Source: CORER */
+#define PRTTCB_BULK_DWRR_WB_CREDITS_CREDITS_S 0
+#define PRTTCB_BULK_DWRR_WB_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define PRTTCB_CREDIT_EXP 0x000AE100 /* Reset Source: CORER */
+#define PRTTCB_CREDIT_EXP_EXPANSION_S 0
+#define PRTTCB_CREDIT_EXP_EXPANSION_M MAKEMASK(0xFF, 0)
+#define PRTTCB_LL_DWRR_REG_CREDITS 0x000AE0A0 /* Reset Source: CORER */
+#define PRTTCB_LL_DWRR_REG_CREDITS_CREDITS_S 0
+#define PRTTCB_LL_DWRR_REG_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define PRTTCB_LL_DWRR_WB_CREDITS 0x000AE0C0 /* Reset Source: CORER */
+#define PRTTCB_LL_DWRR_WB_CREDITS_CREDITS_S 0
+#define PRTTCB_LL_DWRR_WB_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define TCDCB_TCUPM_WAIT_CM(_i) (0x000BC520 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_CM_MAX_INDEX 31
+#define TCDCB_TCUPM_WAIT_CM_MONITOR_S 0
+#define TCDCB_TCUPM_WAIT_CM_MONITOR_M MAKEMASK(0x7FFF, 0)
+#define TCDCB_TCUPM_WAIT_CTHR(_i) (0x000BC5A0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_CTHR_MAX_INDEX 31
+#define TCDCB_TCUPM_WAIT_CTHR_TCOFFTH_S 0
+#define TCDCB_TCUPM_WAIT_CTHR_TCOFFTH_M MAKEMASK(0x7FFF, 0)
+#define TCDCB_TCUPM_WAIT_DM(_i) (0x000BC620 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_DM_MAX_INDEX 31
+#define TCDCB_TCUPM_WAIT_DM_MONITOR_S 0
+#define TCDCB_TCUPM_WAIT_DM_MONITOR_M MAKEMASK(0x7FFFF, 0)
+#define TCDCB_TCUPM_WAIT_DTHR(_i) (0x000BC6A0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_DTHR_MAX_INDEX 31
+#define TCDCB_TCUPM_WAIT_DTHR_TCOFFTH_S 0
+#define TCDCB_TCUPM_WAIT_DTHR_TCOFFTH_M MAKEMASK(0xFFF, 0)
+#define TCDCB_TCUPM_WAIT_PE_HB_DM(_i) (0x000BC720 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_PE_HB_DM_MAX_INDEX 31
+#define TCDCB_TCUPM_WAIT_PE_HB_DM_MONITOR_S 0
+#define TCDCB_TCUPM_WAIT_PE_HB_DM_MONITOR_M MAKEMASK(0xFFF, 0)
+#define TCDCB_TCUPM_WAIT_PE_HB_DTHR(_i) (0x000BC7A0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_PE_HB_DTHR_MAX_INDEX 31
+#define TCDCB_TCUPM_WAIT_PE_HB_DTHR_TCOFFTH_S 0
+#define TCDCB_TCUPM_WAIT_PE_HB_DTHR_TCOFFTH_M MAKEMASK(0xFFF, 0)
+#define TCDCB_TLPM_WAIT_DM(_i) (0x000A0080 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TLPM_WAIT_DM_MAX_INDEX 31
+#define TCDCB_TLPM_WAIT_DM_MONITOR_S 0
+#define TCDCB_TLPM_WAIT_DM_MONITOR_M MAKEMASK(0x7FFFF, 0)
+#define TCDCB_TLPM_WAIT_DTHR(_i) (0x000A0100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TLPM_WAIT_DTHR_MAX_INDEX 31
+#define TCDCB_TLPM_WAIT_DTHR_TCOFFTH_S 0
+#define TCDCB_TLPM_WAIT_DTHR_TCOFFTH_M MAKEMASK(0xFFF, 0)
+#define TCTCB_WB_RL_TC_CFG(_i) (0x000AE138 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCTCB_WB_RL_TC_CFG_MAX_INDEX 31
+#define TCTCB_WB_RL_TC_CFG_TOKENS_S 0
+#define TCTCB_WB_RL_TC_CFG_TOKENS_M MAKEMASK(0xFFF, 0)
+#define TCTCB_WB_RL_TC_CFG_BURST_SIZE_S 12
+#define TCTCB_WB_RL_TC_CFG_BURST_SIZE_M MAKEMASK(0x3FF, 12)
+#define TCTCB_WB_RL_TC_STAT(_i) (0x000AE1B8 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCTCB_WB_RL_TC_STAT_MAX_INDEX 31
+#define TCTCB_WB_RL_TC_STAT_BUCKET_S 0
+#define TCTCB_WB_RL_TC_STAT_BUCKET_M MAKEMASK(0x1FFFF, 0)
+#define TPB_BULK_DWRR_REG_QUANTA 0x00099340 /* Reset Source: CORER */
+#define TPB_BULK_DWRR_REG_QUANTA_QUANTA_S 0
+#define TPB_BULK_DWRR_REG_QUANTA_QUANTA_M MAKEMASK(0x7FF, 0)
+#define TPB_BULK_DWRR_REG_SAT 0x00099350 /* Reset Source: CORER */
+#define TPB_BULK_DWRR_REG_SAT_SATURATION_S 0
+#define TPB_BULK_DWRR_REG_SAT_SATURATION_M MAKEMASK(0x1FFFF, 0)
+#define TPB_BULK_DWRR_WB_QUANTA 0x00099344 /* Reset Source: CORER */
+#define TPB_BULK_DWRR_WB_QUANTA_QUANTA_S 0
+#define TPB_BULK_DWRR_WB_QUANTA_QUANTA_M MAKEMASK(0x7FF, 0)
+#define TPB_BULK_DWRR_WB_SAT 0x00099354 /* Reset Source: CORER */
+#define TPB_BULK_DWRR_WB_SAT_SATURATION_S 0
+#define TPB_BULK_DWRR_WB_SAT_SATURATION_M MAKEMASK(0x1FFFF, 0)
+#define TPB_GLDCB_TCB_WB_SP 0x0009966C /* Reset Source: CORER */
+#define TPB_GLDCB_TCB_WB_SP_WB_SP_S 0
+#define TPB_GLDCB_TCB_WB_SP_WB_SP_M BIT(0)
+#define TPB_GLTCB_CREDIT_EXP_CTL 0x00099664 /* Reset Source: CORER */
+#define TPB_GLTCB_CREDIT_EXP_CTL_EN_S 0
+#define TPB_GLTCB_CREDIT_EXP_CTL_EN_M BIT(0)
+#define TPB_GLTCB_CREDIT_EXP_CTL_MIN_PKT_S 1
+#define TPB_GLTCB_CREDIT_EXP_CTL_MIN_PKT_M MAKEMASK(0x1FF, 1)
+#define TPB_LL_DWRR_REG_QUANTA 0x00099348 /* Reset Source: CORER */
+#define TPB_LL_DWRR_REG_QUANTA_QUANTA_S 0
+#define TPB_LL_DWRR_REG_QUANTA_QUANTA_M MAKEMASK(0x7FF, 0)
+#define TPB_LL_DWRR_REG_SAT 0x00099358 /* Reset Source: CORER */
+#define TPB_LL_DWRR_REG_SAT_SATURATION_S 0
+#define TPB_LL_DWRR_REG_SAT_SATURATION_M MAKEMASK(0x1FFFF, 0)
+#define TPB_LL_DWRR_WB_QUANTA 0x0009934C /* Reset Source: CORER */
+#define TPB_LL_DWRR_WB_QUANTA_QUANTA_S 0
+#define TPB_LL_DWRR_WB_QUANTA_QUANTA_M MAKEMASK(0x7FF, 0)
+#define TPB_LL_DWRR_WB_SAT 0x0009935C /* Reset Source: CORER */
+#define TPB_LL_DWRR_WB_SAT_SATURATION_S 0
+#define TPB_LL_DWRR_WB_SAT_SATURATION_M MAKEMASK(0x1FFFF, 0)
+#define TPB_PRTDCB_TCB_DWRR_CREDITS 0x000991C0 /* Reset Source: CORER */
+#define TPB_PRTDCB_TCB_DWRR_CREDITS_CREDITS_S 0
+#define TPB_PRTDCB_TCB_DWRR_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define TPB_PRTDCB_TCB_DWRR_QUANTA 0x00099220 /* Reset Source: CORER */
+#define TPB_PRTDCB_TCB_DWRR_QUANTA_QUANTA_S 0
+#define TPB_PRTDCB_TCB_DWRR_QUANTA_QUANTA_M MAKEMASK(0x7FF, 0)
+#define TPB_PRTDCB_TCB_DWRR_SAT 0x00099260 /* Reset Source: CORER */
+#define TPB_PRTDCB_TCB_DWRR_SAT_SATURATION_S 0
+#define TPB_PRTDCB_TCB_DWRR_SAT_SATURATION_M MAKEMASK(0x1FFFF, 0)
+#define TPB_PRTTCB_BULK_DWRR_REG_CREDITS 0x000992A0 /* Reset Source: CORER */
+#define TPB_PRTTCB_BULK_DWRR_REG_CREDITS_CREDITS_S 0
+#define TPB_PRTTCB_BULK_DWRR_REG_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define TPB_PRTTCB_BULK_DWRR_WB_CREDITS 0x000992C0 /* Reset Source: CORER */
+#define TPB_PRTTCB_BULK_DWRR_WB_CREDITS_CREDITS_S 0
+#define TPB_PRTTCB_BULK_DWRR_WB_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define TPB_PRTTCB_CREDIT_EXP 0x00099644 /* Reset Source: CORER */
+#define TPB_PRTTCB_CREDIT_EXP_EXPANSION_S 0
+#define TPB_PRTTCB_CREDIT_EXP_EXPANSION_M MAKEMASK(0xFF, 0)
+#define TPB_PRTTCB_LL_DWRR_REG_CREDITS 0x00099300 /* Reset Source: CORER */
+#define TPB_PRTTCB_LL_DWRR_REG_CREDITS_CREDITS_S 0
+#define TPB_PRTTCB_LL_DWRR_REG_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define TPB_PRTTCB_LL_DWRR_WB_CREDITS 0x00099320 /* Reset Source: CORER */
+#define TPB_PRTTCB_LL_DWRR_WB_CREDITS_CREDITS_S 0
+#define TPB_PRTTCB_LL_DWRR_WB_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define TPB_WB_RL_TC_CFG(_i) (0x00099360 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TPB_WB_RL_TC_CFG_MAX_INDEX 31
+#define TPB_WB_RL_TC_CFG_TOKENS_S 0
+#define TPB_WB_RL_TC_CFG_TOKENS_M MAKEMASK(0xFFF, 0)
+#define TPB_WB_RL_TC_CFG_BURST_SIZE_S 12
+#define TPB_WB_RL_TC_CFG_BURST_SIZE_M MAKEMASK(0x3FF, 12)
+#define TPB_WB_RL_TC_STAT(_i) (0x000993E0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TPB_WB_RL_TC_STAT_MAX_INDEX 31
+#define TPB_WB_RL_TC_STAT_BUCKET_S 0
+#define TPB_WB_RL_TC_STAT_BUCKET_M MAKEMASK(0x1FFFF, 0)
+#define GL_ACLEXT_CDMD_L1SEL(_i) (0x00210054 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_CDMD_L1SEL_MAX_INDEX 2
+#define GL_ACLEXT_CDMD_L1SEL_RX_SEL_S 0
+#define GL_ACLEXT_CDMD_L1SEL_RX_SEL_M MAKEMASK(0x1F, 0)
+#define GL_ACLEXT_CDMD_L1SEL_TX_SEL_S 8
+#define GL_ACLEXT_CDMD_L1SEL_TX_SEL_M MAKEMASK(0x1F, 8)
+#define GL_ACLEXT_CDMD_L1SEL_AUX0_SEL_S 16
+#define GL_ACLEXT_CDMD_L1SEL_AUX0_SEL_M MAKEMASK(0x1F, 16)
+#define GL_ACLEXT_CDMD_L1SEL_AUX1_SEL_S 24
+#define GL_ACLEXT_CDMD_L1SEL_AUX1_SEL_M MAKEMASK(0x1F, 24)
+#define GL_ACLEXT_CDMD_L1SEL_BIDIR_ENA_S 30
+#define GL_ACLEXT_CDMD_L1SEL_BIDIR_ENA_M MAKEMASK(0x3, 30)
+#define GL_ACLEXT_CTLTBL_L2ADDR(_i) (0x00210084 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_CTLTBL_L2ADDR_MAX_INDEX 2
+#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_OFF_S 0
+#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_OFF_M MAKEMASK(0x7, 0)
+#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_IDX_S 8
+#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_IDX_M MAKEMASK(0x7, 8)
+#define GL_ACLEXT_CTLTBL_L2ADDR_AUTO_INC_S 31
+#define GL_ACLEXT_CTLTBL_L2ADDR_AUTO_INC_M BIT(31)
+#define GL_ACLEXT_CTLTBL_L2DATA(_i) (0x00210090 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_CTLTBL_L2DATA_MAX_INDEX 2
+#define GL_ACLEXT_CTLTBL_L2DATA_DATA_S 0
+#define GL_ACLEXT_CTLTBL_L2DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_DFLT_L2PRFL(_i) (0x00210138 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_DFLT_L2PRFL_MAX_INDEX 2
+#define GL_ACLEXT_DFLT_L2PRFL_DFLT_PRFL_S 0
+#define GL_ACLEXT_DFLT_L2PRFL_DFLT_PRFL_M MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_DFLT_L2PRFL_ACL(_i) (0x00393800 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_DFLT_L2PRFL_ACL_MAX_INDEX 2
+#define GL_ACLEXT_DFLT_L2PRFL_ACL_DFLT_PRFL_S 0
+#define GL_ACLEXT_DFLT_L2PRFL_ACL_DFLT_PRFL_M MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_FLGS_L1SEL0_1(_i) (0x0021006C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FLGS_L1SEL0_1_MAX_INDEX 2
+#define GL_ACLEXT_FLGS_L1SEL0_1_FLS0_S 0
+#define GL_ACLEXT_FLGS_L1SEL0_1_FLS0_M MAKEMASK(0x1FF, 0)
+#define GL_ACLEXT_FLGS_L1SEL0_1_FLS1_S 16
+#define GL_ACLEXT_FLGS_L1SEL0_1_FLS1_M MAKEMASK(0x1FF, 16)
+#define GL_ACLEXT_FLGS_L1SEL2_3(_i) (0x00210078 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FLGS_L1SEL2_3_MAX_INDEX 2
+#define GL_ACLEXT_FLGS_L1SEL2_3_FLS2_S 0
+#define GL_ACLEXT_FLGS_L1SEL2_3_FLS2_M MAKEMASK(0x1FF, 0)
+#define GL_ACLEXT_FLGS_L1SEL2_3_FLS3_S 16
+#define GL_ACLEXT_FLGS_L1SEL2_3_FLS3_M MAKEMASK(0x1FF, 16)
+#define GL_ACLEXT_FLGS_L1TBL(_i) (0x00210060 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FLGS_L1TBL_MAX_INDEX 2
+#define GL_ACLEXT_FLGS_L1TBL_LSB_S 0
+#define GL_ACLEXT_FLGS_L1TBL_LSB_M MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_FLGS_L1TBL_MSB_S 16
+#define GL_ACLEXT_FLGS_L1TBL_MSB_M MAKEMASK(0xFFFF, 16)
+#define GL_ACLEXT_FORCE_L1CDID(_i) (0x00210018 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FORCE_L1CDID_MAX_INDEX 2
+#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_S 0
+#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_M MAKEMASK(0xF, 0)
+#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_EN_S 31
+#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_EN_M BIT(31)
+#define GL_ACLEXT_FORCE_PID(_i) (0x00210000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FORCE_PID_MAX_INDEX 2
+#define GL_ACLEXT_FORCE_PID_STATIC_PID_S 0
+#define GL_ACLEXT_FORCE_PID_STATIC_PID_M MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_FORCE_PID_STATIC_PID_EN_S 31
+#define GL_ACLEXT_FORCE_PID_STATIC_PID_EN_M BIT(31)
+#define GL_ACLEXT_K2N_L2ADDR(_i) (0x00210144 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_K2N_L2ADDR_MAX_INDEX 2
+#define GL_ACLEXT_K2N_L2ADDR_LINE_IDX_S 0
+#define GL_ACLEXT_K2N_L2ADDR_LINE_IDX_M MAKEMASK(0x7F, 0)
+#define GL_ACLEXT_K2N_L2ADDR_AUTO_INC_S 31
+#define GL_ACLEXT_K2N_L2ADDR_AUTO_INC_M BIT(31)
+#define GL_ACLEXT_K2N_L2DATA(_i) (0x00210150 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_K2N_L2DATA_MAX_INDEX 2
+#define GL_ACLEXT_K2N_L2DATA_DATA0_S 0
+#define GL_ACLEXT_K2N_L2DATA_DATA0_M MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_K2N_L2DATA_DATA1_S 8
+#define GL_ACLEXT_K2N_L2DATA_DATA1_M MAKEMASK(0xFF, 8)
+#define GL_ACLEXT_K2N_L2DATA_DATA2_S 16
+#define GL_ACLEXT_K2N_L2DATA_DATA2_M MAKEMASK(0xFF, 16)
+#define GL_ACLEXT_K2N_L2DATA_DATA3_S 24
+#define GL_ACLEXT_K2N_L2DATA_DATA3_M MAKEMASK(0xFF, 24)
+#define GL_ACLEXT_L2_PMASK0(_i) (0x002100FC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2_PMASK0_MAX_INDEX 2
+#define GL_ACLEXT_L2_PMASK0_BITMASK_S 0
+#define GL_ACLEXT_L2_PMASK0_BITMASK_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_L2_PMASK1(_i) (0x00210108 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2_PMASK1_MAX_INDEX 2
+#define GL_ACLEXT_L2_PMASK1_BITMASK_S 0
+#define GL_ACLEXT_L2_PMASK1_BITMASK_M MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_L2_TMASK0(_i) (0x00210498 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2_TMASK0_MAX_INDEX 2
+#define GL_ACLEXT_L2_TMASK0_BITMASK_S 0
+#define GL_ACLEXT_L2_TMASK0_BITMASK_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_L2_TMASK1(_i) (0x002104A4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2_TMASK1_MAX_INDEX 2
+#define GL_ACLEXT_L2_TMASK1_BITMASK_S 0
+#define GL_ACLEXT_L2_TMASK1_BITMASK_M MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_L2BMP0_3(_i) (0x002100A8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2BMP0_3_MAX_INDEX 2
+#define GL_ACLEXT_L2BMP0_3_BMP0_S 0
+#define GL_ACLEXT_L2BMP0_3_BMP0_M MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_L2BMP0_3_BMP1_S 8
+#define GL_ACLEXT_L2BMP0_3_BMP1_M MAKEMASK(0xFF, 8)
+#define GL_ACLEXT_L2BMP0_3_BMP2_S 16
+#define GL_ACLEXT_L2BMP0_3_BMP2_M MAKEMASK(0xFF, 16)
+#define GL_ACLEXT_L2BMP0_3_BMP3_S 24
+#define GL_ACLEXT_L2BMP0_3_BMP3_M MAKEMASK(0xFF, 24)
+#define GL_ACLEXT_L2BMP4_7(_i) (0x002100B4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2BMP4_7_MAX_INDEX 2
+#define GL_ACLEXT_L2BMP4_7_BMP4_S 0
+#define GL_ACLEXT_L2BMP4_7_BMP4_M MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_L2BMP4_7_BMP5_S 8
+#define GL_ACLEXT_L2BMP4_7_BMP5_M MAKEMASK(0xFF, 8)
+#define GL_ACLEXT_L2BMP4_7_BMP6_S 16
+#define GL_ACLEXT_L2BMP4_7_BMP6_M MAKEMASK(0xFF, 16)
+#define GL_ACLEXT_L2BMP4_7_BMP7_S 24
+#define GL_ACLEXT_L2BMP4_7_BMP7_M MAKEMASK(0xFF, 24)
+#define GL_ACLEXT_L2PRTMOD(_i) (0x0021009C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2PRTMOD_MAX_INDEX 2
+#define GL_ACLEXT_L2PRTMOD_XLT1_S 0
+#define GL_ACLEXT_L2PRTMOD_XLT1_M MAKEMASK(0x3, 0)
+#define GL_ACLEXT_L2PRTMOD_XLT2_S 8
+#define GL_ACLEXT_L2PRTMOD_XLT2_M MAKEMASK(0x3, 8)
+#define GL_ACLEXT_N2N_L2ADDR(_i) (0x0021015C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_N2N_L2ADDR_MAX_INDEX 2
+#define GL_ACLEXT_N2N_L2ADDR_LINE_IDX_S 0
+#define GL_ACLEXT_N2N_L2ADDR_LINE_IDX_M MAKEMASK(0x3F, 0)
+#define GL_ACLEXT_N2N_L2ADDR_AUTO_INC_S 31
+#define GL_ACLEXT_N2N_L2ADDR_AUTO_INC_M BIT(31)
+#define GL_ACLEXT_N2N_L2DATA(_i) (0x00210168 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_N2N_L2DATA_MAX_INDEX 2
+#define GL_ACLEXT_N2N_L2DATA_DATA0_S 0
+#define GL_ACLEXT_N2N_L2DATA_DATA0_M MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_N2N_L2DATA_DATA1_S 8
+#define GL_ACLEXT_N2N_L2DATA_DATA1_M MAKEMASK(0xFF, 8)
+#define GL_ACLEXT_N2N_L2DATA_DATA2_S 16
+#define GL_ACLEXT_N2N_L2DATA_DATA2_M MAKEMASK(0xFF, 16)
+#define GL_ACLEXT_N2N_L2DATA_DATA3_S 24
+#define GL_ACLEXT_N2N_L2DATA_DATA3_M MAKEMASK(0xFF, 24)
+#define GL_ACLEXT_P2P_L1ADDR(_i) (0x00210024 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_P2P_L1ADDR_MAX_INDEX 2
+#define GL_ACLEXT_P2P_L1ADDR_LINE_IDX_S 0
+#define GL_ACLEXT_P2P_L1ADDR_LINE_IDX_M BIT(0)
+#define GL_ACLEXT_P2P_L1ADDR_AUTO_INC_S 31
+#define GL_ACLEXT_P2P_L1ADDR_AUTO_INC_M BIT(31)
+#define GL_ACLEXT_P2P_L1DATA(_i) (0x00210030 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_P2P_L1DATA_MAX_INDEX 2
+#define GL_ACLEXT_P2P_L1DATA_DATA_S 0
+#define GL_ACLEXT_P2P_L1DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_PID_L2GKTYPE(_i) (0x002100F0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_PID_L2GKTYPE_MAX_INDEX 2
+#define GL_ACLEXT_PID_L2GKTYPE_PID_GKTYPE_S 0
+#define GL_ACLEXT_PID_L2GKTYPE_PID_GKTYPE_M MAKEMASK(0x3, 0)
+#define GL_ACLEXT_PLVL_SEL(_i) (0x0021000C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_PLVL_SEL_MAX_INDEX 2
+#define GL_ACLEXT_PLVL_SEL_PLVL_SEL_S 0
+#define GL_ACLEXT_PLVL_SEL_PLVL_SEL_M BIT(0)
+#define GL_ACLEXT_TCAM_L2ADDR(_i) (0x00210114 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_TCAM_L2ADDR_MAX_INDEX 2
+#define GL_ACLEXT_TCAM_L2ADDR_LINE_IDX_S 0
+#define GL_ACLEXT_TCAM_L2ADDR_LINE_IDX_M MAKEMASK(0x3FF, 0)
+#define GL_ACLEXT_TCAM_L2ADDR_AUTO_INC_S 31
+#define GL_ACLEXT_TCAM_L2ADDR_AUTO_INC_M BIT(31)
+#define GL_ACLEXT_TCAM_L2DATALSB(_i) (0x00210120 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_TCAM_L2DATALSB_MAX_INDEX 2
+#define GL_ACLEXT_TCAM_L2DATALSB_DATALSB_S 0
+#define GL_ACLEXT_TCAM_L2DATALSB_DATALSB_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_TCAM_L2DATAMSB(_i) (0x0021012C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_TCAM_L2DATAMSB_MAX_INDEX 2
+#define GL_ACLEXT_TCAM_L2DATAMSB_DATAMSB_S 0
+#define GL_ACLEXT_TCAM_L2DATAMSB_DATAMSB_M MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_XLT0_L1ADDR(_i) (0x0021003C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT0_L1ADDR_MAX_INDEX 2
+#define GL_ACLEXT_XLT0_L1ADDR_LINE_IDX_S 0
+#define GL_ACLEXT_XLT0_L1ADDR_LINE_IDX_M MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_XLT0_L1ADDR_AUTO_INC_S 31
+#define GL_ACLEXT_XLT0_L1ADDR_AUTO_INC_M BIT(31)
+#define GL_ACLEXT_XLT0_L1DATA(_i) (0x00210048 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT0_L1DATA_MAX_INDEX 2
+#define GL_ACLEXT_XLT0_L1DATA_DATA_S 0
+#define GL_ACLEXT_XLT0_L1DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_XLT1_L2ADDR(_i) (0x002100C0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT1_L2ADDR_MAX_INDEX 2
+#define GL_ACLEXT_XLT1_L2ADDR_LINE_IDX_S 0
+#define GL_ACLEXT_XLT1_L2ADDR_LINE_IDX_M MAKEMASK(0x7FF, 0)
+#define GL_ACLEXT_XLT1_L2ADDR_AUTO_INC_S 31
+#define GL_ACLEXT_XLT1_L2ADDR_AUTO_INC_M BIT(31)
+#define GL_ACLEXT_XLT1_L2DATA(_i) (0x002100CC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT1_L2DATA_MAX_INDEX 2
+#define GL_ACLEXT_XLT1_L2DATA_DATA_S 0
+#define GL_ACLEXT_XLT1_L2DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_XLT2_L2ADDR(_i) (0x002100D8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT2_L2ADDR_MAX_INDEX 2
+#define GL_ACLEXT_XLT2_L2ADDR_LINE_IDX_S 0
+#define GL_ACLEXT_XLT2_L2ADDR_LINE_IDX_M MAKEMASK(0x1FF, 0)
+#define GL_ACLEXT_XLT2_L2ADDR_AUTO_INC_S 31
+#define GL_ACLEXT_XLT2_L2ADDR_AUTO_INC_M BIT(31)
+#define GL_ACLEXT_XLT2_L2DATA(_i) (0x002100E4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT2_L2DATA_MAX_INDEX 2
+#define GL_ACLEXT_XLT2_L2DATA_DATA_S 0
+#define GL_ACLEXT_XLT2_L2DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
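+/* Illustrative note, not part of the autogenerated register map: every
+ * field in this file is described by a shift macro (_S) and a mask macro
+ * (_M), where MAKEMASK(m, s) is expected to expand to ((m) << (s)) in
+ * ice_osdep.h.  A field would typically be extracted as, e.g.:
+ *
+ *	line_idx = (rd32(hw, GL_ACLEXT_XLT2_L2ADDR(0)) &
+ *		    GL_ACLEXT_XLT2_L2ADDR_LINE_IDX_M) >>
+ *		   GL_ACLEXT_XLT2_L2ADDR_LINE_IDX_S;
+ *
+ * rd32() is assumed to be the register-read helper from ice_osdep.h.
+ */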
+#define GL_PREEXT_CDMD_L1SEL(_i) (0x0020F054 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_CDMD_L1SEL_MAX_INDEX 2
+#define GL_PREEXT_CDMD_L1SEL_RX_SEL_S 0
+#define GL_PREEXT_CDMD_L1SEL_RX_SEL_M MAKEMASK(0x1F, 0)
+#define GL_PREEXT_CDMD_L1SEL_TX_SEL_S 8
+#define GL_PREEXT_CDMD_L1SEL_TX_SEL_M MAKEMASK(0x1F, 8)
+#define GL_PREEXT_CDMD_L1SEL_AUX0_SEL_S 16
+#define GL_PREEXT_CDMD_L1SEL_AUX0_SEL_M MAKEMASK(0x1F, 16)
+#define GL_PREEXT_CDMD_L1SEL_AUX1_SEL_S 24
+#define GL_PREEXT_CDMD_L1SEL_AUX1_SEL_M MAKEMASK(0x1F, 24)
+#define GL_PREEXT_CDMD_L1SEL_BIDIR_ENA_S 30
+#define GL_PREEXT_CDMD_L1SEL_BIDIR_ENA_M MAKEMASK(0x3, 30)
+#define GL_PREEXT_CTLTBL_L2ADDR(_i) (0x0020F084 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_CTLTBL_L2ADDR_MAX_INDEX 2
+#define GL_PREEXT_CTLTBL_L2ADDR_LINE_OFF_S 0
+#define GL_PREEXT_CTLTBL_L2ADDR_LINE_OFF_M MAKEMASK(0x7, 0)
+#define GL_PREEXT_CTLTBL_L2ADDR_LINE_IDX_S 8
+#define GL_PREEXT_CTLTBL_L2ADDR_LINE_IDX_M MAKEMASK(0x7, 8)
+#define GL_PREEXT_CTLTBL_L2ADDR_AUTO_INC_S 31
+#define GL_PREEXT_CTLTBL_L2ADDR_AUTO_INC_M BIT(31)
+#define GL_PREEXT_CTLTBL_L2DATA(_i) (0x0020F090 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_CTLTBL_L2DATA_MAX_INDEX 2
+#define GL_PREEXT_CTLTBL_L2DATA_DATA_S 0
+#define GL_PREEXT_CTLTBL_L2DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_DFLT_L2PRFL(_i) (0x0020F138 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_DFLT_L2PRFL_MAX_INDEX 2
+#define GL_PREEXT_DFLT_L2PRFL_DFLT_PRFL_S 0
+#define GL_PREEXT_DFLT_L2PRFL_DFLT_PRFL_M MAKEMASK(0xFFFF, 0)
+#define GL_PREEXT_FLGS_L1SEL0_1(_i) (0x0020F06C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FLGS_L1SEL0_1_MAX_INDEX 2
+#define GL_PREEXT_FLGS_L1SEL0_1_FLS0_S 0
+#define GL_PREEXT_FLGS_L1SEL0_1_FLS0_M MAKEMASK(0x1FF, 0)
+#define GL_PREEXT_FLGS_L1SEL0_1_FLS1_S 16
+#define GL_PREEXT_FLGS_L1SEL0_1_FLS1_M MAKEMASK(0x1FF, 16)
+#define GL_PREEXT_FLGS_L1SEL2_3(_i) (0x0020F078 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FLGS_L1SEL2_3_MAX_INDEX 2
+#define GL_PREEXT_FLGS_L1SEL2_3_FLS2_S 0
+#define GL_PREEXT_FLGS_L1SEL2_3_FLS2_M MAKEMASK(0x1FF, 0)
+#define GL_PREEXT_FLGS_L1SEL2_3_FLS3_S 16
+#define GL_PREEXT_FLGS_L1SEL2_3_FLS3_M MAKEMASK(0x1FF, 16)
+#define GL_PREEXT_FLGS_L1TBL(_i) (0x0020F060 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FLGS_L1TBL_MAX_INDEX 2
+#define GL_PREEXT_FLGS_L1TBL_LSB_S 0
+#define GL_PREEXT_FLGS_L1TBL_LSB_M MAKEMASK(0xFFFF, 0)
+#define GL_PREEXT_FLGS_L1TBL_MSB_S 16
+#define GL_PREEXT_FLGS_L1TBL_MSB_M MAKEMASK(0xFFFF, 16)
+#define GL_PREEXT_FORCE_L1CDID(_i) (0x0020F018 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FORCE_L1CDID_MAX_INDEX 2
+#define GL_PREEXT_FORCE_L1CDID_STATIC_CDID_S 0
+#define GL_PREEXT_FORCE_L1CDID_STATIC_CDID_M MAKEMASK(0xF, 0)
+#define GL_PREEXT_FORCE_L1CDID_STATIC_CDID_EN_S 31
+#define GL_PREEXT_FORCE_L1CDID_STATIC_CDID_EN_M BIT(31)
+#define GL_PREEXT_FORCE_PID(_i) (0x0020F000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FORCE_PID_MAX_INDEX 2
+#define GL_PREEXT_FORCE_PID_STATIC_PID_S 0
+#define GL_PREEXT_FORCE_PID_STATIC_PID_M MAKEMASK(0xFFFF, 0)
+#define GL_PREEXT_FORCE_PID_STATIC_PID_EN_S 31
+#define GL_PREEXT_FORCE_PID_STATIC_PID_EN_M BIT(31)
+#define GL_PREEXT_K2N_L2ADDR(_i) (0x0020F144 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_K2N_L2ADDR_MAX_INDEX 2
+#define GL_PREEXT_K2N_L2ADDR_LINE_IDX_S 0
+#define GL_PREEXT_K2N_L2ADDR_LINE_IDX_M MAKEMASK(0x7F, 0)
+#define GL_PREEXT_K2N_L2ADDR_AUTO_INC_S 31
+#define GL_PREEXT_K2N_L2ADDR_AUTO_INC_M BIT(31)
+#define GL_PREEXT_K2N_L2DATA(_i) (0x0020F150 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_K2N_L2DATA_MAX_INDEX 2
+#define GL_PREEXT_K2N_L2DATA_DATA0_S 0
+#define GL_PREEXT_K2N_L2DATA_DATA0_M MAKEMASK(0xFF, 0)
+#define GL_PREEXT_K2N_L2DATA_DATA1_S 8
+#define GL_PREEXT_K2N_L2DATA_DATA1_M MAKEMASK(0xFF, 8)
+#define GL_PREEXT_K2N_L2DATA_DATA2_S 16
+#define GL_PREEXT_K2N_L2DATA_DATA2_M MAKEMASK(0xFF, 16)
+#define GL_PREEXT_K2N_L2DATA_DATA3_S 24
+#define GL_PREEXT_K2N_L2DATA_DATA3_M MAKEMASK(0xFF, 24)
+#define GL_PREEXT_L2_PMASK0(_i) (0x0020F0FC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2_PMASK0_MAX_INDEX 2
+#define GL_PREEXT_L2_PMASK0_BITMASK_S 0
+#define GL_PREEXT_L2_PMASK0_BITMASK_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_L2_PMASK1(_i) (0x0020F108 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2_PMASK1_MAX_INDEX 2
+#define GL_PREEXT_L2_PMASK1_BITMASK_S 0
+#define GL_PREEXT_L2_PMASK1_BITMASK_M MAKEMASK(0xFFFF, 0)
+#define GL_PREEXT_L2_TMASK0(_i) (0x0020F498 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2_TMASK0_MAX_INDEX 2
+#define GL_PREEXT_L2_TMASK0_BITMASK_S 0
+#define GL_PREEXT_L2_TMASK0_BITMASK_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_L2_TMASK1(_i) (0x0020F4A4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2_TMASK1_MAX_INDEX 2
+#define GL_PREEXT_L2_TMASK1_BITMASK_S 0
+#define GL_PREEXT_L2_TMASK1_BITMASK_M MAKEMASK(0xFF, 0)
+#define GL_PREEXT_L2BMP0_3(_i) (0x0020F0A8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2BMP0_3_MAX_INDEX 2
+#define GL_PREEXT_L2BMP0_3_BMP0_S 0
+#define GL_PREEXT_L2BMP0_3_BMP0_M MAKEMASK(0xFF, 0)
+#define GL_PREEXT_L2BMP0_3_BMP1_S 8
+#define GL_PREEXT_L2BMP0_3_BMP1_M MAKEMASK(0xFF, 8)
+#define GL_PREEXT_L2BMP0_3_BMP2_S 16
+#define GL_PREEXT_L2BMP0_3_BMP2_M MAKEMASK(0xFF, 16)
+#define GL_PREEXT_L2BMP0_3_BMP3_S 24
+#define GL_PREEXT_L2BMP0_3_BMP3_M MAKEMASK(0xFF, 24)
+#define GL_PREEXT_L2BMP4_7(_i) (0x0020F0B4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2BMP4_7_MAX_INDEX 2
+#define GL_PREEXT_L2BMP4_7_BMP4_S 0
+#define GL_PREEXT_L2BMP4_7_BMP4_M MAKEMASK(0xFF, 0)
+#define GL_PREEXT_L2BMP4_7_BMP5_S 8
+#define GL_PREEXT_L2BMP4_7_BMP5_M MAKEMASK(0xFF, 8)
+#define GL_PREEXT_L2BMP4_7_BMP6_S 16
+#define GL_PREEXT_L2BMP4_7_BMP6_M MAKEMASK(0xFF, 16)
+#define GL_PREEXT_L2BMP4_7_BMP7_S 24
+#define GL_PREEXT_L2BMP4_7_BMP7_M MAKEMASK(0xFF, 24)
+#define GL_PREEXT_L2PRTMOD(_i) (0x0020F09C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2PRTMOD_MAX_INDEX 2
+#define GL_PREEXT_L2PRTMOD_XLT1_S 0
+#define GL_PREEXT_L2PRTMOD_XLT1_M MAKEMASK(0x3, 0)
+#define GL_PREEXT_L2PRTMOD_XLT2_S 8
+#define GL_PREEXT_L2PRTMOD_XLT2_M MAKEMASK(0x3, 8)
+#define GL_PREEXT_N2N_L2ADDR(_i) (0x0020F15C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_N2N_L2ADDR_MAX_INDEX 2
+#define GL_PREEXT_N2N_L2ADDR_LINE_IDX_S 0
+#define GL_PREEXT_N2N_L2ADDR_LINE_IDX_M MAKEMASK(0x3F, 0)
+#define GL_PREEXT_N2N_L2ADDR_AUTO_INC_S 31
+#define GL_PREEXT_N2N_L2ADDR_AUTO_INC_M BIT(31)
+#define GL_PREEXT_N2N_L2DATA(_i) (0x0020F168 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_N2N_L2DATA_MAX_INDEX 2
+#define GL_PREEXT_N2N_L2DATA_DATA0_S 0
+#define GL_PREEXT_N2N_L2DATA_DATA0_M MAKEMASK(0xFF, 0)
+#define GL_PREEXT_N2N_L2DATA_DATA1_S 8
+#define GL_PREEXT_N2N_L2DATA_DATA1_M MAKEMASK(0xFF, 8)
+#define GL_PREEXT_N2N_L2DATA_DATA2_S 16
+#define GL_PREEXT_N2N_L2DATA_DATA2_M MAKEMASK(0xFF, 16)
+#define GL_PREEXT_N2N_L2DATA_DATA3_S 24
+#define GL_PREEXT_N2N_L2DATA_DATA3_M MAKEMASK(0xFF, 24)
+#define GL_PREEXT_P2P_L1ADDR(_i) (0x0020F024 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_P2P_L1ADDR_MAX_INDEX 2
+#define GL_PREEXT_P2P_L1ADDR_LINE_IDX_S 0
+#define GL_PREEXT_P2P_L1ADDR_LINE_IDX_M BIT(0)
+#define GL_PREEXT_P2P_L1ADDR_AUTO_INC_S 31
+#define GL_PREEXT_P2P_L1ADDR_AUTO_INC_M BIT(31)
+#define GL_PREEXT_P2P_L1DATA(_i) (0x0020F030 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_P2P_L1DATA_MAX_INDEX 2
+#define GL_PREEXT_P2P_L1DATA_DATA_S 0
+#define GL_PREEXT_P2P_L1DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_PID_L2GKTYPE(_i) (0x0020F0F0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_PID_L2GKTYPE_MAX_INDEX 2
+#define GL_PREEXT_PID_L2GKTYPE_PID_GKTYPE_S 0
+#define GL_PREEXT_PID_L2GKTYPE_PID_GKTYPE_M MAKEMASK(0x3, 0)
+#define GL_PREEXT_PLVL_SEL(_i) (0x0020F00C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_PLVL_SEL_MAX_INDEX 2
+#define GL_PREEXT_PLVL_SEL_PLVL_SEL_S 0
+#define GL_PREEXT_PLVL_SEL_PLVL_SEL_M BIT(0)
+#define GL_PREEXT_TCAM_L2ADDR(_i) (0x0020F114 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_TCAM_L2ADDR_MAX_INDEX 2
+#define GL_PREEXT_TCAM_L2ADDR_LINE_IDX_S 0
+#define GL_PREEXT_TCAM_L2ADDR_LINE_IDX_M MAKEMASK(0x3FF, 0)
+#define GL_PREEXT_TCAM_L2ADDR_AUTO_INC_S 31
+#define GL_PREEXT_TCAM_L2ADDR_AUTO_INC_M BIT(31)
+#define GL_PREEXT_TCAM_L2DATALSB(_i) (0x0020F120 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_TCAM_L2DATALSB_MAX_INDEX 2
+#define GL_PREEXT_TCAM_L2DATALSB_DATALSB_S 0
+#define GL_PREEXT_TCAM_L2DATALSB_DATALSB_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_TCAM_L2DATAMSB(_i) (0x0020F12C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_TCAM_L2DATAMSB_MAX_INDEX 2
+#define GL_PREEXT_TCAM_L2DATAMSB_DATAMSB_S 0
+#define GL_PREEXT_TCAM_L2DATAMSB_DATAMSB_M MAKEMASK(0xFF, 0)
+#define GL_PREEXT_XLT0_L1ADDR(_i) (0x0020F03C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT0_L1ADDR_MAX_INDEX 2
+#define GL_PREEXT_XLT0_L1ADDR_LINE_IDX_S 0
+#define GL_PREEXT_XLT0_L1ADDR_LINE_IDX_M MAKEMASK(0xFF, 0)
+#define GL_PREEXT_XLT0_L1ADDR_AUTO_INC_S 31
+#define GL_PREEXT_XLT0_L1ADDR_AUTO_INC_M BIT(31)
+#define GL_PREEXT_XLT0_L1DATA(_i) (0x0020F048 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT0_L1DATA_MAX_INDEX 2
+#define GL_PREEXT_XLT0_L1DATA_DATA_S 0
+#define GL_PREEXT_XLT0_L1DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_XLT1_L2ADDR(_i) (0x0020F0C0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT1_L2ADDR_MAX_INDEX 2
+#define GL_PREEXT_XLT1_L2ADDR_LINE_IDX_S 0
+#define GL_PREEXT_XLT1_L2ADDR_LINE_IDX_M MAKEMASK(0x7FF, 0)
+#define GL_PREEXT_XLT1_L2ADDR_AUTO_INC_S 31
+#define GL_PREEXT_XLT1_L2ADDR_AUTO_INC_M BIT(31)
+#define GL_PREEXT_XLT1_L2DATA(_i) (0x0020F0CC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT1_L2DATA_MAX_INDEX 2
+#define GL_PREEXT_XLT1_L2DATA_DATA_S 0
+#define GL_PREEXT_XLT1_L2DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_XLT2_L2ADDR(_i) (0x0020F0D8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT2_L2ADDR_MAX_INDEX 2
+#define GL_PREEXT_XLT2_L2ADDR_LINE_IDX_S 0
+#define GL_PREEXT_XLT2_L2ADDR_LINE_IDX_M MAKEMASK(0x1FF, 0)
+#define GL_PREEXT_XLT2_L2ADDR_AUTO_INC_S 31
+#define GL_PREEXT_XLT2_L2ADDR_AUTO_INC_M BIT(31)
+#define GL_PREEXT_XLT2_L2DATA(_i) (0x0020F0E4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT2_L2DATA_MAX_INDEX 2
+#define GL_PREEXT_XLT2_L2DATA_DATA_S 0
+#define GL_PREEXT_XLT2_L2DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_CDMD_L1SEL(_i) (0x0020E054 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_CDMD_L1SEL_MAX_INDEX 2
+#define GL_PSTEXT_CDMD_L1SEL_RX_SEL_S 0
+#define GL_PSTEXT_CDMD_L1SEL_RX_SEL_M MAKEMASK(0x1F, 0)
+#define GL_PSTEXT_CDMD_L1SEL_TX_SEL_S 8
+#define GL_PSTEXT_CDMD_L1SEL_TX_SEL_M MAKEMASK(0x1F, 8)
+#define GL_PSTEXT_CDMD_L1SEL_AUX0_SEL_S 16
+#define GL_PSTEXT_CDMD_L1SEL_AUX0_SEL_M MAKEMASK(0x1F, 16)
+#define GL_PSTEXT_CDMD_L1SEL_AUX1_SEL_S 24
+#define GL_PSTEXT_CDMD_L1SEL_AUX1_SEL_M MAKEMASK(0x1F, 24)
+#define GL_PSTEXT_CDMD_L1SEL_BIDIR_ENA_S 30
+#define GL_PSTEXT_CDMD_L1SEL_BIDIR_ENA_M MAKEMASK(0x3, 30)
+#define GL_PSTEXT_CTLTBL_L2ADDR(_i) (0x0020E084 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_CTLTBL_L2ADDR_MAX_INDEX 2
+#define GL_PSTEXT_CTLTBL_L2ADDR_LINE_OFF_S 0
+#define GL_PSTEXT_CTLTBL_L2ADDR_LINE_OFF_M MAKEMASK(0x7, 0)
+#define GL_PSTEXT_CTLTBL_L2ADDR_LINE_IDX_S 8
+#define GL_PSTEXT_CTLTBL_L2ADDR_LINE_IDX_M MAKEMASK(0x7, 8)
+#define GL_PSTEXT_CTLTBL_L2ADDR_AUTO_INC_S 31
+#define GL_PSTEXT_CTLTBL_L2ADDR_AUTO_INC_M BIT(31)
+#define GL_PSTEXT_CTLTBL_L2DATA(_i) (0x0020E090 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_CTLTBL_L2DATA_MAX_INDEX 2
+#define GL_PSTEXT_CTLTBL_L2DATA_DATA_S 0
+#define GL_PSTEXT_CTLTBL_L2DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_DFLT_L2PRFL(_i) (0x0020E138 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_DFLT_L2PRFL_MAX_INDEX 2
+#define GL_PSTEXT_DFLT_L2PRFL_DFLT_PRFL_S 0
+#define GL_PSTEXT_DFLT_L2PRFL_DFLT_PRFL_M MAKEMASK(0xFFFF, 0)
+#define GL_PSTEXT_FL15_BMPLSB(_i) (0x0020E480 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FL15_BMPLSB_MAX_INDEX 2
+#define GL_PSTEXT_FL15_BMPLSB_BMPLSB_S 0
+#define GL_PSTEXT_FL15_BMPLSB_BMPLSB_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_FL15_BMPMSB(_i) (0x0020E48C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FL15_BMPMSB_MAX_INDEX 2
+#define GL_PSTEXT_FL15_BMPMSB_BMPMSB_S 0
+#define GL_PSTEXT_FL15_BMPMSB_BMPMSB_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_FLGS_L1SEL0_1(_i) (0x0020E06C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FLGS_L1SEL0_1_MAX_INDEX 2
+#define GL_PSTEXT_FLGS_L1SEL0_1_FLS0_S 0
+#define GL_PSTEXT_FLGS_L1SEL0_1_FLS0_M MAKEMASK(0x1FF, 0)
+#define GL_PSTEXT_FLGS_L1SEL0_1_FLS1_S 16
+#define GL_PSTEXT_FLGS_L1SEL0_1_FLS1_M MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_FLGS_L1SEL2_3(_i) (0x0020E078 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FLGS_L1SEL2_3_MAX_INDEX 2
+#define GL_PSTEXT_FLGS_L1SEL2_3_FLS2_S 0
+#define GL_PSTEXT_FLGS_L1SEL2_3_FLS2_M MAKEMASK(0x1FF, 0)
+#define GL_PSTEXT_FLGS_L1SEL2_3_FLS3_S 16
+#define GL_PSTEXT_FLGS_L1SEL2_3_FLS3_M MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_FLGS_L1TBL(_i) (0x0020E060 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FLGS_L1TBL_MAX_INDEX 2
+#define GL_PSTEXT_FLGS_L1TBL_LSB_S 0
+#define GL_PSTEXT_FLGS_L1TBL_LSB_M MAKEMASK(0xFFFF, 0)
+#define GL_PSTEXT_FLGS_L1TBL_MSB_S 16
+#define GL_PSTEXT_FLGS_L1TBL_MSB_M MAKEMASK(0xFFFF, 16)
+#define GL_PSTEXT_FORCE_L1CDID(_i) (0x0020E018 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FORCE_L1CDID_MAX_INDEX 2
+#define GL_PSTEXT_FORCE_L1CDID_STATIC_CDID_S 0
+#define GL_PSTEXT_FORCE_L1CDID_STATIC_CDID_M MAKEMASK(0xF, 0)
+#define GL_PSTEXT_FORCE_L1CDID_STATIC_CDID_EN_S 31
+#define GL_PSTEXT_FORCE_L1CDID_STATIC_CDID_EN_M BIT(31)
+#define GL_PSTEXT_FORCE_PID(_i) (0x0020E000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FORCE_PID_MAX_INDEX 2
+#define GL_PSTEXT_FORCE_PID_STATIC_PID_S 0
+#define GL_PSTEXT_FORCE_PID_STATIC_PID_M MAKEMASK(0xFFFF, 0)
+#define GL_PSTEXT_FORCE_PID_STATIC_PID_EN_S 31
+#define GL_PSTEXT_FORCE_PID_STATIC_PID_EN_M BIT(31)
+#define GL_PSTEXT_K2N_L2ADDR(_i) (0x0020E144 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_K2N_L2ADDR_MAX_INDEX 2
+#define GL_PSTEXT_K2N_L2ADDR_LINE_IDX_S 0
+#define GL_PSTEXT_K2N_L2ADDR_LINE_IDX_M MAKEMASK(0x7F, 0)
+#define GL_PSTEXT_K2N_L2ADDR_AUTO_INC_S 31
+#define GL_PSTEXT_K2N_L2ADDR_AUTO_INC_M BIT(31)
+#define GL_PSTEXT_K2N_L2DATA(_i) (0x0020E150 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_K2N_L2DATA_MAX_INDEX 2
+#define GL_PSTEXT_K2N_L2DATA_DATA0_S 0
+#define GL_PSTEXT_K2N_L2DATA_DATA0_M MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_K2N_L2DATA_DATA1_S 8
+#define GL_PSTEXT_K2N_L2DATA_DATA1_M MAKEMASK(0xFF, 8)
+#define GL_PSTEXT_K2N_L2DATA_DATA2_S 16
+#define GL_PSTEXT_K2N_L2DATA_DATA2_M MAKEMASK(0xFF, 16)
+#define GL_PSTEXT_K2N_L2DATA_DATA3_S 24
+#define GL_PSTEXT_K2N_L2DATA_DATA3_M MAKEMASK(0xFF, 24)
+#define GL_PSTEXT_L2_PMASK0(_i) (0x0020E0FC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2_PMASK0_MAX_INDEX 2
+#define GL_PSTEXT_L2_PMASK0_BITMASK_S 0
+#define GL_PSTEXT_L2_PMASK0_BITMASK_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_L2_PMASK1(_i) (0x0020E108 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2_PMASK1_MAX_INDEX 2
+#define GL_PSTEXT_L2_PMASK1_BITMASK_S 0
+#define GL_PSTEXT_L2_PMASK1_BITMASK_M MAKEMASK(0xFFFF, 0)
+#define GL_PSTEXT_L2_TMASK0(_i) (0x0020E498 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2_TMASK0_MAX_INDEX 2
+#define GL_PSTEXT_L2_TMASK0_BITMASK_S 0
+#define GL_PSTEXT_L2_TMASK0_BITMASK_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_L2_TMASK1(_i) (0x0020E4A4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2_TMASK1_MAX_INDEX 2
+#define GL_PSTEXT_L2_TMASK1_BITMASK_S 0
+#define GL_PSTEXT_L2_TMASK1_BITMASK_M MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_L2PRTMOD(_i) (0x0020E09C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2PRTMOD_MAX_INDEX 2
+#define GL_PSTEXT_L2PRTMOD_XLT1_S 0
+#define GL_PSTEXT_L2PRTMOD_XLT1_M MAKEMASK(0x3, 0)
+#define GL_PSTEXT_L2PRTMOD_XLT2_S 8
+#define GL_PSTEXT_L2PRTMOD_XLT2_M MAKEMASK(0x3, 8)
+#define GL_PSTEXT_N2N_L2ADDR(_i) (0x0020E15C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_N2N_L2ADDR_MAX_INDEX 2
+#define GL_PSTEXT_N2N_L2ADDR_LINE_IDX_S 0
+#define GL_PSTEXT_N2N_L2ADDR_LINE_IDX_M MAKEMASK(0x3F, 0)
+#define GL_PSTEXT_N2N_L2ADDR_AUTO_INC_S 31
+#define GL_PSTEXT_N2N_L2ADDR_AUTO_INC_M BIT(31)
+#define GL_PSTEXT_N2N_L2DATA(_i) (0x0020E168 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_N2N_L2DATA_MAX_INDEX 2
+#define GL_PSTEXT_N2N_L2DATA_DATA0_S 0
+#define GL_PSTEXT_N2N_L2DATA_DATA0_M MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_N2N_L2DATA_DATA1_S 8
+#define GL_PSTEXT_N2N_L2DATA_DATA1_M MAKEMASK(0xFF, 8)
+#define GL_PSTEXT_N2N_L2DATA_DATA2_S 16
+#define GL_PSTEXT_N2N_L2DATA_DATA2_M MAKEMASK(0xFF, 16)
+#define GL_PSTEXT_N2N_L2DATA_DATA3_S 24
+#define GL_PSTEXT_N2N_L2DATA_DATA3_M MAKEMASK(0xFF, 24)
+#define GL_PSTEXT_P2P_L1ADDR(_i) (0x0020E024 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_P2P_L1ADDR_MAX_INDEX 2
+#define GL_PSTEXT_P2P_L1ADDR_LINE_IDX_S 0
+#define GL_PSTEXT_P2P_L1ADDR_LINE_IDX_M BIT(0)
+#define GL_PSTEXT_P2P_L1ADDR_AUTO_INC_S 31
+#define GL_PSTEXT_P2P_L1ADDR_AUTO_INC_M BIT(31)
+#define GL_PSTEXT_P2P_L1DATA(_i) (0x0020E030 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_P2P_L1DATA_MAX_INDEX 2
+#define GL_PSTEXT_P2P_L1DATA_DATA_S 0
+#define GL_PSTEXT_P2P_L1DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_PID_L2GKTYPE(_i) (0x0020E0F0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PID_L2GKTYPE_MAX_INDEX 2
+#define GL_PSTEXT_PID_L2GKTYPE_PID_GKTYPE_S 0
+#define GL_PSTEXT_PID_L2GKTYPE_PID_GKTYPE_M MAKEMASK(0x3, 0)
+#define GL_PSTEXT_PLVL_SEL(_i) (0x0020E00C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PLVL_SEL_MAX_INDEX 2
+#define GL_PSTEXT_PLVL_SEL_PLVL_SEL_S 0
+#define GL_PSTEXT_PLVL_SEL_PLVL_SEL_M BIT(0)
+#define GL_PSTEXT_PRFLM_CTRL(_i) (0x0020E474 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PRFLM_CTRL_MAX_INDEX 2
+#define GL_PSTEXT_PRFLM_CTRL_PRFL_IDX_S 0
+#define GL_PSTEXT_PRFLM_CTRL_PRFL_IDX_M MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_PRFLM_CTRL_RD_REQ_S 30
+#define GL_PSTEXT_PRFLM_CTRL_RD_REQ_M BIT(30)
+#define GL_PSTEXT_PRFLM_CTRL_WR_REQ_S 31
+#define GL_PSTEXT_PRFLM_CTRL_WR_REQ_M BIT(31)
+#define GL_PSTEXT_PRFLM_DATA_0(_i) (0x0020E174 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PRFLM_DATA_0_MAX_INDEX 63
+#define GL_PSTEXT_PRFLM_DATA_0_PROT_S 0
+#define GL_PSTEXT_PRFLM_DATA_0_PROT_M MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_PRFLM_DATA_0_OFF_S 16
+#define GL_PSTEXT_PRFLM_DATA_0_OFF_M MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_PRFLM_DATA_1(_i) (0x0020E274 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PRFLM_DATA_1_MAX_INDEX 63
+#define GL_PSTEXT_PRFLM_DATA_1_PROT_S 0
+#define GL_PSTEXT_PRFLM_DATA_1_PROT_M MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_PRFLM_DATA_1_OFF_S 16
+#define GL_PSTEXT_PRFLM_DATA_1_OFF_M MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_PRFLM_DATA_2(_i) (0x0020E374 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PRFLM_DATA_2_MAX_INDEX 63
+#define GL_PSTEXT_PRFLM_DATA_2_PROT_S 0
+#define GL_PSTEXT_PRFLM_DATA_2_PROT_M MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_PRFLM_DATA_2_OFF_S 16
+#define GL_PSTEXT_PRFLM_DATA_2_OFF_M MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_TCAM_L2ADDR(_i) (0x0020E114 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_TCAM_L2ADDR_MAX_INDEX 2
+#define GL_PSTEXT_TCAM_L2ADDR_LINE_IDX_S 0
+#define GL_PSTEXT_TCAM_L2ADDR_LINE_IDX_M MAKEMASK(0x3FF, 0)
+#define GL_PSTEXT_TCAM_L2ADDR_AUTO_INC_S 31
+#define GL_PSTEXT_TCAM_L2ADDR_AUTO_INC_M BIT(31)
+#define GL_PSTEXT_TCAM_L2DATALSB(_i) (0x0020E120 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_TCAM_L2DATALSB_MAX_INDEX 2
+#define GL_PSTEXT_TCAM_L2DATALSB_DATALSB_S 0
+#define GL_PSTEXT_TCAM_L2DATALSB_DATALSB_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_TCAM_L2DATAMSB(_i) (0x0020E12C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_TCAM_L2DATAMSB_MAX_INDEX 2
+#define GL_PSTEXT_TCAM_L2DATAMSB_DATAMSB_S 0
+#define GL_PSTEXT_TCAM_L2DATAMSB_DATAMSB_M MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_XLT0_L1ADDR(_i) (0x0020E03C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT0_L1ADDR_MAX_INDEX 2
+#define GL_PSTEXT_XLT0_L1ADDR_LINE_IDX_S 0
+#define GL_PSTEXT_XLT0_L1ADDR_LINE_IDX_M MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_XLT0_L1ADDR_AUTO_INC_S 31
+#define GL_PSTEXT_XLT0_L1ADDR_AUTO_INC_M BIT(31)
+#define GL_PSTEXT_XLT0_L1DATA(_i) (0x0020E048 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT0_L1DATA_MAX_INDEX 2
+#define GL_PSTEXT_XLT0_L1DATA_DATA_S 0
+#define GL_PSTEXT_XLT0_L1DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_XLT1_L2ADDR(_i) (0x0020E0C0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT1_L2ADDR_MAX_INDEX 2
+#define GL_PSTEXT_XLT1_L2ADDR_LINE_IDX_S 0
+#define GL_PSTEXT_XLT1_L2ADDR_LINE_IDX_M MAKEMASK(0x7FF, 0)
+#define GL_PSTEXT_XLT1_L2ADDR_AUTO_INC_S 31
+#define GL_PSTEXT_XLT1_L2ADDR_AUTO_INC_M BIT(31)
+#define GL_PSTEXT_XLT1_L2DATA(_i) (0x0020E0CC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT1_L2DATA_MAX_INDEX 2
+#define GL_PSTEXT_XLT1_L2DATA_DATA_S 0
+#define GL_PSTEXT_XLT1_L2DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_XLT2_L2ADDR(_i) (0x0020E0D8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT2_L2ADDR_MAX_INDEX 2
+#define GL_PSTEXT_XLT2_L2ADDR_LINE_IDX_S 0
+#define GL_PSTEXT_XLT2_L2ADDR_LINE_IDX_M MAKEMASK(0x1FF, 0)
+#define GL_PSTEXT_XLT2_L2ADDR_AUTO_INC_S 31
+#define GL_PSTEXT_XLT2_L2ADDR_AUTO_INC_M BIT(31)
+#define GL_PSTEXT_XLT2_L2DATA(_i) (0x0020E0E4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT2_L2DATA_MAX_INDEX 2
+#define GL_PSTEXT_XLT2_L2DATA_DATA_S 0
+#define GL_PSTEXT_XLT2_L2DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLFLXP_PTYPE_TRANSLATION(_i) (0x0045C000 + ((_i) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define GLFLXP_PTYPE_TRANSLATION_MAX_INDEX 255
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_S 0
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_M MAKEMASK(0xFF, 0)
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_1_S 8
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_1_M MAKEMASK(0xFF, 8)
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_2_S 16
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_2_M MAKEMASK(0xFF, 16)
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_3_S 24
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_3_M MAKEMASK(0xFF, 24)
+#define GLFLXP_RX_CMD_LX_PROT_IDX(_i) (0x0045C400 + ((_i) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define GLFLXP_RX_CMD_LX_PROT_IDX_MAX_INDEX 255
+#define GLFLXP_RX_CMD_LX_PROT_IDX_INNER_CLOUD_OFFSET_INDEX_S 0
+#define GLFLXP_RX_CMD_LX_PROT_IDX_INNER_CLOUD_OFFSET_INDEX_M MAKEMASK(0x7, 0)
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L4_OFFSET_INDEX_S 4
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L4_OFFSET_INDEX_M MAKEMASK(0x7, 4)
+#define GLFLXP_RX_CMD_LX_PROT_IDX_PAYLOAD_OFFSET_INDEX_S 8
+#define GLFLXP_RX_CMD_LX_PROT_IDX_PAYLOAD_OFFSET_INDEX_M MAKEMASK(0x7, 8)
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L3_PROTOCOL_S 12
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L3_PROTOCOL_M MAKEMASK(0x3, 12)
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L4_PROTOCOL_S 14
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L4_PROTOCOL_M MAKEMASK(0x3, 14)
+#define GLFLXP_RX_CMD_PROTIDS(_i, _j) (0x0045A000 + ((_i) * 4 + (_j) * 1024)) /* _i=0...255, _j=0...5 */ /* Reset Source: CORER */
+#define GLFLXP_RX_CMD_PROTIDS_MAX_INDEX 255
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_S 0
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_M MAKEMASK(0xFF, 0)
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_1_S 8
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_1_M MAKEMASK(0xFF, 8)
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_2_S 16
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_2_M MAKEMASK(0xFF, 16)
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_3_S 24
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_3_M MAKEMASK(0xFF, 24)
+#define GLFLXP_RXDID_FLAGS(_i, _j) (0x0045D000 + ((_i) * 4 + (_j) * 256)) /* _i=0...63, _j=0...4 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLAGS_MAX_INDEX 63
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_S 0
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_M MAKEMASK(0x3F, 0)
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_S 8
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_M MAKEMASK(0x3F, 8)
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_S 16
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_M MAKEMASK(0x3F, 16)
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_S 24
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_M MAKEMASK(0x3F, 24)
+#define GLFLXP_RXDID_FLAGS1_OVERRIDE(_i) (0x0045D600 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLAGS1_OVERRIDE_MAX_INDEX 63
+#define GLFLXP_RXDID_FLAGS1_OVERRIDE_FLEXIFLAGS1_OVERRIDE_S 0
+#define GLFLXP_RXDID_FLAGS1_OVERRIDE_FLEXIFLAGS1_OVERRIDE_M MAKEMASK(0xF, 0)
+#define GLFLXP_RXDID_FLX_WRD_0(_i) (0x0045C800 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_0_MAX_INDEX 63
+#define GLFLXP_RXDID_FLX_WRD_0_PROT_MDID_S 0
+#define GLFLXP_RXDID_FLX_WRD_0_PROT_MDID_M MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_0_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_0_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_0_RXDID_OPCODE_S 30
+#define GLFLXP_RXDID_FLX_WRD_0_RXDID_OPCODE_M MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_1(_i) (0x0045C900 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_1_MAX_INDEX 63
+#define GLFLXP_RXDID_FLX_WRD_1_PROT_MDID_S 0
+#define GLFLXP_RXDID_FLX_WRD_1_PROT_MDID_M MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_1_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_1_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_1_RXDID_OPCODE_S 30
+#define GLFLXP_RXDID_FLX_WRD_1_RXDID_OPCODE_M MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_2(_i) (0x0045CA00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_2_MAX_INDEX 63
+#define GLFLXP_RXDID_FLX_WRD_2_PROT_MDID_S 0
+#define GLFLXP_RXDID_FLX_WRD_2_PROT_MDID_M MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_2_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_2_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_2_RXDID_OPCODE_S 30
+#define GLFLXP_RXDID_FLX_WRD_2_RXDID_OPCODE_M MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_3(_i) (0x0045CB00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_3_MAX_INDEX 63
+#define GLFLXP_RXDID_FLX_WRD_3_PROT_MDID_S 0
+#define GLFLXP_RXDID_FLX_WRD_3_PROT_MDID_M MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_3_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_3_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_3_RXDID_OPCODE_S 30
+#define GLFLXP_RXDID_FLX_WRD_3_RXDID_OPCODE_M MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_4(_i) (0x0045CC00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_4_MAX_INDEX 63
+#define GLFLXP_RXDID_FLX_WRD_4_PROT_MDID_S 0
+#define GLFLXP_RXDID_FLX_WRD_4_PROT_MDID_M MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_4_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_4_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_4_RXDID_OPCODE_S 30
+#define GLFLXP_RXDID_FLX_WRD_4_RXDID_OPCODE_M MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_5(_i) (0x0045CD00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_5_MAX_INDEX 63
+#define GLFLXP_RXDID_FLX_WRD_5_PROT_MDID_S 0
+#define GLFLXP_RXDID_FLX_WRD_5_PROT_MDID_M MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_5_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_5_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_5_RXDID_OPCODE_S 30
+#define GLFLXP_RXDID_FLX_WRD_5_RXDID_OPCODE_M MAKEMASK(0x3, 30)
+#define GLFLXP_TX_SCHED_CORRECT(_i, _j) (0x00458000 + ((_i) * 4 + (_j) * 256)) /* _i=0...63, _j=0...31 */ /* Reset Source: CORER */
+#define GLFLXP_TX_SCHED_CORRECT_MAX_INDEX 63
+#define GLFLXP_TX_SCHED_CORRECT_PROTD_ID_2N_S 0
+#define GLFLXP_TX_SCHED_CORRECT_PROTD_ID_2N_M MAKEMASK(0xFF, 0)
+#define GLFLXP_TX_SCHED_CORRECT_RECIPE_2N_S 8
+#define GLFLXP_TX_SCHED_CORRECT_RECIPE_2N_M MAKEMASK(0x1F, 8)
+#define GLFLXP_TX_SCHED_CORRECT_PROTD_ID_2N_1_S 16
+#define GLFLXP_TX_SCHED_CORRECT_PROTD_ID_2N_1_M MAKEMASK(0xFF, 16)
+#define GLFLXP_TX_SCHED_CORRECT_RECIPE_2N_1_S 24
+#define GLFLXP_TX_SCHED_CORRECT_RECIPE_2N_1_M MAKEMASK(0x1F, 24)
+#define QRXFLXP_CNTXT(_QRX) (0x00480000 + ((_QRX) * 4)) /* _QRX=0...2047 */ /* Reset Source: CORER */
+#define QRXFLXP_CNTXT_MAX_INDEX 2047
+#define QRXFLXP_CNTXT_RXDID_IDX_S 0
+#define QRXFLXP_CNTXT_RXDID_IDX_M MAKEMASK(0x3F, 0)
+#define QRXFLXP_CNTXT_RXDID_PRIO_S 8
+#define QRXFLXP_CNTXT_RXDID_PRIO_M MAKEMASK(0x7, 8)
+#define QRXFLXP_CNTXT_TS_S 11
+#define QRXFLXP_CNTXT_TS_M BIT(11)
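+/* Illustrative sketch, not generated content: when programming a register,
+ * field values are shifted into place with _S and bounded by _M.  For
+ * example, selecting a flexible RX descriptor profile for a queue might
+ * look like:
+ *
+ *	u32 regval = (rxdid << QRXFLXP_CNTXT_RXDID_IDX_S) &
+ *		     QRXFLXP_CNTXT_RXDID_IDX_M;
+ *	regval |= (prio << QRXFLXP_CNTXT_RXDID_PRIO_S) &
+ *		  QRXFLXP_CNTXT_RXDID_PRIO_M;
+ *	wr32(hw, QRXFLXP_CNTXT(rx_queue_id), regval);
+ *
+ * wr32() is assumed from ice_osdep.h; rxdid, prio and rx_queue_id are
+ * hypothetical inputs.
+ */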
+#define GL_FWSTS 0x00083048 /* Reset Source: POR */
+#define GL_FWSTS_FWS0B_S 0
+#define GL_FWSTS_FWS0B_M MAKEMASK(0xFF, 0)
+#define GL_FWSTS_FWROWD_S 8
+#define GL_FWSTS_FWROWD_M BIT(8)
+#define GL_FWSTS_FWRI_S 9
+#define GL_FWSTS_FWRI_M BIT(9)
+#define GL_FWSTS_FWS1B_S 16
+#define GL_FWSTS_FWS1B_M MAKEMASK(0xFF, 16)
+#define GL_TCVMLR_DRAIN_CNTR_CTL 0x000A21E0 /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_CNTR_CTL_OP_S 0
+#define GL_TCVMLR_DRAIN_CNTR_CTL_OP_M BIT(0)
+#define GL_TCVMLR_DRAIN_CNTR_CTL_PORT_S 1
+#define GL_TCVMLR_DRAIN_CNTR_CTL_PORT_M MAKEMASK(0x7, 1)
+#define GL_TCVMLR_DRAIN_CNTR_CTL_VALUE_S 4
+#define GL_TCVMLR_DRAIN_CNTR_CTL_VALUE_M MAKEMASK(0x3FFF, 4)
+#define GL_TCVMLR_DRAIN_DONE_DEC 0x000A21A8 /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_DONE_DEC_TARGET_S 0
+#define GL_TCVMLR_DRAIN_DONE_DEC_TARGET_M BIT(0)
+#define GL_TCVMLR_DRAIN_DONE_DEC_INDEX_S 1
+#define GL_TCVMLR_DRAIN_DONE_DEC_INDEX_M MAKEMASK(0x1F, 1)
+#define GL_TCVMLR_DRAIN_DONE_DEC_VALUE_S 6
+#define GL_TCVMLR_DRAIN_DONE_DEC_VALUE_M MAKEMASK(0xFF, 6)
+#define GL_TCVMLR_DRAIN_DONE_TCLAN(_i) (0x000A20A8 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_DONE_TCLAN_MAX_INDEX 31
+#define GL_TCVMLR_DRAIN_DONE_TCLAN_COUNT_S 0
+#define GL_TCVMLR_DRAIN_DONE_TCLAN_COUNT_M MAKEMASK(0xFF, 0)
+#define GL_TCVMLR_DRAIN_DONE_TPB(_i) (0x000A2128 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_DONE_TPB_MAX_INDEX 31
+#define GL_TCVMLR_DRAIN_DONE_TPB_COUNT_S 0
+#define GL_TCVMLR_DRAIN_DONE_TPB_COUNT_M MAKEMASK(0xFF, 0)
+#define GL_TCVMLR_DRAIN_MARKER 0x000A2008 /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_MARKER_PORT_S 0
+#define GL_TCVMLR_DRAIN_MARKER_PORT_M MAKEMASK(0x7, 0)
+#define GL_TCVMLR_DRAIN_MARKER_TC_S 3
+#define GL_TCVMLR_DRAIN_MARKER_TC_M MAKEMASK(0x1F, 3)
+#define GL_TCVMLR_ERR_STAT 0x000A2024 /* Reset Source: CORER */
+#define GL_TCVMLR_ERR_STAT_ERROR_S 0
+#define GL_TCVMLR_ERR_STAT_ERROR_M BIT(0)
+#define GL_TCVMLR_ERR_STAT_FW_REQ_S 1
+#define GL_TCVMLR_ERR_STAT_FW_REQ_M BIT(1)
+#define GL_TCVMLR_ERR_STAT_STAT_S 2
+#define GL_TCVMLR_ERR_STAT_STAT_M MAKEMASK(0x7, 2)
+#define GL_TCVMLR_ERR_STAT_ENT_TYPE_S 5
+#define GL_TCVMLR_ERR_STAT_ENT_TYPE_M MAKEMASK(0x7, 5)
+#define GL_TCVMLR_ERR_STAT_ENT_ID_S 8
+#define GL_TCVMLR_ERR_STAT_ENT_ID_M MAKEMASK(0x3FFF, 8)
+#define GL_TCVMLR_QCFG 0x000A2010 /* Reset Source: CORER */
+#define GL_TCVMLR_QCFG_QID_S 0
+#define GL_TCVMLR_QCFG_QID_M MAKEMASK(0x3FFF, 0)
+#define GL_TCVMLR_QCFG_OP_S 14
+#define GL_TCVMLR_QCFG_OP_M BIT(14)
+#define GL_TCVMLR_QCFG_PORT_S 15
+#define GL_TCVMLR_QCFG_PORT_M MAKEMASK(0x7, 15)
+#define GL_TCVMLR_QCFG_TC_S 18
+#define GL_TCVMLR_QCFG_TC_M MAKEMASK(0x1F, 18)
+#define GL_TCVMLR_QCFG_RD 0x000A2014 /* Reset Source: CORER */
+#define GL_TCVMLR_QCFG_RD_QID_S 0
+#define GL_TCVMLR_QCFG_RD_QID_M MAKEMASK(0x3FFF, 0)
+#define GL_TCVMLR_QCFG_RD_PORT_S 14
+#define GL_TCVMLR_QCFG_RD_PORT_M MAKEMASK(0x7, 14)
+#define GL_TCVMLR_QCFG_RD_TC_S 17
+#define GL_TCVMLR_QCFG_RD_TC_M MAKEMASK(0x1F, 17)
+#define GL_TCVMLR_QCNTR 0x000A200C /* Reset Source: CORER */
+#define GL_TCVMLR_QCNTR_CNTR_S 0
+#define GL_TCVMLR_QCNTR_CNTR_M MAKEMASK(0x7FFF, 0)
+#define GL_TCVMLR_QCTL 0x000A2004 /* Reset Source: CORER */
+#define GL_TCVMLR_QCTL_QID_S 0
+#define GL_TCVMLR_QCTL_QID_M MAKEMASK(0x3FFF, 0)
+#define GL_TCVMLR_QCTL_OP_S 14
+#define GL_TCVMLR_QCTL_OP_M BIT(14)
+#define GL_TCVMLR_REQ_STAT 0x000A2018 /* Reset Source: CORER */
+#define GL_TCVMLR_REQ_STAT_ENT_TYPE_S 0
+#define GL_TCVMLR_REQ_STAT_ENT_TYPE_M MAKEMASK(0x7, 0)
+#define GL_TCVMLR_REQ_STAT_ENT_ID_S 3
+#define GL_TCVMLR_REQ_STAT_ENT_ID_M MAKEMASK(0x3FFF, 3)
+#define GL_TCVMLR_REQ_STAT_OP_S 17
+#define GL_TCVMLR_REQ_STAT_OP_M BIT(17)
+#define GL_TCVMLR_REQ_STAT_WRITE_STATUS_S 18
+#define GL_TCVMLR_REQ_STAT_WRITE_STATUS_M MAKEMASK(0x7, 18)
+#define GL_TCVMLR_STAT 0x000A201C /* Reset Source: CORER */
+#define GL_TCVMLR_STAT_ENT_TYPE_S 0
+#define GL_TCVMLR_STAT_ENT_TYPE_M MAKEMASK(0x7, 0)
+#define GL_TCVMLR_STAT_ENT_ID_S 3
+#define GL_TCVMLR_STAT_ENT_ID_M MAKEMASK(0x3FFF, 3)
+#define GL_TCVMLR_STAT_STATUS_S 17
+#define GL_TCVMLR_STAT_STATUS_M MAKEMASK(0x7, 17)
+#define GL_XLR_MARKER_TRIG_TCVMLR 0x000A2000 /* Reset Source: CORER */
+#define GL_XLR_MARKER_TRIG_TCVMLR_VM_VF_NUM_S 0
+#define GL_XLR_MARKER_TRIG_TCVMLR_VM_VF_NUM_M MAKEMASK(0x3FF, 0)
+#define GL_XLR_MARKER_TRIG_TCVMLR_VM_VF_TYPE_S 10
+#define GL_XLR_MARKER_TRIG_TCVMLR_VM_VF_TYPE_M MAKEMASK(0x3, 10)
+#define GL_XLR_MARKER_TRIG_TCVMLR_PF_NUM_S 12
+#define GL_XLR_MARKER_TRIG_TCVMLR_PF_NUM_M MAKEMASK(0x7, 12)
+#define GL_XLR_MARKER_TRIG_TCVMLR_PORT_NUM_S 16
+#define GL_XLR_MARKER_TRIG_TCVMLR_PORT_NUM_M MAKEMASK(0x7, 16)
+#define GL_XLR_MARKER_TRIG_VMLR 0x00093804 /* Reset Source: CORER */
+#define GL_XLR_MARKER_TRIG_VMLR_VM_VF_NUM_S 0
+#define GL_XLR_MARKER_TRIG_VMLR_VM_VF_NUM_M MAKEMASK(0x3FF, 0)
+#define GL_XLR_MARKER_TRIG_VMLR_VM_VF_TYPE_S 10
+#define GL_XLR_MARKER_TRIG_VMLR_VM_VF_TYPE_M MAKEMASK(0x3, 10)
+#define GL_XLR_MARKER_TRIG_VMLR_PF_NUM_S 12
+#define GL_XLR_MARKER_TRIG_VMLR_PF_NUM_M MAKEMASK(0x7, 12)
+#define GL_XLR_MARKER_TRIG_VMLR_PORT_NUM_S 16
+#define GL_XLR_MARKER_TRIG_VMLR_PORT_NUM_M MAKEMASK(0x7, 16)
+#define GLGEN_ANA_ABORT_PTYPE 0x0020C21C /* Reset Source: CORER */
+#define GLGEN_ANA_ABORT_PTYPE_ABORT_S 0
+#define GLGEN_ANA_ABORT_PTYPE_ABORT_M MAKEMASK(0x3FF, 0)
+#define GLGEN_ANA_ALU_ACCSS_OUT_OF_PKT 0x0020C208 /* Reset Source: CORER */
+#define GLGEN_ANA_ALU_ACCSS_OUT_OF_PKT_NPC_S 0
+#define GLGEN_ANA_ALU_ACCSS_OUT_OF_PKT_NPC_M MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_CFG_CTRL 0x0020C104 /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_CTRL_LINE_IDX_S 0
+#define GLGEN_ANA_CFG_CTRL_LINE_IDX_M MAKEMASK(0x3FFFF, 0)
+#define GLGEN_ANA_CFG_CTRL_TABLE_ID_S 18
+#define GLGEN_ANA_CFG_CTRL_TABLE_ID_M MAKEMASK(0xFF, 18)
+#define GLGEN_ANA_CFG_CTRL_RESRVED_S 26
+#define GLGEN_ANA_CFG_CTRL_RESRVED_M MAKEMASK(0x7, 26)
+#define GLGEN_ANA_CFG_CTRL_OPERATION_ID_S 29
+#define GLGEN_ANA_CFG_CTRL_OPERATION_ID_M MAKEMASK(0x7, 29)
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT 0x0020C158 /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_HIT_S 0
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_HIT_M BIT(0)
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_PG_MEM_IDX_S 1
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_PG_MEM_IDX_M MAKEMASK(0x7, 1)
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_ADDR_S 4
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_ADDR_M MAKEMASK(0x1FF, 4)
+#define GLGEN_ANA_CFG_LU_KEY(_i) (0x0020C14C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_LU_KEY_MAX_INDEX 2
+#define GLGEN_ANA_CFG_LU_KEY_LU_KEY_S 0
+#define GLGEN_ANA_CFG_LU_KEY_LU_KEY_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_CFG_RDDATA(_i) (0x0020C10C + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_RDDATA_MAX_INDEX 15
+#define GLGEN_ANA_CFG_RDDATA_RD_DATA_S 0
+#define GLGEN_ANA_CFG_RDDATA_RD_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT 0x0020C15C /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_HIT_S 0
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_HIT_M BIT(0)
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_RSV_S 1
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_RSV_M MAKEMASK(0x7, 1)
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_ADDR_S 4
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_ADDR_M MAKEMASK(0x1FF, 4)
+#define GLGEN_ANA_CFG_WRDATA 0x0020C108 /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_WRDATA_WR_DATA_S 0
+#define GLGEN_ANA_CFG_WRDATA_WR_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_DEF_PTYPE 0x0020C100 /* Reset Source: CORER */
+#define GLGEN_ANA_DEF_PTYPE_DEF_PTYPE_S 0
+#define GLGEN_ANA_DEF_PTYPE_DEF_PTYPE_M MAKEMASK(0x3FF, 0)
+#define GLGEN_ANA_DFD_FIFO_0 0x0020C398 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_FIFO_0_PC_NXT_S 0
+#define GLGEN_ANA_DFD_FIFO_0_PC_NXT_M BIT(0)
+#define GLGEN_ANA_DFD_FIFO_0_HO_NXT_S 1
+#define GLGEN_ANA_DFD_FIFO_0_HO_NXT_M BIT(1)
+#define GLGEN_ANA_DFD_FIFO_0_NID_NXT_S 2
+#define GLGEN_ANA_DFD_FIFO_0_NID_NXT_M BIT(2)
+#define GLGEN_ANA_DFD_FIFO_0_PG_KEY_SEL_S 8
+#define GLGEN_ANA_DFD_FIFO_0_PG_KEY_SEL_M BIT(8)
+#define GLGEN_ANA_DFD_FIFO_PTR 0x0020C43C /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_FIFO_PTR_HEAD_S 0
+#define GLGEN_ANA_DFD_FIFO_PTR_HEAD_M MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_DFD_FIFO_PTR_USED_SPACE_S 16
+#define GLGEN_ANA_DFD_FIFO_PTR_USED_SPACE_M MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_DFD_GEN_CTRL 0x0020C38C /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_GEN_CTRL_ENABLE_S 0
+#define GLGEN_ANA_DFD_GEN_CTRL_ENABLE_M BIT(0)
+#define GLGEN_ANA_DFD_GEN_CTRL_BLK_INPUT_S 1
+#define GLGEN_ANA_DFD_GEN_CTRL_BLK_INPUT_M BIT(1)
+#define GLGEN_ANA_DFD_LOG_0 0x0020C3A8 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_0_SOURCE_S 0
+#define GLGEN_ANA_DFD_LOG_0_SOURCE_M MAKEMASK(0x7, 0)
+#define GLGEN_ANA_DFD_LOG_0_LVL_OR_EDGE_S 3
+#define GLGEN_ANA_DFD_LOG_0_LVL_OR_EDGE_M BIT(3)
+#define GLGEN_ANA_DFD_LOG_0_RC_DISP_TRIG_S 4
+#define GLGEN_ANA_DFD_LOG_0_RC_DISP_TRIG_M MAKEMASK(0x7, 4)
+#define GLGEN_ANA_DFD_LOG_0_FLD_MODE_S 8
+#define GLGEN_ANA_DFD_LOG_0_FLD_MODE_M BIT(8)
+#define GLGEN_ANA_DFD_LOG_0_DLY_CYCL_S 16
+#define GLGEN_ANA_DFD_LOG_0_DLY_CYCL_M MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_DFD_LOG_1 0x0020C3AC /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_1_NUM_EVENTS_S 0
+#define GLGEN_ANA_DFD_LOG_1_NUM_EVENTS_M MAKEMASK(0x3FF, 0)
+#define GLGEN_ANA_DFD_LOG_1_NUM_TRIGS_S 16
+#define GLGEN_ANA_DFD_LOG_1_NUM_TRIGS_M MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN 0x0020C3F8 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_PH_ARB_S 0
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_PH_ARB_M BIT(0)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_PH_FB_S 1
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_PH_FB_M BIT(1)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_INPUT_S 2
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_INPUT_M BIT(2)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_STP_OUTPUT_S 3
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_STP_OUTPUT_M BIT(3)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_STP_WR_DFD_FIFO_S 4
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_STP_WR_DFD_FIFO_M BIT(4)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST 0x0020C3FC /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_PH_ARB_S 0
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_PH_ARB_M BIT(0)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_PH_FB_S 1
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_PH_FB_M BIT(1)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_INPUT_S 2
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_INPUT_M BIT(2)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_STP_OUTPUT_S 3
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_STP_OUTPUT_M BIT(3)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_STP_WR_DFD_FIFO_S 4
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_STP_WR_DFD_FIFO_M BIT(4)
+#define GLGEN_ANA_DFD_LOG_DATA(_i) (0x0020C3B0 + ((_i) * 4)) /* _i=0...8 */ /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_DATA_MAX_INDEX 8
+#define GLGEN_ANA_DFD_LOG_DATA_DATA_S 0
+#define GLGEN_ANA_DFD_LOG_DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_DFD_LOG_MASK(_i) (0x0020C3D4 + ((_i) * 4)) /* _i=0...8 */ /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_MASK_MAX_INDEX 8
+#define GLGEN_ANA_DFD_LOG_MASK_MASK_S 0
+#define GLGEN_ANA_DFD_LOG_MASK_MASK_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_DFD_LOG_RST_ALL 0x0020C400 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_RST_ALL_RST_S 0
+#define GLGEN_ANA_DFD_LOG_RST_ALL_RST_M BIT(0)
+#define GLGEN_ANA_DFD_LOG_RST_ALL_GEN_RST_S 1
+#define GLGEN_ANA_DFD_LOG_RST_ALL_GEN_RST_M BIT(1)
+#define GLGEN_ANA_DFD_LOG_TRG_0 0x0020C404 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_TRG_0_TAGID_S 0
+#define GLGEN_ANA_DFD_LOG_TRG_0_TAGID_M MAKEMASK(0x3F, 0)
+#define GLGEN_ANA_DFD_LOG_TRG_0_ACT_TRIGGED_S 16
+#define GLGEN_ANA_DFD_LOG_TRG_0_ACT_TRIGGED_M BIT(16)
+#define GLGEN_ANA_DFD_LOG_TRG_0_MAX_NUM_RND_S 24
+#define GLGEN_ANA_DFD_LOG_TRG_0_MAX_NUM_RND_M MAKEMASK(0x7F, 24)
+#define GLGEN_ANA_DFD_LOG_TRG_0_TRIGGED_S 31
+#define GLGEN_ANA_DFD_LOG_TRG_0_TRIGGED_M BIT(31)
+#define GLGEN_ANA_DFD_LOG_TRG_DATA(_i) (0x0020C408 + ((_i) * 4)) /* _i=0...8 */ /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_TRG_DATA_MAX_INDEX 8
+#define GLGEN_ANA_DFD_LOG_TRG_DATA_DATA_S 0
+#define GLGEN_ANA_DFD_LOG_TRG_DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_DFD_PACE_OUT 0x0020C4CC /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_PACE_OUT_PUSH_S 0
+#define GLGEN_ANA_DFD_PACE_OUT_PUSH_M BIT(0)
+#define GLGEN_ANA_DFD_PACING_0 0x0020C390 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_PACING_0_STOP_FEEDBK_S 0
+#define GLGEN_ANA_DFD_PACING_0_STOP_FEEDBK_M BIT(0)
+#define GLGEN_ANA_DFD_PACING_0_STOP_ARB_S 1
+#define GLGEN_ANA_DFD_PACING_0_STOP_ARB_M BIT(1)
+#define GLGEN_ANA_DFD_PACING_0_NUM_CHUNKS_S 2
+#define GLGEN_ANA_DFD_PACING_0_NUM_CHUNKS_M MAKEMASK(0x1F, 2)
+#define GLGEN_ANA_DFD_PACING_1 0x0020C394 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_PACING_1_PUSH_S 0
+#define GLGEN_ANA_DFD_PACING_1_PUSH_M BIT(0)
+#define GLGEN_ANA_DFD_REG_FILE_ACC_0 0x0020C39C /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_REG_FILE_ACC_0_SLOT_ID_S 0
+#define GLGEN_ANA_DFD_REG_FILE_ACC_0_SLOT_ID_M MAKEMASK(0xF, 0)
+#define GLGEN_ANA_DFD_REG_FILE_ACC_1 0x0020C3A0 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_REG_FILE_ACC_1_REGID_S 0
+#define GLGEN_ANA_DFD_REG_FILE_ACC_1_REGID_M MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES 0x0020C3A4 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES_REG_VAL_S 0
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES_REG_VAL_M MAKEMASK(0xFFFF, 0)
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES_EXCEPTIONS_S 16
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES_EXCEPTIONS_M MAKEMASK(0x7FFF, 16)
+#define GLGEN_ANA_DFD_TAGIDS 0x0020C438 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_IN_DFD_FIFO_S 0
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_IN_DFD_FIFO_M MAKEMASK(0x3F, 0)
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_NXT_ANA_S 8
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_NXT_ANA_M MAKEMASK(0x3F, 8)
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_OUT_S 16
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_OUT_M MAKEMASK(0x3F, 16)
+#define GLGEN_ANA_DFD_TAGIDS_SLOTID_IN_DFD_FIFO_S 24
+#define GLGEN_ANA_DFD_TAGIDS_SLOTID_IN_DFD_FIFO_M MAKEMASK(0xF, 24)
+#define GLGEN_ANA_DFD_TAGIDS_SLOTID_NXT_ANA_S 28
+#define GLGEN_ANA_DFD_TAGIDS_SLOTID_NXT_ANA_M MAKEMASK(0xF, 28)
+#define GLGEN_ANA_ERR_AUX 0x0020C228 /* Reset Source: CORER */
+#define GLGEN_ANA_ERR_AUX_IPLEN_GPREG_S 0
+#define GLGEN_ANA_ERR_AUX_IPLEN_GPREG_M MAKEMASK(0xF, 0)
+#define GLGEN_ANA_ERR_CTRL 0x0020C220 /* Reset Source: CORER */
+#define GLGEN_ANA_ERR_CTRL_ERR_MASK_EN_S 0
+#define GLGEN_ANA_ERR_CTRL_ERR_MASK_EN_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_FLAG_MAP(_i) (0x0020C000 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLGEN_ANA_FLAG_MAP_MAX_INDEX 63
+#define GLGEN_ANA_FLAG_MAP_FLAG_EN_S 0
+#define GLGEN_ANA_FLAG_MAP_FLAG_EN_M BIT(0)
+#define GLGEN_ANA_FLAG_MAP_EXT_FLAG_ID_S 1
+#define GLGEN_ANA_FLAG_MAP_EXT_FLAG_ID_M MAKEMASK(0x3F, 1)
+#define GLGEN_ANA_GEN_DFD_RO 0x0020C4C8 /* Reset Source: CORER */
+#define GLGEN_ANA_GEN_DFD_RO_GEN_VAL_S 0
+#define GLGEN_ANA_GEN_DFD_RO_GEN_VAL_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_GIGO_FIFO_PTR 0x0020C448 /* Reset Source: CORER */
+#define GLGEN_ANA_GIGO_FIFO_PTR_HEAD_S 0
+#define GLGEN_ANA_GIGO_FIFO_PTR_HEAD_M MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_GIGO_FIFO_PTR_USED_SPACE_S 16
+#define GLGEN_ANA_GIGO_FIFO_PTR_USED_SPACE_M MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR 0x0020C44C /* Reset Source: CORER */
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR_HEAD_S 0
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR_HEAD_M MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR_USED_SPACE_S 16
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR_USED_SPACE_M MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_INV_NODE_PTYPE 0x0020C210 /* Reset Source: CORER */
+#define GLGEN_ANA_INV_NODE_PTYPE_INV_NODE_PTYPE_S 0
+#define GLGEN_ANA_INV_NODE_PTYPE_INV_NODE_PTYPE_M MAKEMASK(0x7FF, 0)
+#define GLGEN_ANA_INV_PROT_ID 0x0020C214 /* Reset Source: CORER */
+#define GLGEN_ANA_INV_PROT_ID_INV_PROT_ID_S 0
+#define GLGEN_ANA_INV_PROT_ID_INV_PROT_ID_M MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_INV_PTYPE_MARKER 0x0020C218 /* Reset Source: CORER */
+#define GLGEN_ANA_INV_PTYPE_MARKER_INV_PTYPE_MARKER_S 0
+#define GLGEN_ANA_INV_PTYPE_MARKER_INV_PTYPE_MARKER_M MAKEMASK(0x7F, 0)
+#define GLGEN_ANA_LAST_PROT_ID(_i) (0x0020C1E4 + ((_i) * 4)) /* _i=0...5 */ /* Reset Source: CORER */
+#define GLGEN_ANA_LAST_PROT_ID_MAX_INDEX 5
+#define GLGEN_ANA_LAST_PROT_ID_EN_S 0
+#define GLGEN_ANA_LAST_PROT_ID_EN_M BIT(0)
+#define GLGEN_ANA_LAST_PROT_ID_PROT_ID_S 1
+#define GLGEN_ANA_LAST_PROT_ID_PROT_ID_M MAKEMASK(0xFF, 1)
+#define GLGEN_ANA_MAX_HDRLEN 0x0020C1E0 /* Reset Source: CORER */
+#define GLGEN_ANA_MAX_HDRLEN_NPC_S 0
+#define GLGEN_ANA_MAX_HDRLEN_NPC_M MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_MAX_HDRLEN_MAX_HDR_LEN_S 8
+#define GLGEN_ANA_MAX_HDRLEN_MAX_HDR_LEN_M MAKEMASK(0x1FF, 8)
+#define GLGEN_ANA_MAX_PROT 0x0020C224 /* Reset Source: CORER */
+#define GLGEN_ANA_MAX_PROT_MAX_PRTS_S 0
+#define GLGEN_ANA_MAX_PROT_MAX_PRTS_M MAKEMASK(0x7F, 0)
+#define GLGEN_ANA_MAX_ROUND 0x0020C20C /* Reset Source: CORER */
+#define GLGEN_ANA_MAX_ROUND_MAX_ROUND_ABS_S 0
+#define GLGEN_ANA_MAX_ROUND_MAX_ROUND_ABS_M MAKEMASK(0x7F, 0)
+#define GLGEN_ANA_MIN_PKT 0x0020C42C /* Reset Source: CORER */
+#define GLGEN_ANA_MIN_PKT_MIN_LEN_S 0
+#define GLGEN_ANA_MIN_PKT_MIN_LEN_M MAKEMASK(0x3FFF, 0)
+#define GLGEN_ANA_NMPG_KEYMASK(_i) (0x0020C1D0 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLGEN_ANA_NMPG_KEYMASK_MAX_INDEX 3
+#define GLGEN_ANA_NMPG_KEYMASK_HASH_KEY_S 0
+#define GLGEN_ANA_NMPG_KEYMASK_HASH_KEY_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_NMPG0_HASHKEY(_i) (0x0020C1B0 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLGEN_ANA_NMPG0_HASHKEY_MAX_INDEX 3
+#define GLGEN_ANA_NMPG0_HASHKEY_HASH_KEY_S 0
+#define GLGEN_ANA_NMPG0_HASHKEY_HASH_KEY_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_NO_HIT_PG_NM_PG 0x0020C204 /* Reset Source: CORER */
+#define GLGEN_ANA_NO_HIT_PG_NM_PG_NPC_S 0
+#define GLGEN_ANA_NO_HIT_PG_NM_PG_NPC_M MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_OUT_OF_PKT 0x0020C200 /* Reset Source: CORER */
+#define GLGEN_ANA_OUT_OF_PKT_NPC_S 0
+#define GLGEN_ANA_OUT_OF_PKT_NPC_M MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_P2P(_i) (0x0020C160 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLGEN_ANA_P2P_MAX_INDEX 15
+#define GLGEN_ANA_P2P_TARGET_PROF_S 0
+#define GLGEN_ANA_P2P_TARGET_PROF_M MAKEMASK(0xF, 0)
+#define GLGEN_ANA_PG_KEYMASK(_i) (0x0020C1C0 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLGEN_ANA_PG_KEYMASK_MAX_INDEX 3
+#define GLGEN_ANA_PG_KEYMASK_HASH_KEY_S 0
+#define GLGEN_ANA_PG_KEYMASK_HASH_KEY_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_PG0_HASHKEY(_i) (0x0020C1A0 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLGEN_ANA_PG0_HASHKEY_MAX_INDEX 3
+#define GLGEN_ANA_PG0_HASHKEY_HASH_KEY_S 0
+#define GLGEN_ANA_PG0_HASHKEY_HASH_KEY_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_PROFIL_CTRL 0x0020C1FC /* Reset Source: CORER */
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MDID_S 0
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MDID_M MAKEMASK(0x1F, 0)
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MDSTART_S 5
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MDSTART_M MAKEMASK(0xF, 5)
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MD_LEN_S 9
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MD_LEN_M MAKEMASK(0x1F, 9)
+#define GLGEN_ANA_PROFIL_CTRL_NUM_CTRL_DOMAIN_S 14
+#define GLGEN_ANA_PROFIL_CTRL_NUM_CTRL_DOMAIN_M MAKEMASK(0x3, 14)
+#define GLGEN_ANA_PROFIL_CTRL_DEF_PROF_ID_S 16
+#define GLGEN_ANA_PROFIL_CTRL_DEF_PROF_ID_M MAKEMASK(0xF, 16)
+#define GLGEN_ANA_PROFIL_CTRL_SEL_DEF_PROF_ID_S 20
+#define GLGEN_ANA_PROFIL_CTRL_SEL_DEF_PROF_ID_M BIT(20)
+#define GLGEN_ANA_PSTAT_FIFO_PTR 0x0020C444 /* Reset Source: CORER */
+#define GLGEN_ANA_PSTAT_FIFO_PTR_HEAD_S 0
+#define GLGEN_ANA_PSTAT_FIFO_PTR_HEAD_M MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_PSTAT_FIFO_PTR_USED_SPACE_S 16
+#define GLGEN_ANA_PSTAT_FIFO_PTR_USED_SPACE_M MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_STAT_FIFO_PTR 0x0020C440 /* Reset Source: CORER */
+#define GLGEN_ANA_STAT_FIFO_PTR_HEAD_S 0
+#define GLGEN_ANA_STAT_FIFO_PTR_HEAD_M MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_STAT_FIFO_PTR_USED_SPACE_S 16
+#define GLGEN_ANA_STAT_FIFO_PTR_USED_SPACE_M MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_TX_DFD_LOG_0 0x0020D3A8 /* Reset Source: CORER */
+#define GLGEN_ANA_TX_DFD_LOG_0_SOURCE_S 0
+#define GLGEN_ANA_TX_DFD_LOG_0_SOURCE_M MAKEMASK(0x7, 0)
+#define GLGEN_ANA_TX_DFD_LOG_0_LVL_OR_EDGE_S 3
+#define GLGEN_ANA_TX_DFD_LOG_0_LVL_OR_EDGE_M BIT(3)
+#define GLGEN_ANA_TX_DFD_LOG_0_RC_DISP_TRIG_S 4
+#define GLGEN_ANA_TX_DFD_LOG_0_RC_DISP_TRIG_M MAKEMASK(0x7, 4)
+#define GLGEN_ANA_TX_DFD_LOG_0_FLD_MODE_S 8
+#define GLGEN_ANA_TX_DFD_LOG_0_FLD_MODE_M BIT(8)
+#define GLGEN_ANA_TX_DFD_LOG_0_DLY_CYCL_S 16
+#define GLGEN_ANA_TX_DFD_LOG_0_DLY_CYCL_M MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_TX_DFD_PACE_OUT 0x0020D4CC /* Reset Source: CORER */
+#define GLGEN_ANA_TX_DFD_PACE_OUT_PUSH_S 0
+#define GLGEN_ANA_TX_DFD_PACE_OUT_PUSH_M BIT(0)
+#define GLGEN_ANA_TX_GEN_DFD_RO 0x0020D4C8 /* Reset Source: CORER */
+#define GLGEN_ANA_TX_GEN_DFD_RO_GEN_VAL_S 0
+#define GLGEN_ANA_TX_GEN_DFD_RO_GEN_VAL_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_TX_P2P(_i) (0x0020D160 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLGEN_ANA_TX_P2P_MAX_INDEX 15
+#define GLGEN_ANA_TX_P2P_TARGET_PROF_S 0
+#define GLGEN_ANA_TX_P2P_TARGET_PROF_M MAKEMASK(0xF, 0)
+#define GLGEN_ASSERT_HLP 0x000B81E4 /* Reset Source: POR */
+#define GLGEN_ASSERT_HLP_CORE_ON_RST_S 0
+#define GLGEN_ASSERT_HLP_CORE_ON_RST_M BIT(0)
+#define GLGEN_ASSERT_HLP_FULL_ON_RST_S 1
+#define GLGEN_ASSERT_HLP_FULL_ON_RST_M BIT(1)
+#define GLGEN_CLKSTAT 0x000B8184 /* Reset Source: POR */
+#define GLGEN_CLKSTAT_U_CLK_SPEED_S 0
+#define GLGEN_CLKSTAT_U_CLK_SPEED_M MAKEMASK(0x7, 0)
+#define GLGEN_CLKSTAT_L_CLK_SPEED_S 3
+#define GLGEN_CLKSTAT_L_CLK_SPEED_M MAKEMASK(0x7, 3)
+#define GLGEN_CLKSTAT_PSM_CLK_SPEED_S 6
+#define GLGEN_CLKSTAT_PSM_CLK_SPEED_M MAKEMASK(0x7, 6)
+#define GLGEN_CLKSTAT_RXCTL_CLK_SPEED_S 9
+#define GLGEN_CLKSTAT_RXCTL_CLK_SPEED_M MAKEMASK(0x7, 9)
+#define GLGEN_CLKSTAT_UANA_CLK_SPEED_S 12
+#define GLGEN_CLKSTAT_UANA_CLK_SPEED_M MAKEMASK(0x7, 12)
+#define GLGEN_CLKSTAT_PE_CLK_SPEED_S 18
+#define GLGEN_CLKSTAT_PE_CLK_SPEED_M MAKEMASK(0x7, 18)
+#define GLGEN_CLKSTAT_SRC 0x000B826C /* Reset Source: POR */
+#define GLGEN_CLKSTAT_SRC_U_CLK_SRC_S 0
+#define GLGEN_CLKSTAT_SRC_U_CLK_SRC_M MAKEMASK(0x3, 0)
+#define GLGEN_CLKSTAT_SRC_L_CLK_SRC_S 2
+#define GLGEN_CLKSTAT_SRC_L_CLK_SRC_M MAKEMASK(0x3, 2)
+#define GLGEN_CLKSTAT_SRC_PSM_CLK_SRC_S 4
+#define GLGEN_CLKSTAT_SRC_PSM_CLK_SRC_M MAKEMASK(0x3, 4)
+#define GLGEN_CLKSTAT_SRC_RXCTL_CLK_SRC_S 6
+#define GLGEN_CLKSTAT_SRC_RXCTL_CLK_SRC_M MAKEMASK(0x3, 6)
+#define GLGEN_CLKSTAT_SRC_UANA_CLK_SRC_S 8
+#define GLGEN_CLKSTAT_SRC_UANA_CLK_SRC_M MAKEMASK(0xF, 8)
+#define GLGEN_ECC_ERR_INT_TOG_MASK_H 0x00093A00 /* Reset Source: CORER */
+#define GLGEN_ECC_ERR_INT_TOG_MASK_H_CLIENT_NUM_S 0
+#define GLGEN_ECC_ERR_INT_TOG_MASK_H_CLIENT_NUM_M MAKEMASK(0x7F, 0)
+#define GLGEN_ECC_ERR_INT_TOG_MASK_L 0x000939FC /* Reset Source: CORER */
+#define GLGEN_ECC_ERR_INT_TOG_MASK_L_CLIENT_NUM_S 0
+#define GLGEN_ECC_ERR_INT_TOG_MASK_L_CLIENT_NUM_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ECC_ERR_RST_MASK_H 0x000939F8 /* Reset Source: CORER */
+#define GLGEN_ECC_ERR_RST_MASK_H_CLIENT_NUM_S 0
+#define GLGEN_ECC_ERR_RST_MASK_H_CLIENT_NUM_M MAKEMASK(0x7F, 0)
+#define GLGEN_ECC_ERR_RST_MASK_L 0x000939F4 /* Reset Source: CORER */
+#define GLGEN_ECC_ERR_RST_MASK_L_CLIENT_NUM_S 0
+#define GLGEN_ECC_ERR_RST_MASK_L_CLIENT_NUM_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_GPIO_CTL(_i) (0x000880C8 + ((_i) * 4)) /* _i=0...6 */ /* Reset Source: POR */
+#define GLGEN_GPIO_CTL_MAX_INDEX 6
+#define GLGEN_GPIO_CTL_IN_VALUE_S 0
+#define GLGEN_GPIO_CTL_IN_VALUE_M BIT(0)
+#define GLGEN_GPIO_CTL_IN_TRANSIT_S 1
+#define GLGEN_GPIO_CTL_IN_TRANSIT_M BIT(1)
+#define GLGEN_GPIO_CTL_OUT_VALUE_S 2
+#define GLGEN_GPIO_CTL_OUT_VALUE_M BIT(2)
+#define GLGEN_GPIO_CTL_NO_P_UP_S 3
+#define GLGEN_GPIO_CTL_NO_P_UP_M BIT(3)
+#define GLGEN_GPIO_CTL_PIN_DIR_S 4
+#define GLGEN_GPIO_CTL_PIN_DIR_M BIT(4)
+#define GLGEN_GPIO_CTL_TRI_CTL_S 5
+#define GLGEN_GPIO_CTL_TRI_CTL_M BIT(5)
+#define GLGEN_GPIO_CTL_PIN_FUNC_S 8
+#define GLGEN_GPIO_CTL_PIN_FUNC_M MAKEMASK(0xF, 8)
+#define GLGEN_GPIO_CTL_INT_MODE_S 12
+#define GLGEN_GPIO_CTL_INT_MODE_M MAKEMASK(0x3, 12)
+#define GLGEN_MARKER_COUNT 0x000939E8 /* Reset Source: CORER */
+#define GLGEN_MARKER_COUNT_MARKER_COUNT_S 0
+#define GLGEN_MARKER_COUNT_MARKER_COUNT_M MAKEMASK(0xFF, 0)
+#define GLGEN_MARKER_COUNT_MARKER_COUNT_EN_S 31
+#define GLGEN_MARKER_COUNT_MARKER_COUNT_EN_M BIT(31)
+#define GLGEN_RSTAT 0x000B8188 /* Reset Source: POR */
+#define GLGEN_RSTAT_DEVSTATE_S 0
+#define GLGEN_RSTAT_DEVSTATE_M MAKEMASK(0x3, 0)
+#define GLGEN_RSTAT_RESET_TYPE_S 2
+#define GLGEN_RSTAT_RESET_TYPE_M MAKEMASK(0x3, 2)
+#define GLGEN_RSTAT_CORERCNT_S 4
+#define GLGEN_RSTAT_CORERCNT_M MAKEMASK(0x3, 4)
+#define GLGEN_RSTAT_GLOBRCNT_S 6
+#define GLGEN_RSTAT_GLOBRCNT_M MAKEMASK(0x3, 6)
+#define GLGEN_RSTAT_EMPRCNT_S 8
+#define GLGEN_RSTAT_EMPRCNT_M MAKEMASK(0x3, 8)
+#define GLGEN_RSTAT_TIME_TO_RST_S 10
+#define GLGEN_RSTAT_TIME_TO_RST_M MAKEMASK(0x3F, 10)
+#define GLGEN_RSTAT_RTRIG_FLR_S 16
+#define GLGEN_RSTAT_RTRIG_FLR_M BIT(16)
+#define GLGEN_RSTAT_RTRIG_ECC_S 17
+#define GLGEN_RSTAT_RTRIG_ECC_M BIT(17)
+#define GLGEN_RSTAT_RTRIG_FW_AUX_S 18
+#define GLGEN_RSTAT_RTRIG_FW_AUX_M BIT(18)
+#define GLGEN_RTRIG 0x000B8190 /* Reset Source: CORER */
+#define GLGEN_RTRIG_CORER_S 0
+#define GLGEN_RTRIG_CORER_M BIT(0)
+#define GLGEN_RTRIG_GLOBR_S 1
+#define GLGEN_RTRIG_GLOBR_M BIT(1)
+#define GLGEN_RTRIG_EMPFWR_S 2
+#define GLGEN_RTRIG_EMPFWR_M BIT(2)
+#define GLGEN_STAT 0x000B612C /* Reset Source: POR */
+#define GLGEN_STAT_RSVD4FW_S 0
+#define GLGEN_STAT_RSVD4FW_M MAKEMASK(0xFF, 0)
+#define GLGEN_VFLRSTAT(_i) (0x00093A04 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLGEN_VFLRSTAT_MAX_INDEX 7
+#define GLGEN_VFLRSTAT_VFLRS_S 0
+#define GLGEN_VFLRSTAT_VFLRS_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_XLR_MSK2HLP_RDY 0x000939F0 /* Reset Source: CORER */
+#define GLGEN_XLR_MSK2HLP_RDY_GLGEN_XLR_MSK2HLP_RDY_S 0
+#define GLGEN_XLR_MSK2HLP_RDY_GLGEN_XLR_MSK2HLP_RDY_M BIT(0)
+#define GLGEN_XLR_TRNS_WAIT_COUNT 0x000939EC /* Reset Source: CORER */
+#define GLGEN_XLR_TRNS_WAIT_COUNT_W_BTWN_TRNS_COUNT_S 0
+#define GLGEN_XLR_TRNS_WAIT_COUNT_W_BTWN_TRNS_COUNT_M MAKEMASK(0x1F, 0)
+#define GLGEN_XLR_TRNS_WAIT_COUNT_W_PEND_TRNS_COUNT_S 8
+#define GLGEN_XLR_TRNS_WAIT_COUNT_W_PEND_TRNS_COUNT_M MAKEMASK(0xFF, 8)
+#define GLQDC_DFD_CAM_ACC 0x002D2E24 /* Reset Source: CORER */
+#define GLQDC_DFD_CAM_ACC_CLNUM_S 0
+#define GLQDC_DFD_CAM_ACC_CLNUM_M MAKEMASK(0x7F, 0)
+#define GLQDC_DFD_CAM_ACC_RES_0 0x002D2E28 /* Reset Source: CORER */
+#define GLQDC_DFD_CAM_ACC_RES_0_QID_S 0
+#define GLQDC_DFD_CAM_ACC_RES_0_QID_M MAKEMASK(0x3FFF, 0)
+#define GLQDC_DFD_CAM_ACC_RES_0_CAM_V_S 16
+#define GLQDC_DFD_CAM_ACC_RES_0_CAM_V_M BIT(16)
+#define GLQDC_DFD_CAM_ACC_RES_0_CAM_E_S 31
+#define GLQDC_DFD_CAM_ACC_RES_0_CAM_E_M BIT(31)
+#define GLQDC_DFD_CAM_ACC_RES_1 0x002D2E2C /* Reset Source: CORER */
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_HEAD_S 0
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_HEAD_M MAKEMASK(0x3F, 0)
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_TAIL_S 8
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_TAIL_M MAKEMASK(0x3F, 8)
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_EMPTY_S 16
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_EMPTY_M BIT(16)
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_MALC_S 24
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_MALC_M MAKEMASK(0x3F, 24)
+#define GLQDC_DFD_FIFO_CFG_0 0x002D2E34 /* Reset Source: CORER */
+#define GLQDC_DFD_FIFO_CFG_0_QID_S 0
+#define GLQDC_DFD_FIFO_CFG_0_QID_M MAKEMASK(0x3FFF, 0)
+#define GLQDC_DFD_FIFO_CFG_0_SMPL_PT_S 16
+#define GLQDC_DFD_FIFO_CFG_0_SMPL_PT_M MAKEMASK(0xFF, 16)
+#define GLQDC_DFD_FIFO_CFG_0_ALL_QID_S 31
+#define GLQDC_DFD_FIFO_CFG_0_ALL_QID_M BIT(31)
+#define GLQDC_DFD_FIFO_CFG_1 0x002D2E38 /* Reset Source: CORER */
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_0_S 0
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_0_M MAKEMASK(0x7, 0)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_1_S 4
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_1_M MAKEMASK(0x7, 4)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_2_S 8
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_2_M MAKEMASK(0x7, 8)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_3_S 12
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_3_M MAKEMASK(0x7, 12)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_4_S 16
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_4_M MAKEMASK(0x7, 16)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_5_S 20
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_5_M MAKEMASK(0x7, 20)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_6_S 24
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_6_M MAKEMASK(0x7, 24)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_7_S 28
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_7_M MAKEMASK(0x7, 28)
+#define GLQDC_DFD_FIFO_SZ_CFG 0x002D30AC /* Reset Source: CORER */
+#define GLQDC_DFD_FIFO_SZ_CFG_COMP_S 0
+#define GLQDC_DFD_FIFO_SZ_CFG_COMP_M MAKEMASK(0xFF, 0)
+#define GLQDC_DFD_FIFO_SZ_CFG_MISS_S 8
+#define GLQDC_DFD_FIFO_SZ_CFG_MISS_M MAKEMASK(0xFF, 8)
+#define GLQDC_DFD_FIFO_SZ_CFG_MISS_COMP_S 16
+#define GLQDC_DFD_FIFO_SZ_CFG_MISS_COMP_M MAKEMASK(0xFF, 16)
+#define GLQDC_DFD_GEN_CHKN 0x002D30A0 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_CHKN_GEN_BITS_S 0
+#define GLQDC_DFD_GEN_CHKN_GEN_BITS_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_CHKN_2 0x002D30A4 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_CHKN_2_GEN_BITS_S 0
+#define GLQDC_DFD_GEN_CHKN_2_GEN_BITS_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_CTRL 0x002D2E20 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_CTRL_ENABLE_S 0
+#define GLQDC_DFD_GEN_CTRL_ENABLE_M BIT(0)
+#define GLQDC_DFD_GEN_CTRL_BLK_INJECT_M1_S 1
+#define GLQDC_DFD_GEN_CTRL_BLK_INJECT_M1_M BIT(1)
+#define GLQDC_DFD_GEN_CTRL_NUM_PAUSE_M1_S 16
+#define GLQDC_DFD_GEN_CTRL_NUM_PAUSE_M1_M MAKEMASK(0x3FF, 16)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0 0x002D2EE8 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_MISS_COMP_ACK_S 0
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_MISS_COMP_ACK_M MAKEMASK(0x7F, 0)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_MISS_COMP_S 7
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_MISS_COMP_M MAKEMASK(0x7F, 7)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_COMP_FSM_DATA_S 14
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_COMP_FSM_DATA_M MAKEMASK(0x3, 14)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_COMP_FSM_S 16
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_COMP_FSM_M MAKEMASK(0x7F, 16)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_PCIE_OUT_S 23
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_PCIE_OUT_M MAKEMASK(0x7, 23)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1 0x002D2EEC /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1_MISS_FSM_S 0
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1_MISS_FSM_M MAKEMASK(0x7F, 0)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1_DFD_S 7
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1_DFD_M MAKEMASK(0xFF, 7)
+#define GLQDC_DFD_GEN_LOG_FSM 0x002D2EF0 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOG_FSM_FTSTATE_S 0
+#define GLQDC_DFD_GEN_LOG_FSM_FTSTATE_M MAKEMASK(0x3, 0)
+#define GLQDC_DFD_GEN_LOG_FSM_MISS_FIFO_FSM_ST_S 2
+#define GLQDC_DFD_GEN_LOG_FSM_MISS_FIFO_FSM_ST_M MAKEMASK(0x7, 2)
+#define GLQDC_DFD_GEN_LOG_FSM_IN_MISS_FIFO_S 5
+#define GLQDC_DFD_GEN_LOG_FSM_IN_MISS_FIFO_M MAKEMASK(0x3, 5)
+#define GLQDC_DFD_GEN_LOG_FSM_CPSTATE_S 7
+#define GLQDC_DFD_GEN_LOG_FSM_CPSTATE_M MAKEMASK(0x7, 7)
+#define GLQDC_DFD_GEN_LOGGNG_0 0x002D2EE0 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_0_RINGH_WR_RD_S 0
+#define GLQDC_DFD_GEN_LOGGNG_0_RINGH_WR_RD_M BIT(0)
+#define GLQDC_DFD_GEN_LOGGNG_0_QD_WR_RD_S 1
+#define GLQDC_DFD_GEN_LOGGNG_0_QD_WR_RD_M BIT(1)
+#define GLQDC_DFD_GEN_LOGGNG_0_PCIE_RD_REQ_VLD_S 2
+#define GLQDC_DFD_GEN_LOGGNG_0_PCIE_RD_REQ_VLD_M BIT(2)
+#define GLQDC_DFD_GEN_LOGGNG_0_NXT_SQ_VLD_S 3
+#define GLQDC_DFD_GEN_LOGGNG_0_NXT_SQ_VLD_M BIT(3)
+#define GLQDC_DFD_GEN_LOGGNG_0_SQ_VLD_TO_DONE_S 4
+#define GLQDC_DFD_GEN_LOGGNG_0_SQ_VLD_TO_DONE_M BIT(4)
+#define GLQDC_DFD_GEN_LOGGNG_0_PCIE_COMP_VLD_S 5
+#define GLQDC_DFD_GEN_LOGGNG_0_PCIE_COMP_VLD_M BIT(5)
+#define GLQDC_DFD_GEN_LOGGNG_0_FETCH_NXT_SQ_VLD_S 6
+#define GLQDC_DFD_GEN_LOGGNG_0_FETCH_NXT_SQ_VLD_M BIT(6)
+#define GLQDC_DFD_GEN_LOGGNG_0_MALC_RPT_S 8
+#define GLQDC_DFD_GEN_LOGGNG_0_MALC_RPT_M MAKEMASK(0xF, 8)
+#define GLQDC_DFD_GEN_LOGGNG_0_DFD_FIFO_ADD_S 16
+#define GLQDC_DFD_GEN_LOGGNG_0_DFD_FIFO_ADD_M MAKEMASK(0x7F, 16)
+#define GLQDC_DFD_GEN_LOGGNG_1 0x002D2EE4 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_RFIL_WM_S 0
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_RFIL_WM_M MAKEMASK(0x3, 0)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_RFIL_S 2
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_RFIL_M MAKEMASK(0x3, 2)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_MRED_WM_S 4
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_MRED_WM_M MAKEMASK(0x3, 4)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_MRED_S 6
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_MRED_M MAKEMASK(0x3, 6)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_M3_WM_S 8
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_M3_WM_M MAKEMASK(0x3, 8)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_M3_S 10
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_M3_M MAKEMASK(0x3, 10)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_LSO_MT_M3_WM_S 12
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_LSO_MT_M3_WM_M MAKEMASK(0x3, 12)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_LSO_MT_M3_S 14
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_LSO_MT_M3_M MAKEMASK(0x3, 14)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_ACK_MISS_FIFO_M3_WM_S 16
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_ACK_MISS_FIFO_M3_WM_M MAKEMASK(0x3, 16)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_ACK_MISS_FIFO_M3_S 18
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_ACK_MISS_FIFO_M3_M MAKEMASK(0x3, 18)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_EVICT_WM_S 20
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_EVICT_WM_M MAKEMASK(0x3, 20)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_EVICT_S 22
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_EVICT_M MAKEMASK(0x3, 22)
+#define GLQDC_DFD_GEN_LOGGNG_2 0x002D2FFC /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_2_WR_WHEN_FULL_S 0
+#define GLQDC_DFD_GEN_LOGGNG_2_WR_WHEN_FULL_M MAKEMASK(0x3F, 0)
+#define GLQDC_DFD_GEN_LOGGNG_2_WR_WHEN_FULL_LT_S 6
+#define GLQDC_DFD_GEN_LOGGNG_2_WR_WHEN_FULL_LT_M MAKEMASK(0x3F, 6)
+#define GLQDC_DFD_GEN_LOGGNG_2_TEST_S 24
+#define GLQDC_DFD_GEN_LOGGNG_2_TEST_M MAKEMASK(0xFF, 24)
+#define GLQDC_DFD_GEN_LOGGNG_3 0x002D3008 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_3_GEN_S 0
+#define GLQDC_DFD_GEN_LOGGNG_3_GEN_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_LOGGNG_4 0x002D300C /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_4_GEN_S 0
+#define GLQDC_DFD_GEN_LOGGNG_4_GEN_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_LOGGNG_5 0x002D3010 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_5_GEN_S 0
+#define GLQDC_DFD_GEN_LOGGNG_5_GEN_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_LOGGNG_6 0x002D3014 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_6_GEN_S 0
+#define GLQDC_DFD_GEN_LOGGNG_6_GEN_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_STAT_REGS(_i) (0x002D3018 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_STAT_REGS_MAX_INDEX 15
+#define GLQDC_DFD_GEN_STAT_REGS_COUNT_S 0
+#define GLQDC_DFD_GEN_STAT_REGS_COUNT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_LOG_0 0x002D2E3C /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_0_SOURCE_S 0
+#define GLQDC_DFD_LOG_0_SOURCE_M MAKEMASK(0x3, 0)
+#define GLQDC_DFD_LOG_0_LVL_OR_EDGE_S 4
+#define GLQDC_DFD_LOG_0_LVL_OR_EDGE_M BIT(4)
+#define GLQDC_DFD_LOG_0_DLY_CYCL_S 16
+#define GLQDC_DFD_LOG_0_DLY_CYCL_M MAKEMASK(0x3FF, 16)
+#define GLQDC_DFD_LOG_1 0x002D2E40 /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_1_NUM_EVENTS_S 0
+#define GLQDC_DFD_LOG_1_NUM_EVENTS_M MAKEMASK(0x3FF, 0)
+#define GLQDC_DFD_LOG_1_NUM_TRIGS_S 16
+#define GLQDC_DFD_LOG_1_NUM_TRIGS_M MAKEMASK(0x3FF, 16)
+#define GLQDC_DFD_LOG_1_TRIG_B2B_S 31
+#define GLQDC_DFD_LOG_1_TRIG_B2B_M BIT(31)
+#define GLQDC_DFD_LOG_ACTN_EN 0x002D2EA4 /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_ACTN_EN_BLK_INJECT_M1_S 0
+#define GLQDC_DFD_LOG_ACTN_EN_BLK_INJECT_M1_M BIT(0)
+#define GLQDC_DFD_LOG_ACTN_EN_STP_WR_DFD_FIFO_S 1
+#define GLQDC_DFD_LOG_ACTN_EN_STP_WR_DFD_FIFO_M BIT(1)
+#define GLQDC_DFD_LOG_ACTN_EN_STP_UPDT_MALC_RPT_CSR_S 2
+#define GLQDC_DFD_LOG_ACTN_EN_STP_UPDT_MALC_RPT_CSR_M BIT(2)
+#define GLQDC_DFD_LOG_ACTN_RST 0x002D2EA8 /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_ACTN_RST_BLK_INJECT_M1_S 0
+#define GLQDC_DFD_LOG_ACTN_RST_BLK_INJECT_M1_M BIT(0)
+#define GLQDC_DFD_LOG_ACTN_RST_STP_WR_DFD_FIFO_S 1
+#define GLQDC_DFD_LOG_ACTN_RST_STP_WR_DFD_FIFO_M BIT(1)
+#define GLQDC_DFD_LOG_ACTN_RST_STP_UPDT_MALC_RPT_CSR_S 2
+#define GLQDC_DFD_LOG_ACTN_RST_STP_UPDT_MALC_RPT_CSR_M BIT(2)
+#define GLQDC_DFD_LOG_DATA(_i) (0x002D2E44 + ((_i) * 4)) /* _i=0...11 */ /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_DATA_MAX_INDEX 11
+#define GLQDC_DFD_LOG_DATA_DATA_S 0
+#define GLQDC_DFD_LOG_DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_LOG_MASK(_i) (0x002D2E74 + ((_i) * 4)) /* _i=0...11 */ /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_MASK_MAX_INDEX 11
+#define GLQDC_DFD_LOG_MASK_MASK_S 0
+#define GLQDC_DFD_LOG_MASK_MASK_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_LOG_TRG_0 0x002D2EAC /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_TRG_0_QID_S 0
+#define GLQDC_DFD_LOG_TRG_0_QID_M MAKEMASK(0x3FFF, 0)
+#define GLQDC_DFD_LOG_TRG_0_ACT_TRIGGED_S 16
+#define GLQDC_DFD_LOG_TRG_0_ACT_TRIGGED_M BIT(16)
+#define GLQDC_DFD_LOG_TRG_0_TRIGGED_S 31
+#define GLQDC_DFD_LOG_TRG_0_TRIGGED_M BIT(31)
+#define GLQDC_DFD_LOG_TRG_DATA(_i) (0x002D2EB0 + ((_i) * 4)) /* _i=0...11 */ /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_TRG_DATA_MAX_INDEX 11
+#define GLQDC_DFD_LOG_TRG_DATA_DATA_S 0
+#define GLQDC_DFD_LOG_TRG_DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_PACE 0x002D3000 /* Reset Source: CORER */
+#define GLQDC_DFD_PACE_PUSH_S 0
+#define GLQDC_DFD_PACE_PUSH_M BIT(0)
+#define GLQDC_DFD_RST 0x002D2E30 /* Reset Source: CORER */
+#define GLQDC_DFD_RST_RST_S 0
+#define GLQDC_DFD_RST_RST_M BIT(0)
+#define GLQDC_DFD_RST_CLR_MALC_RPT_S 1
+#define GLQDC_DFD_RST_CLR_MALC_RPT_M BIT(1)
+#define GLQDC_DFD_RST_LOG_RST_S 2
+#define GLQDC_DFD_RST_LOG_RST_M BIT(2)
+#define GLQDC_DFD_SAMPLE_RO_CSR 0x002D3004 /* Reset Source: CORER */
+#define GLQDC_DFD_SAMPLE_RO_CSR_SMPL_S 0
+#define GLQDC_DFD_SAMPLE_RO_CSR_SMPL_M BIT(0)
+#define GLQDC_DFD_STATS_CFG_0 0x002D3058 /* Reset Source: CORER */
+#define GLQDC_DFD_STATS_CFG_0_CLR_S 0
+#define GLQDC_DFD_STATS_CFG_0_CLR_M BIT(0)
+#define GLQDC_DFD_STATS_CFG_1 0x002D305C /* Reset Source: CORER */
+#define GLQDC_DFD_STATS_CFG_1_QID_S 0
+#define GLQDC_DFD_STATS_CFG_1_QID_M MAKEMASK(0x3FFF, 0)
+#define GLQDC_DFD_STATS_CFG_1_GEN_CFG_S 16
+#define GLQDC_DFD_STATS_CFG_1_GEN_CFG_M MAKEMASK(0x1F, 16)
+#define GLQDC_DFD_STATS_CFG_EVNT(_i) (0x002D3060 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLQDC_DFD_STATS_CFG_EVNT_MAX_INDEX 15
+#define GLQDC_DFD_STATS_CFG_EVNT_EVNT_ID_S 0
+#define GLQDC_DFD_STATS_CFG_EVNT_EVNT_ID_M MAKEMASK(0x1F, 0)
+#define GLQDC_DFD_STATS_CFG_EVNT_WRAP_EN_S 31
+#define GLQDC_DFD_STATS_CFG_EVNT_WRAP_EN_M BIT(31)
+#define GLQDC_DFD_TEST_MNG 0x002D30A8 /* Reset Source: CORER */
+#define GLQDC_DFD_TEST_MNG_TST_S 2
+#define GLQDC_DFD_TEST_MNG_TST_M BIT(2)
+#define GLVFGEN_TIMER 0x000B8214 /* Reset Source: POR */
+#define GLVFGEN_TIMER_GTIME_S 0
+#define GLVFGEN_TIMER_GTIME_M MAKEMASK(0xFFFFFFFF, 0)
+#define PFGEN_CTRL 0x00091000 /* Reset Source: CORER */
+#define PFGEN_CTRL_PFSWR_S 0
+#define PFGEN_CTRL_PFSWR_M BIT(0)
+#define PFGEN_DRUN 0x00091180 /* Reset Source: CORER */
+#define PFGEN_DRUN_DRVUNLD_S 0
+#define PFGEN_DRUN_DRVUNLD_M BIT(0)
+#define PFGEN_PFRSTAT 0x00091080 /* Reset Source: CORER */
+#define PFGEN_PFRSTAT_PFRD_S 0
+#define PFGEN_PFRSTAT_PFRD_M BIT(0)
+#define PFGEN_PORTNUM 0x001D2400 /* Reset Source: CORER */
+#define PFGEN_PORTNUM_PORT_NUM_S 0
+#define PFGEN_PORTNUM_PORT_NUM_M MAKEMASK(0x7, 0)
+#define PFGEN_STATE 0x00088000 /* Reset Source: CORER */
+#define PFGEN_STATE_PFPEEN_S 0
+#define PFGEN_STATE_PFPEEN_M BIT(0)
+#define PFGEN_STATE_RSVD_S 1
+#define PFGEN_STATE_RSVD_M BIT(1)
+#define PFGEN_STATE_PFLINKEN_S 2
+#define PFGEN_STATE_PFLINKEN_M BIT(2)
+#define PFGEN_STATE_PFSCEN_S 3
+#define PFGEN_STATE_PFSCEN_M BIT(3)
+#define PRT_TCVMLR_DRAIN_CNTR 0x000A21C0 /* Reset Source: CORER */
+#define PRT_TCVMLR_DRAIN_CNTR_CNTR_S 0
+#define PRT_TCVMLR_DRAIN_CNTR_CNTR_M MAKEMASK(0x3FFF, 0)
+#define PRTGEN_CNF 0x000B8120 /* Reset Source: POR */
+#define PRTGEN_CNF_PORT_DIS_S 0
+#define PRTGEN_CNF_PORT_DIS_M BIT(0)
+#define PRTGEN_CNF_ALLOW_PORT_DIS_S 1
+#define PRTGEN_CNF_ALLOW_PORT_DIS_M BIT(1)
+#define PRTGEN_CNF_EMP_PORT_DIS_S 2
+#define PRTGEN_CNF_EMP_PORT_DIS_M BIT(2)
+#define PRTGEN_CNF2 0x000B8160 /* Reset Source: POR */
+#define PRTGEN_CNF2_ACTIVATE_PORT_LINK_S 0
+#define PRTGEN_CNF2_ACTIVATE_PORT_LINK_M BIT(0)
+#define PRTGEN_CNF3 0x000B8280 /* Reset Source: POR */
+#define PRTGEN_CNF3_PORT_STAGERING_EN_S 0
+#define PRTGEN_CNF3_PORT_STAGERING_EN_M BIT(0)
+#define PRTGEN_STATUS 0x000B8100 /* Reset Source: POR */
+#define PRTGEN_STATUS_PORT_VALID_S 0
+#define PRTGEN_STATUS_PORT_VALID_M BIT(0)
+#define PRTGEN_STATUS_PORT_ACTIVE_S 1
+#define PRTGEN_STATUS_PORT_ACTIVE_M BIT(1)
+#define VFGEN_RSTAT(_VF) (0x00074000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: VFR */
+#define VFGEN_RSTAT_MAX_INDEX 255
+#define VFGEN_RSTAT_VFR_STATE_S 0
+#define VFGEN_RSTAT_VFR_STATE_M MAKEMASK(0x3, 0)
+#define VPGEN_VFRSTAT(_VF) (0x00090800 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPGEN_VFRSTAT_MAX_INDEX 255
+#define VPGEN_VFRSTAT_VFRD_S 0
+#define VPGEN_VFRSTAT_VFRD_M BIT(0)
+#define VPGEN_VFRTRIG(_VF) (0x00090000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPGEN_VFRTRIG_MAX_INDEX 255
+#define VPGEN_VFRTRIG_VFSWR_S 0
+#define VPGEN_VFRTRIG_VFSWR_M BIT(0)
+#define VSIGEN_RSTAT(_VSI) (0x00092800 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: CORER */
+#define VSIGEN_RSTAT_MAX_INDEX 767
+#define VSIGEN_RSTAT_VMRD_S 0
+#define VSIGEN_RSTAT_VMRD_M BIT(0)
+#define VSIGEN_RTRIG(_VSI) (0x00091800 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: CORER */
+#define VSIGEN_RTRIG_MAX_INDEX 767
+#define VSIGEN_RTRIG_VMSWR_S 0
+#define VSIGEN_RTRIG_VMSWR_M BIT(0)
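+/*
+ * Usage sketch (illustrative only): reading the last reset type via the
+ * shift/mask pairs above; rd32() is an assumed register-read helper, not
+ * defined in this file:
+ *
+ *	u32 rstat = rd32(hw, GLGEN_RSTAT);
+ *	u32 rst_type = (rstat & GLGEN_RSTAT_RESET_TYPE_M) >>
+ *		       GLGEN_RSTAT_RESET_TYPE_S;
+ */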
+#define GLHMC_APBVTINUSEBASE(_i) (0x00524A00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_APBVTINUSEBASE_MAX_INDEX 7
+#define GLHMC_APBVTINUSEBASE_FPMAPBINUSEBASE_S 0
+#define GLHMC_APBVTINUSEBASE_FPMAPBINUSEBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_CEQPART(_i) (0x005031C0 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_CEQPART_MAX_INDEX 7
+#define GLHMC_CEQPART_PMCEQBASE_S 0
+#define GLHMC_CEQPART_PMCEQBASE_M MAKEMASK(0x3FF, 0)
+#define GLHMC_CEQPART_PMCEQSIZE_S 16
+#define GLHMC_CEQPART_PMCEQSIZE_M MAKEMASK(0x3FF, 16)
+#define GLHMC_DBCQMAX 0x005220F0 /* Reset Source: CORER */
+#define GLHMC_DBCQMAX_GLHMC_DBCQMAX_S 0
+#define GLHMC_DBCQMAX_GLHMC_DBCQMAX_M MAKEMASK(0xFFFFF, 0)
+#define GLHMC_DBCQPART(_i) (0x00503180 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_DBCQPART_MAX_INDEX 7
+#define GLHMC_DBCQPART_PMDBCQBASE_S 0
+#define GLHMC_DBCQPART_PMDBCQBASE_M MAKEMASK(0x3FFF, 0)
+#define GLHMC_DBCQPART_PMDBCQSIZE_S 16
+#define GLHMC_DBCQPART_PMDBCQSIZE_M MAKEMASK(0x7FFF, 16)
+#define GLHMC_DBQPMAX 0x005220EC /* Reset Source: CORER */
+#define GLHMC_DBQPMAX_GLHMC_DBQPMAX_S 0
+#define GLHMC_DBQPMAX_GLHMC_DBQPMAX_M MAKEMASK(0x7FFFF, 0)
+#define GLHMC_DBQPPART(_i) (0x005044C0 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_DBQPPART_MAX_INDEX 7
+#define GLHMC_DBQPPART_PMDBQPBASE_S 0
+#define GLHMC_DBQPPART_PMDBQPBASE_M MAKEMASK(0x3FFF, 0)
+#define GLHMC_DBQPPART_PMDBQPSIZE_S 16
+#define GLHMC_DBQPPART_PMDBQPSIZE_M MAKEMASK(0x7FFF, 16)
+#define GLHMC_FSIAVBASE(_i) (0x00525600 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_FSIAVBASE_MAX_INDEX 7
+#define GLHMC_FSIAVBASE_FPMFSIAVBASE_S 0
+#define GLHMC_FSIAVBASE_FPMFSIAVBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_FSIAVCNT(_i) (0x00525700 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_FSIAVCNT_MAX_INDEX 7
+#define GLHMC_FSIAVCNT_FPMFSIAVCNT_S 0
+#define GLHMC_FSIAVCNT_FPMFSIAVCNT_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_FSIAVMAX 0x00522068 /* Reset Source: CORER */
+#define GLHMC_FSIAVMAX_PMFSIAVMAX_S 0
+#define GLHMC_FSIAVMAX_PMFSIAVMAX_M MAKEMASK(0x1FFFF, 0)
+#define GLHMC_FSIAVOBJSZ 0x00522064 /* Reset Source: CORER */
+#define GLHMC_FSIAVOBJSZ_PMFSIAVOBJSZ_S 0
+#define GLHMC_FSIAVOBJSZ_PMFSIAVOBJSZ_M MAKEMASK(0xF, 0)
+#define GLHMC_FSIMCBASE(_i) (0x00526000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_FSIMCBASE_MAX_INDEX 7
+#define GLHMC_FSIMCBASE_FPMFSIMCBASE_S 0
+#define GLHMC_FSIMCBASE_FPMFSIMCBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_FSIMCCNT(_i) (0x00526100 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_FSIMCCNT_MAX_INDEX 7
+#define GLHMC_FSIMCCNT_FPMFSIMCSZ_S 0
+#define GLHMC_FSIMCCNT_FPMFSIMCSZ_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_FSIMCMAX 0x00522060 /* Reset Source: CORER */
+#define GLHMC_FSIMCMAX_PMFSIMCMAX_S 0
+#define GLHMC_FSIMCMAX_PMFSIMCMAX_M MAKEMASK(0x3FFF, 0)
+#define GLHMC_FSIMCOBJSZ 0x0052205C /* Reset Source: CORER */
+#define GLHMC_FSIMCOBJSZ_PMFSIMCOBJSZ_S 0
+#define GLHMC_FSIMCOBJSZ_PMFSIMCOBJSZ_M MAKEMASK(0xF, 0)
+#define GLHMC_FWPDINV 0x0052207C /* Reset Source: CORER */
+#define GLHMC_FWPDINV_PMSDIDX_S 0
+#define GLHMC_FWPDINV_PMSDIDX_M MAKEMASK(0xFFF, 0)
+#define GLHMC_FWPDINV_PMSDPARTSEL_S 15
+#define GLHMC_FWPDINV_PMSDPARTSEL_M BIT(15)
+#define GLHMC_FWPDINV_PMPDIDX_S 16
+#define GLHMC_FWPDINV_PMPDIDX_M MAKEMASK(0x1FF, 16)
+#define GLHMC_FWPDINV_FPMAT 0x0010207C /* Reset Source: CORER */
+#define GLHMC_FWPDINV_FPMAT_PMSDIDX_S 0
+#define GLHMC_FWPDINV_FPMAT_PMSDIDX_M MAKEMASK(0xFFF, 0)
+#define GLHMC_FWPDINV_FPMAT_PMSDPARTSEL_S 15
+#define GLHMC_FWPDINV_FPMAT_PMSDPARTSEL_M BIT(15)
+#define GLHMC_FWPDINV_FPMAT_PMPDIDX_S 16
+#define GLHMC_FWPDINV_FPMAT_PMPDIDX_M MAKEMASK(0x1FF, 16)
+#define GLHMC_FWSDDATAHIGH 0x00522078 /* Reset Source: CORER */
+#define GLHMC_FWSDDATAHIGH_PMSDDATAHIGH_S 0
+#define GLHMC_FWSDDATAHIGH_PMSDDATAHIGH_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_FWSDDATAHIGH_FPMAT 0x00102078 /* Reset Source: CORER */
+#define GLHMC_FWSDDATAHIGH_FPMAT_PMSDDATAHIGH_S 0
+#define GLHMC_FWSDDATAHIGH_FPMAT_PMSDDATAHIGH_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_FWSDDATALOW 0x00522074 /* Reset Source: CORER */
+#define GLHMC_FWSDDATALOW_PMSDVALID_S 0
+#define GLHMC_FWSDDATALOW_PMSDVALID_M BIT(0)
+#define GLHMC_FWSDDATALOW_PMSDTYPE_S 1
+#define GLHMC_FWSDDATALOW_PMSDTYPE_M BIT(1)
+#define GLHMC_FWSDDATALOW_PMSDBPCOUNT_S 2
+#define GLHMC_FWSDDATALOW_PMSDBPCOUNT_M MAKEMASK(0x3FF, 2)
+#define GLHMC_FWSDDATALOW_PMSDDATALOW_S 12
+#define GLHMC_FWSDDATALOW_PMSDDATALOW_M MAKEMASK(0xFFFFF, 12)
+#define GLHMC_FWSDDATALOW_FPMAT 0x00102074 /* Reset Source: CORER */
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDVALID_S 0
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDVALID_M BIT(0)
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDTYPE_S 1
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDTYPE_M BIT(1)
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDBPCOUNT_S 2
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDBPCOUNT_M MAKEMASK(0x3FF, 2)
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDDATALOW_S 12
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDDATALOW_M MAKEMASK(0xFFFFF, 12)
+#define GLHMC_PEARPBASE(_i) (0x00524800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEARPBASE_MAX_INDEX 7
+#define GLHMC_PEARPBASE_FPMPEARPBASE_S 0
+#define GLHMC_PEARPBASE_FPMPEARPBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEARPCNT(_i) (0x00524900 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEARPCNT_MAX_INDEX 7
+#define GLHMC_PEARPCNT_FPMPEARPCNT_S 0
+#define GLHMC_PEARPCNT_FPMPEARPCNT_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEARPMAX 0x00522038 /* Reset Source: CORER */
+#define GLHMC_PEARPMAX_PMPEARPMAX_S 0
+#define GLHMC_PEARPMAX_PMPEARPMAX_M MAKEMASK(0x1FFFF, 0)
+#define GLHMC_PEARPOBJSZ 0x00522034 /* Reset Source: CORER */
+#define GLHMC_PEARPOBJSZ_PMPEARPOBJSZ_S 0
+#define GLHMC_PEARPOBJSZ_PMPEARPOBJSZ_M MAKEMASK(0x7, 0)
+#define GLHMC_PECQBASE(_i) (0x00524200 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PECQBASE_MAX_INDEX 7
+#define GLHMC_PECQBASE_FPMPECQBASE_S 0
+#define GLHMC_PECQBASE_FPMPECQBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PECQCNT(_i) (0x00524300 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PECQCNT_MAX_INDEX 7
+#define GLHMC_PECQCNT_FPMPECQCNT_S 0
+#define GLHMC_PECQCNT_FPMPECQCNT_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PECQOBJSZ 0x00522020 /* Reset Source: CORER */
+#define GLHMC_PECQOBJSZ_PMPECQOBJSZ_S 0
+#define GLHMC_PECQOBJSZ_PMPECQOBJSZ_M MAKEMASK(0xF, 0)
+#define GLHMC_PEHDRBASE(_i) (0x00526200 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHDRBASE_MAX_INDEX 7
+#define GLHMC_PEHDRBASE_GLHMC_PEHDRBASE_S 0
+#define GLHMC_PEHDRBASE_GLHMC_PEHDRBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEHDRCNT(_i) (0x00526300 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHDRCNT_MAX_INDEX 7
+#define GLHMC_PEHDRCNT_GLHMC_PEHDRCNT_S 0
+#define GLHMC_PEHDRCNT_GLHMC_PEHDRCNT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEHDRMAX 0x00522008 /* Reset Source: CORER */
+#define GLHMC_PEHDRMAX_PMPEHDRMAX_S 0
+#define GLHMC_PEHDRMAX_PMPEHDRMAX_M MAKEMASK(0x7FFFF, 0)
+#define GLHMC_PEHDRMAX_RSVD_S 19
+#define GLHMC_PEHDRMAX_RSVD_M MAKEMASK(0x1FFF, 19)
+#define GLHMC_PEHDROBJSZ 0x00522004 /* Reset Source: CORER */
+#define GLHMC_PEHDROBJSZ_PMPEHDROBJSZ_S 0
+#define GLHMC_PEHDROBJSZ_PMPEHDROBJSZ_M MAKEMASK(0xF, 0)
+#define GLHMC_PEHDROBJSZ_RSVD_S 4
+#define GLHMC_PEHDROBJSZ_RSVD_M MAKEMASK(0xFFFFFFF, 4)
+#define GLHMC_PEHTCNT(_i) (0x00524700 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHTCNT_MAX_INDEX 7
+#define GLHMC_PEHTCNT_FPMPEHTCNT_S 0
+#define GLHMC_PEHTCNT_FPMPEHTCNT_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEHTCNT_FPMAT(_i) (0x00104700 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHTCNT_FPMAT_MAX_INDEX 7
+#define GLHMC_PEHTCNT_FPMAT_FPMPEHTCNT_S 0
+#define GLHMC_PEHTCNT_FPMAT_FPMPEHTCNT_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEHTEBASE(_i) (0x00524600 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHTEBASE_MAX_INDEX 7
+#define GLHMC_PEHTEBASE_FPMPEHTEBASE_S 0
+#define GLHMC_PEHTEBASE_FPMPEHTEBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEHTEBASE_FPMAT(_i) (0x00104600 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHTEBASE_FPMAT_MAX_INDEX 7
+#define GLHMC_PEHTEBASE_FPMAT_FPMPEHTEBASE_S 0
+#define GLHMC_PEHTEBASE_FPMAT_FPMPEHTEBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEHTEOBJSZ 0x0052202C /* Reset Source: CORER */
+#define GLHMC_PEHTEOBJSZ_PMPEHTEOBJSZ_S 0
+#define GLHMC_PEHTEOBJSZ_PMPEHTEOBJSZ_M MAKEMASK(0xF, 0)
+#define GLHMC_PEHTEOBJSZ_FPMAT 0x0010202C /* Reset Source: CORER */
+#define GLHMC_PEHTEOBJSZ_FPMAT_PMPEHTEOBJSZ_S 0
+#define GLHMC_PEHTEOBJSZ_FPMAT_PMPEHTEOBJSZ_M MAKEMASK(0xF, 0)
+#define GLHMC_PEHTMAX 0x00522030 /* Reset Source: CORER */
+#define GLHMC_PEHTMAX_PMPEHTMAX_S 0
+#define GLHMC_PEHTMAX_PMPEHTMAX_M MAKEMASK(0x1FFFFF, 0)
+#define GLHMC_PEHTMAX_FPMAT 0x00102030 /* Reset Source: CORER */
+#define GLHMC_PEHTMAX_FPMAT_PMPEHTMAX_S 0
+#define GLHMC_PEHTMAX_FPMAT_PMPEHTMAX_M MAKEMASK(0x1FFFFF, 0)
+#define GLHMC_PEMDBASE(_i) (0x00526400 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEMDBASE_MAX_INDEX 7
+#define GLHMC_PEMDBASE_GLHMC_PEMDBASE_S 0
+#define GLHMC_PEMDBASE_GLHMC_PEMDBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEMDCNT(_i) (0x00526500 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEMDCNT_MAX_INDEX 7
+#define GLHMC_PEMDCNT_GLHMC_PEMDCNT_S 0
+#define GLHMC_PEMDCNT_GLHMC_PEMDCNT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEMDMAX 0x00522010 /* Reset Source: CORER */
+#define GLHMC_PEMDMAX_PMPEMDMAX_S 0
+#define GLHMC_PEMDMAX_PMPEMDMAX_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEMDMAX_RSVD_S 24
+#define GLHMC_PEMDMAX_RSVD_M MAKEMASK(0xFF, 24)
+#define GLHMC_PEMDOBJSZ 0x0052200C /* Reset Source: CORER */
+#define GLHMC_PEMDOBJSZ_PMPEMDOBJSZ_S 0
+#define GLHMC_PEMDOBJSZ_PMPEMDOBJSZ_M MAKEMASK(0xF, 0)
+#define GLHMC_PEMDOBJSZ_RSVD_S 4
+#define GLHMC_PEMDOBJSZ_RSVD_M MAKEMASK(0xFFFFFFF, 4)
+#define GLHMC_PEMRBASE(_i) (0x00524C00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEMRBASE_MAX_INDEX 7
+#define GLHMC_PEMRBASE_FPMPEMRBASE_S 0
+#define GLHMC_PEMRBASE_FPMPEMRBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEMRCNT(_i) (0x00524D00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEMRCNT_MAX_INDEX 7
+#define GLHMC_PEMRCNT_FPMPEMRSZ_S 0
+#define GLHMC_PEMRCNT_FPMPEMRSZ_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEMRMAX 0x00522040 /* Reset Source: CORER */
+#define GLHMC_PEMRMAX_PMPEMRMAX_S 0
+#define GLHMC_PEMRMAX_PMPEMRMAX_M MAKEMASK(0x7FFFFF, 0)
+#define GLHMC_PEMROBJSZ 0x0052203C /* Reset Source: CORER */
+#define GLHMC_PEMROBJSZ_PMPEMROBJSZ_S 0
+#define GLHMC_PEMROBJSZ_PMPEMROBJSZ_M MAKEMASK(0xF, 0)
+#define GLHMC_PEOOISCBASE(_i) (0x00526600 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEOOISCBASE_MAX_INDEX 7
+#define GLHMC_PEOOISCBASE_GLHMC_PEOOISCBASE_S 0
+#define GLHMC_PEOOISCBASE_GLHMC_PEOOISCBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEOOISCCNT(_i) (0x00526700 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEOOISCCNT_MAX_INDEX 7
+#define GLHMC_PEOOISCCNT_GLHMC_PEOOISCCNT_S 0
+#define GLHMC_PEOOISCCNT_GLHMC_PEOOISCCNT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEOOISCFFLBASE(_i) (0x00526C00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEOOISCFFLBASE_MAX_INDEX 7
+#define GLHMC_PEOOISCFFLBASE_GLHMC_PEOOISCFFLBASE_S 0
+#define GLHMC_PEOOISCFFLBASE_GLHMC_PEOOISCFFLBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEOOISCFFLCNT_PMAT(_i) (0x00526D00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEOOISCFFLCNT_PMAT_MAX_INDEX 7
+#define GLHMC_PEOOISCFFLCNT_PMAT_FPMPEOOISCFLCNT_S 0
+#define GLHMC_PEOOISCFFLCNT_PMAT_FPMPEOOISCFLCNT_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEOOISCFFLMAX 0x005220A4 /* Reset Source: CORER */
+#define GLHMC_PEOOISCFFLMAX_PMPEOOISCFFLMAX_S 0
+#define GLHMC_PEOOISCFFLMAX_PMPEOOISCFFLMAX_M MAKEMASK(0x7FFFF, 0)
+#define GLHMC_PEOOISCFFLMAX_RSVD_S 19
+#define GLHMC_PEOOISCFFLMAX_RSVD_M MAKEMASK(0x1FFF, 19)
+#define GLHMC_PEOOISCMAX 0x00522018 /* Reset Source: CORER */
+#define GLHMC_PEOOISCMAX_PMPEOOISCMAX_S 0
+#define GLHMC_PEOOISCMAX_PMPEOOISCMAX_M MAKEMASK(0x7FFFF, 0)
+#define GLHMC_PEOOISCMAX_RSVD_S 19
+#define GLHMC_PEOOISCMAX_RSVD_M MAKEMASK(0x1FFF, 19)
+#define GLHMC_PEOOISCOBJSZ 0x00522014 /* Reset Source: CORER */
+#define GLHMC_PEOOISCOBJSZ_PMPEOOISCOBJSZ_S 0
+#define GLHMC_PEOOISCOBJSZ_PMPEOOISCOBJSZ_M MAKEMASK(0xF, 0)
+#define GLHMC_PEOOISCOBJSZ_RSVD_S 4
+#define GLHMC_PEOOISCOBJSZ_RSVD_M MAKEMASK(0xFFFFFFF, 4)
+#define GLHMC_PEPBLBASE(_i) (0x00525800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEPBLBASE_MAX_INDEX 7
+#define GLHMC_PEPBLBASE_FPMPEPBLBASE_S 0
+#define GLHMC_PEPBLBASE_FPMPEPBLBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEPBLCNT(_i) (0x00525900 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEPBLCNT_MAX_INDEX 7
+#define GLHMC_PEPBLCNT_FPMPEPBLCNT_S 0
+#define GLHMC_PEPBLCNT_FPMPEPBLCNT_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEPBLMAX 0x0052206C /* Reset Source: CORER */
+#define GLHMC_PEPBLMAX_PMPEPBLMAX_S 0
+#define GLHMC_PEPBLMAX_PMPEPBLMAX_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEQ1BASE(_i) (0x00525200 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQ1BASE_MAX_INDEX 7
+#define GLHMC_PEQ1BASE_FPMPEQ1BASE_S 0
+#define GLHMC_PEQ1BASE_FPMPEQ1BASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEQ1CNT(_i) (0x00525300 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQ1CNT_MAX_INDEX 7
+#define GLHMC_PEQ1CNT_FPMPEQ1CNT_S 0
+#define GLHMC_PEQ1CNT_FPMPEQ1CNT_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEQ1FLBASE(_i) (0x00525400 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQ1FLBASE_MAX_INDEX 7
+#define GLHMC_PEQ1FLBASE_FPMPEQ1FLBASE_S 0
+#define GLHMC_PEQ1FLBASE_FPMPEQ1FLBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEQ1FLMAX 0x00522058 /* Reset Source: CORER */
+#define GLHMC_PEQ1FLMAX_PMPEQ1FLMAX_S 0
+#define GLHMC_PEQ1FLMAX_PMPEQ1FLMAX_M MAKEMASK(0x3FFFFFF, 0)
+#define GLHMC_PEQ1MAX 0x00522054 /* Reset Source: CORER */
+#define GLHMC_PEQ1MAX_PMPEQ1MAX_S 0
+#define GLHMC_PEQ1MAX_PMPEQ1MAX_M MAKEMASK(0xFFFFFFF, 0)
+#define GLHMC_PEQ1OBJSZ 0x00522050 /* Reset Source: CORER */
+#define GLHMC_PEQ1OBJSZ_PMPEQ1OBJSZ_S 0
+#define GLHMC_PEQ1OBJSZ_PMPEQ1OBJSZ_M MAKEMASK(0xF, 0)
+#define GLHMC_PEQPBASE(_i) (0x00524000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQPBASE_MAX_INDEX 7
+#define GLHMC_PEQPBASE_FPMPEQPBASE_S 0
+#define GLHMC_PEQPBASE_FPMPEQPBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEQPCNT(_i) (0x00524100 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQPCNT_MAX_INDEX 7
+#define GLHMC_PEQPCNT_FPMPEQPCNT_S 0
+#define GLHMC_PEQPCNT_FPMPEQPCNT_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEQPOBJSZ 0x0052201C /* Reset Source: CORER */
+#define GLHMC_PEQPOBJSZ_PMPEQPOBJSZ_S 0
+#define GLHMC_PEQPOBJSZ_PMPEQPOBJSZ_M MAKEMASK(0xF, 0)
+#define GLHMC_PERRFBASE(_i) (0x00526800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PERRFBASE_MAX_INDEX 7
+#define GLHMC_PERRFBASE_GLHMC_PERRFBASE_S 0
+#define GLHMC_PERRFBASE_GLHMC_PERRFBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PERRFCNT(_i) (0x00526900 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PERRFCNT_MAX_INDEX 7
+#define GLHMC_PERRFCNT_GLHMC_PERRFCNT_S 0
+#define GLHMC_PERRFCNT_GLHMC_PERRFCNT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PERRFFLBASE(_i) (0x00526A00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PERRFFLBASE_MAX_INDEX 7
+#define GLHMC_PERRFFLBASE_GLHMC_PERRFFLBASE_S 0
+#define GLHMC_PERRFFLBASE_GLHMC_PERRFFLBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PERRFFLCNT_PMAT(_i) (0x00526B00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PERRFFLCNT_PMAT_MAX_INDEX 7
+#define GLHMC_PERRFFLCNT_PMAT_FPMPERRFFLCNT_S 0
+#define GLHMC_PERRFFLCNT_PMAT_FPMPERRFFLCNT_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PERRFFLMAX 0x005220A0 /* Reset Source: CORER */
+#define GLHMC_PERRFFLMAX_PMPERRFFLMAX_S 0
+#define GLHMC_PERRFFLMAX_PMPERRFFLMAX_M MAKEMASK(0x3FFFFFF, 0)
+#define GLHMC_PERRFFLMAX_RSVD_S 26
+#define GLHMC_PERRFFLMAX_RSVD_M MAKEMASK(0x3F, 26)
+#define GLHMC_PERRFMAX 0x0052209C /* Reset Source: CORER */
+#define GLHMC_PERRFMAX_PMPERRFMAX_S 0
+#define GLHMC_PERRFMAX_PMPERRFMAX_M MAKEMASK(0xFFFFFFF, 0)
+#define GLHMC_PERRFMAX_RSVD_S 28
+#define GLHMC_PERRFMAX_RSVD_M MAKEMASK(0xF, 28)
+#define GLHMC_PERRFOBJSZ 0x00522098 /* Reset Source: CORER */
+#define GLHMC_PERRFOBJSZ_PMPERRFOBJSZ_S 0
+#define GLHMC_PERRFOBJSZ_PMPERRFOBJSZ_M MAKEMASK(0xF, 0)
+#define GLHMC_PERRFOBJSZ_RSVD_S 4
+#define GLHMC_PERRFOBJSZ_RSVD_M MAKEMASK(0xFFFFFFF, 4)
+#define GLHMC_PETIMERBASE(_i) (0x00525A00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PETIMERBASE_MAX_INDEX 7
+#define GLHMC_PETIMERBASE_FPMPETIMERBASE_S 0
+#define GLHMC_PETIMERBASE_FPMPETIMERBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PETIMERCNT(_i) (0x00525B00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PETIMERCNT_MAX_INDEX 7
+#define GLHMC_PETIMERCNT_FPMPETIMERCNT_S 0
+#define GLHMC_PETIMERCNT_FPMPETIMERCNT_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PETIMERMAX 0x00522084 /* Reset Source: CORER */
+#define GLHMC_PETIMERMAX_PMPETIMERMAX_S 0
+#define GLHMC_PETIMERMAX_PMPETIMERMAX_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PETIMEROBJSZ 0x00522080 /* Reset Source: CORER */
+#define GLHMC_PETIMEROBJSZ_PMPETIMEROBJSZ_S 0
+#define GLHMC_PETIMEROBJSZ_PMPETIMEROBJSZ_M MAKEMASK(0xF, 0)
+#define GLHMC_PEXFBASE(_i) (0x00524E00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEXFBASE_MAX_INDEX 7
+#define GLHMC_PEXFBASE_FPMPEXFBASE_S 0
+#define GLHMC_PEXFBASE_FPMPEXFBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEXFCNT(_i) (0x00524F00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEXFCNT_MAX_INDEX 7
+#define GLHMC_PEXFCNT_FPMPEXFCNT_S 0
+#define GLHMC_PEXFCNT_FPMPEXFCNT_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEXFFLBASE(_i) (0x00525000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEXFFLBASE_MAX_INDEX 7
+#define GLHMC_PEXFFLBASE_FPMPEXFFLBASE_S 0
+#define GLHMC_PEXFFLBASE_FPMPEXFFLBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEXFFLMAX 0x0052204C /* Reset Source: CORER */
+#define GLHMC_PEXFFLMAX_PMPEXFFLMAX_S 0
+#define GLHMC_PEXFFLMAX_PMPEXFFLMAX_M MAKEMASK(0x3FFFFFF, 0)
+#define GLHMC_PEXFMAX 0x00522048 /* Reset Source: CORER */
+#define GLHMC_PEXFMAX_PMPEXFMAX_S 0
+#define GLHMC_PEXFMAX_PMPEXFMAX_M MAKEMASK(0xFFFFFFF, 0)
+#define GLHMC_PEXFOBJSZ 0x00522044 /* Reset Source: CORER */
+#define GLHMC_PEXFOBJSZ_PMPEXFOBJSZ_S 0
+#define GLHMC_PEXFOBJSZ_PMPEXFOBJSZ_M MAKEMASK(0xF, 0)
+#define GLHMC_PFPESDPART(_i) (0x00520880 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PFPESDPART_MAX_INDEX 7
+#define GLHMC_PFPESDPART_PMSDBASE_S 0
+#define GLHMC_PFPESDPART_PMSDBASE_M MAKEMASK(0xFFF, 0)
+#define GLHMC_PFPESDPART_PMSDSIZE_S 16
+#define GLHMC_PFPESDPART_PMSDSIZE_M MAKEMASK(0x1FFF, 16)
+#define GLHMC_PFPESDPART_FPMAT(_i) (0x00100880 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PFPESDPART_FPMAT_MAX_INDEX 7
+#define GLHMC_PFPESDPART_FPMAT_PMSDBASE_S 0
+#define GLHMC_PFPESDPART_FPMAT_PMSDBASE_M MAKEMASK(0xFFF, 0)
+#define GLHMC_PFPESDPART_FPMAT_PMSDSIZE_S 16
+#define GLHMC_PFPESDPART_FPMAT_PMSDSIZE_M MAKEMASK(0x1FFF, 16)
+#define GLHMC_SDPART(_i) (0x00520800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_SDPART_MAX_INDEX 7
+#define GLHMC_SDPART_PMSDBASE_S 0
+#define GLHMC_SDPART_PMSDBASE_M MAKEMASK(0xFFF, 0)
+#define GLHMC_SDPART_PMSDSIZE_S 16
+#define GLHMC_SDPART_PMSDSIZE_M MAKEMASK(0x1FFF, 16)
+#define GLHMC_SDPART_FPMAT(_i) (0x00100800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_SDPART_FPMAT_MAX_INDEX 7
+#define GLHMC_SDPART_FPMAT_PMSDBASE_S 0
+#define GLHMC_SDPART_FPMAT_PMSDBASE_M MAKEMASK(0xFFF, 0)
+#define GLHMC_SDPART_FPMAT_PMSDSIZE_S 16
+#define GLHMC_SDPART_FPMAT_PMSDSIZE_M MAKEMASK(0x1FFF, 16)
+#define GLHMC_VFAPBVTINUSEBASE(_i) (0x0052CA00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFAPBVTINUSEBASE_MAX_INDEX 31
+#define GLHMC_VFAPBVTINUSEBASE_FPMAPBINUSEBASE_S 0
+#define GLHMC_VFAPBVTINUSEBASE_FPMAPBINUSEBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFCEQPART(_i) (0x00502F00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFCEQPART_MAX_INDEX 31
+#define GLHMC_VFCEQPART_PMCEQBASE_S 0
+#define GLHMC_VFCEQPART_PMCEQBASE_M MAKEMASK(0x3FF, 0)
+#define GLHMC_VFCEQPART_PMCEQSIZE_S 16
+#define GLHMC_VFCEQPART_PMCEQSIZE_M MAKEMASK(0x3FF, 16)
+#define GLHMC_VFDBCQPART(_i) (0x00502E00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFDBCQPART_MAX_INDEX 31
+#define GLHMC_VFDBCQPART_PMDBCQBASE_S 0
+#define GLHMC_VFDBCQPART_PMDBCQBASE_M MAKEMASK(0x3FFF, 0)
+#define GLHMC_VFDBCQPART_PMDBCQSIZE_S 16
+#define GLHMC_VFDBCQPART_PMDBCQSIZE_M MAKEMASK(0x7FFF, 16)
+#define GLHMC_VFDBQPPART(_i) (0x00504520 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFDBQPPART_MAX_INDEX 31
+#define GLHMC_VFDBQPPART_PMDBQPBASE_S 0
+#define GLHMC_VFDBQPPART_PMDBQPBASE_M MAKEMASK(0x3FFF, 0)
+#define GLHMC_VFDBQPPART_PMDBQPSIZE_S 16
+#define GLHMC_VFDBQPPART_PMDBQPSIZE_M MAKEMASK(0x7FFF, 16)
+#define GLHMC_VFFSIAVBASE(_i) (0x0052D600 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFFSIAVBASE_MAX_INDEX 31
+#define GLHMC_VFFSIAVBASE_FPMFSIAVBASE_S 0
+#define GLHMC_VFFSIAVBASE_FPMFSIAVBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFFSIAVCNT(_i) (0x0052D700 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFFSIAVCNT_MAX_INDEX 31
+#define GLHMC_VFFSIAVCNT_FPMFSIAVCNT_S 0
+#define GLHMC_VFFSIAVCNT_FPMFSIAVCNT_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFFSIMCBASE(_i) (0x0052E000 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFFSIMCBASE_MAX_INDEX 31
+#define GLHMC_VFFSIMCBASE_FPMFSIMCBASE_S 0
+#define GLHMC_VFFSIMCBASE_FPMFSIMCBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFFSIMCCNT(_i) (0x0052E100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFFSIMCCNT_MAX_INDEX 31
+#define GLHMC_VFFSIMCCNT_FPMFSIMCSZ_S 0
+#define GLHMC_VFFSIMCCNT_FPMFSIMCSZ_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPDINV(_i) (0x00528300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPDINV_MAX_INDEX 31
+#define GLHMC_VFPDINV_PMSDIDX_S 0
+#define GLHMC_VFPDINV_PMSDIDX_M MAKEMASK(0xFFF, 0)
+#define GLHMC_VFPDINV_PMSDPARTSEL_S 15
+#define GLHMC_VFPDINV_PMSDPARTSEL_M BIT(15)
+#define GLHMC_VFPDINV_PMPDIDX_S 16
+#define GLHMC_VFPDINV_PMPDIDX_M MAKEMASK(0x1FF, 16)
+#define GLHMC_VFPDINV_FPMAT(_i) (0x00108300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPDINV_FPMAT_MAX_INDEX 31
+#define GLHMC_VFPDINV_FPMAT_PMSDIDX_S 0
+#define GLHMC_VFPDINV_FPMAT_PMSDIDX_M MAKEMASK(0xFFF, 0)
+#define GLHMC_VFPDINV_FPMAT_PMSDPARTSEL_S 15
+#define GLHMC_VFPDINV_FPMAT_PMSDPARTSEL_M BIT(15)
+#define GLHMC_VFPDINV_FPMAT_PMPDIDX_S 16
+#define GLHMC_VFPDINV_FPMAT_PMPDIDX_M MAKEMASK(0x1FF, 16)
+#define GLHMC_VFPEARPBASE(_i) (0x0052C800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEARPBASE_MAX_INDEX 31
+#define GLHMC_VFPEARPBASE_FPMPEARPBASE_S 0
+#define GLHMC_VFPEARPBASE_FPMPEARPBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEARPCNT(_i) (0x0052C900 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEARPCNT_MAX_INDEX 31
+#define GLHMC_VFPEARPCNT_FPMPEARPCNT_S 0
+#define GLHMC_VFPEARPCNT_FPMPEARPCNT_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPECQBASE(_i) (0x0052C200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPECQBASE_MAX_INDEX 31
+#define GLHMC_VFPECQBASE_FPMPECQBASE_S 0
+#define GLHMC_VFPECQBASE_FPMPECQBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPECQCNT(_i) (0x0052C300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPECQCNT_MAX_INDEX 31
+#define GLHMC_VFPECQCNT_FPMPECQCNT_S 0
+#define GLHMC_VFPECQCNT_FPMPECQCNT_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEHDRBASE(_i) (0x0052E200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHDRBASE_MAX_INDEX 31
+#define GLHMC_VFPEHDRBASE_GLHMC_PEHDRBASE_S 0
+#define GLHMC_VFPEHDRBASE_GLHMC_PEHDRBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEHDRCNT(_i) (0x0052E300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHDRCNT_MAX_INDEX 31
+#define GLHMC_VFPEHDRCNT_GLHMC_PEHDRCNT_S 0
+#define GLHMC_VFPEHDRCNT_GLHMC_PEHDRCNT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEHTCNT(_i) (0x0052C700 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHTCNT_MAX_INDEX 31
+#define GLHMC_VFPEHTCNT_FPMPEHTCNT_S 0
+#define GLHMC_VFPEHTCNT_FPMPEHTCNT_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEHTCNT_FPMAT(_i) (0x0010c700 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHTCNT_FPMAT_MAX_INDEX 31
+#define GLHMC_VFPEHTCNT_FPMAT_FPMPEHTCNT_S 0
+#define GLHMC_VFPEHTCNT_FPMAT_FPMPEHTCNT_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEHTEBASE(_i) (0x0052C600 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHTEBASE_MAX_INDEX 31
+#define GLHMC_VFPEHTEBASE_FPMPEHTEBASE_S 0
+#define GLHMC_VFPEHTEBASE_FPMPEHTEBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEHTEBASE_FPMAT(_i) (0x0010C600 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHTEBASE_FPMAT_MAX_INDEX 31
+#define GLHMC_VFPEHTEBASE_FPMAT_FPMPEHTEBASE_S 0
+#define GLHMC_VFPEHTEBASE_FPMAT_FPMPEHTEBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEMDBASE(_i) (0x0052E400 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEMDBASE_MAX_INDEX 31
+#define GLHMC_VFPEMDBASE_GLHMC_PEMDBASE_S 0
+#define GLHMC_VFPEMDBASE_GLHMC_PEMDBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEMDCNT(_i) (0x0052E500 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEMDCNT_MAX_INDEX 31
+#define GLHMC_VFPEMDCNT_GLHMC_PEMDCNT_S 0
+#define GLHMC_VFPEMDCNT_GLHMC_PEMDCNT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEMRBASE(_i) (0x0052CC00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEMRBASE_MAX_INDEX 31
+#define GLHMC_VFPEMRBASE_FPMPEMRBASE_S 0
+#define GLHMC_VFPEMRBASE_FPMPEMRBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEMRCNT(_i) (0x0052CD00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEMRCNT_MAX_INDEX 31
+#define GLHMC_VFPEMRCNT_FPMPEMRSZ_S 0
+#define GLHMC_VFPEMRCNT_FPMPEMRSZ_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEOOISCBASE(_i) (0x0052E600 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEOOISCBASE_MAX_INDEX 31
+#define GLHMC_VFPEOOISCBASE_GLHMC_PEOOISCBASE_S 0
+#define GLHMC_VFPEOOISCBASE_GLHMC_PEOOISCBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEOOISCCNT(_i) (0x0052E700 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEOOISCCNT_MAX_INDEX 31
+#define GLHMC_VFPEOOISCCNT_GLHMC_PEOOISCCNT_S 0
+#define GLHMC_VFPEOOISCCNT_GLHMC_PEOOISCCNT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEOOISCFFLBASE(_i) (0x0052EC00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEOOISCFFLBASE_MAX_INDEX 31
+#define GLHMC_VFPEOOISCFFLBASE_GLHMC_PEOOISCFFLBASE_S 0
+#define GLHMC_VFPEOOISCFFLBASE_GLHMC_PEOOISCFFLBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEPBLBASE(_i) (0x0052D800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEPBLBASE_MAX_INDEX 31
+#define GLHMC_VFPEPBLBASE_FPMPEPBLBASE_S 0
+#define GLHMC_VFPEPBLBASE_FPMPEPBLBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEPBLCNT(_i) (0x0052D900 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEPBLCNT_MAX_INDEX 31
+#define GLHMC_VFPEPBLCNT_FPMPEPBLCNT_S 0
+#define GLHMC_VFPEPBLCNT_FPMPEPBLCNT_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEQ1BASE(_i) (0x0052D200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQ1BASE_MAX_INDEX 31
+#define GLHMC_VFPEQ1BASE_FPMPEQ1BASE_S 0
+#define GLHMC_VFPEQ1BASE_FPMPEQ1BASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEQ1CNT(_i) (0x0052D300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQ1CNT_MAX_INDEX 31
+#define GLHMC_VFPEQ1CNT_FPMPEQ1CNT_S 0
+#define GLHMC_VFPEQ1CNT_FPMPEQ1CNT_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEQ1FLBASE(_i) (0x0052D400 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQ1FLBASE_MAX_INDEX 31
+#define GLHMC_VFPEQ1FLBASE_FPMPEQ1FLBASE_S 0
+#define GLHMC_VFPEQ1FLBASE_FPMPEQ1FLBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEQPBASE(_i) (0x0052C000 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQPBASE_MAX_INDEX 31
+#define GLHMC_VFPEQPBASE_FPMPEQPBASE_S 0
+#define GLHMC_VFPEQPBASE_FPMPEQPBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEQPCNT(_i) (0x0052C100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQPCNT_MAX_INDEX 31
+#define GLHMC_VFPEQPCNT_FPMPEQPCNT_S 0
+#define GLHMC_VFPEQPCNT_FPMPEQPCNT_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPERRFBASE(_i) (0x0052E800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPERRFBASE_MAX_INDEX 31
+#define GLHMC_VFPERRFBASE_GLHMC_PERRFBASE_S 0
+#define GLHMC_VFPERRFBASE_GLHMC_PERRFBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPERRFCNT(_i) (0x0052E900 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPERRFCNT_MAX_INDEX 31
+#define GLHMC_VFPERRFCNT_GLHMC_PERRFCNT_S 0
+#define GLHMC_VFPERRFCNT_GLHMC_PERRFCNT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPERRFFLBASE(_i) (0x0052EA00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPERRFFLBASE_MAX_INDEX 31
+#define GLHMC_VFPERRFFLBASE_GLHMC_PERRFFLBASE_S 0
+#define GLHMC_VFPERRFFLBASE_GLHMC_PERRFFLBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPETIMERBASE(_i) (0x0052DA00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPETIMERBASE_MAX_INDEX 31
+#define GLHMC_VFPETIMERBASE_FPMPETIMERBASE_S 0
+#define GLHMC_VFPETIMERBASE_FPMPETIMERBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPETIMERCNT(_i) (0x0052DB00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPETIMERCNT_MAX_INDEX 31
+#define GLHMC_VFPETIMERCNT_FPMPETIMERCNT_S 0
+#define GLHMC_VFPETIMERCNT_FPMPETIMERCNT_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEXFBASE(_i) (0x0052CE00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEXFBASE_MAX_INDEX 31
+#define GLHMC_VFPEXFBASE_FPMPEXFBASE_S 0
+#define GLHMC_VFPEXFBASE_FPMPEXFBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEXFCNT(_i) (0x0052CF00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEXFCNT_MAX_INDEX 31
+#define GLHMC_VFPEXFCNT_FPMPEXFCNT_S 0
+#define GLHMC_VFPEXFCNT_FPMPEXFCNT_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEXFFLBASE(_i) (0x0052D000 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEXFFLBASE_MAX_INDEX 31
+#define GLHMC_VFPEXFFLBASE_FPMPEXFFLBASE_S 0
+#define GLHMC_VFPEXFFLBASE_FPMPEXFFLBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFSDDATAHIGH(_i) (0x00528200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDDATAHIGH_MAX_INDEX 31
+#define GLHMC_VFSDDATAHIGH_PMSDDATAHIGH_S 0
+#define GLHMC_VFSDDATAHIGH_PMSDDATAHIGH_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFSDDATAHIGH_FPMAT(_i) (0x00108200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDDATAHIGH_FPMAT_MAX_INDEX 31
+#define GLHMC_VFSDDATAHIGH_FPMAT_PMSDDATAHIGH_S 0
+#define GLHMC_VFSDDATAHIGH_FPMAT_PMSDDATAHIGH_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFSDDATALOW(_i) (0x00528100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDDATALOW_MAX_INDEX 31
+#define GLHMC_VFSDDATALOW_PMSDVALID_S 0
+#define GLHMC_VFSDDATALOW_PMSDVALID_M BIT(0)
+#define GLHMC_VFSDDATALOW_PMSDTYPE_S 1
+#define GLHMC_VFSDDATALOW_PMSDTYPE_M BIT(1)
+#define GLHMC_VFSDDATALOW_PMSDBPCOUNT_S 2
+#define GLHMC_VFSDDATALOW_PMSDBPCOUNT_M MAKEMASK(0x3FF, 2)
+#define GLHMC_VFSDDATALOW_PMSDDATALOW_S 12
+#define GLHMC_VFSDDATALOW_PMSDDATALOW_M MAKEMASK(0xFFFFF, 12)
+#define GLHMC_VFSDDATALOW_FPMAT(_i) (0x00108100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDDATALOW_FPMAT_MAX_INDEX 31
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDVALID_S 0
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDVALID_M BIT(0)
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDTYPE_S 1
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDTYPE_M BIT(1)
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDBPCOUNT_S 2
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDBPCOUNT_M MAKEMASK(0x3FF, 2)
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDDATALOW_S 12
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDDATALOW_M MAKEMASK(0xFFFFF, 12)
+#define GLHMC_VFSDPART(_i) (0x00528800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDPART_MAX_INDEX 31
+#define GLHMC_VFSDPART_PMSDBASE_S 0
+#define GLHMC_VFSDPART_PMSDBASE_M MAKEMASK(0xFFF, 0)
+#define GLHMC_VFSDPART_PMSDSIZE_S 16
+#define GLHMC_VFSDPART_PMSDSIZE_M MAKEMASK(0x1FFF, 16)
+#define GLHMC_VFSDPART_FPMAT(_i) (0x00108800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDPART_FPMAT_MAX_INDEX 31
+#define GLHMC_VFSDPART_FPMAT_PMSDBASE_S 0
+#define GLHMC_VFSDPART_FPMAT_PMSDBASE_M MAKEMASK(0xFFF, 0)
+#define GLHMC_VFSDPART_FPMAT_PMSDSIZE_S 16
+#define GLHMC_VFSDPART_FPMAT_PMSDSIZE_M MAKEMASK(0x1FFF, 16)
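+/*
+ * Indexed registers such as GLHMC_VFSDPART(_i) expand to a base address
+ * plus a 4-byte stride; the matching _MAX_INDEX define bounds the valid
+ * range.  A minimal iteration sketch (rd32() again assumed):
+ *
+ *	for (i = 0; i <= GLHMC_VFSDPART_MAX_INDEX; i++)
+ *		part = rd32(hw, GLHMC_VFSDPART(i));
+ */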
+#define GLMDOC_CACHESIZE 0x0051C06C /* Reset Source: CORER */
+#define GLMDOC_CACHESIZE_WORD_SIZE_S 0
+#define GLMDOC_CACHESIZE_WORD_SIZE_M MAKEMASK(0xFF, 0)
+#define GLMDOC_CACHESIZE_SETS_S 8
+#define GLMDOC_CACHESIZE_SETS_M MAKEMASK(0xFFF, 8)
+#define GLMDOC_CACHESIZE_WAYS_S 20
+#define GLMDOC_CACHESIZE_WAYS_M MAKEMASK(0xF, 20)
+#define GLPBLOC0_CACHESIZE 0x00518074 /* Reset Source: CORER */
+#define GLPBLOC0_CACHESIZE_WORD_SIZE_S 0
+#define GLPBLOC0_CACHESIZE_WORD_SIZE_M MAKEMASK(0xFF, 0)
+#define GLPBLOC0_CACHESIZE_SETS_S 8
+#define GLPBLOC0_CACHESIZE_SETS_M MAKEMASK(0xFFF, 8)
+#define GLPBLOC0_CACHESIZE_WAYS_S 20
+#define GLPBLOC0_CACHESIZE_WAYS_M MAKEMASK(0xF, 20)
+#define GLPBLOC1_CACHESIZE 0x0051A074 /* Reset Source: CORER */
+#define GLPBLOC1_CACHESIZE_WORD_SIZE_S 0
+#define GLPBLOC1_CACHESIZE_WORD_SIZE_M MAKEMASK(0xFF, 0)
+#define GLPBLOC1_CACHESIZE_SETS_S 8
+#define GLPBLOC1_CACHESIZE_SETS_M MAKEMASK(0xFFF, 8)
+#define GLPBLOC1_CACHESIZE_WAYS_S 20
+#define GLPBLOC1_CACHESIZE_WAYS_M MAKEMASK(0xF, 20)
+#define GLPDOC_CACHESIZE 0x00530048 /* Reset Source: CORER */
+#define GLPDOC_CACHESIZE_WORD_SIZE_S 0
+#define GLPDOC_CACHESIZE_WORD_SIZE_M MAKEMASK(0xFF, 0)
+#define GLPDOC_CACHESIZE_SETS_S 8
+#define GLPDOC_CACHESIZE_SETS_M MAKEMASK(0xFFF, 8)
+#define GLPDOC_CACHESIZE_WAYS_S 20
+#define GLPDOC_CACHESIZE_WAYS_M MAKEMASK(0xF, 20)
+#define GLPDOC_CACHESIZE_FPMAT 0x00110088 /* Reset Source: CORER */
+#define GLPDOC_CACHESIZE_FPMAT_WORD_SIZE_S 0
+#define GLPDOC_CACHESIZE_FPMAT_WORD_SIZE_M MAKEMASK(0xFF, 0)
+#define GLPDOC_CACHESIZE_FPMAT_SETS_S 8
+#define GLPDOC_CACHESIZE_FPMAT_SETS_M MAKEMASK(0xFFF, 8)
+#define GLPDOC_CACHESIZE_FPMAT_WAYS_S 20
+#define GLPDOC_CACHESIZE_FPMAT_WAYS_M MAKEMASK(0xF, 20)
+#define GLPEOC0_CACHESIZE 0x005140A8 /* Reset Source: CORER */
+#define GLPEOC0_CACHESIZE_WORD_SIZE_S 0
+#define GLPEOC0_CACHESIZE_WORD_SIZE_M MAKEMASK(0xFF, 0)
+#define GLPEOC0_CACHESIZE_SETS_S 8
+#define GLPEOC0_CACHESIZE_SETS_M MAKEMASK(0xFFF, 8)
+#define GLPEOC0_CACHESIZE_WAYS_S 20
+#define GLPEOC0_CACHESIZE_WAYS_M MAKEMASK(0xF, 20)
+#define GLPEOC1_CACHESIZE 0x005160A8 /* Reset Source: CORER */
+#define GLPEOC1_CACHESIZE_WORD_SIZE_S 0
+#define GLPEOC1_CACHESIZE_WORD_SIZE_M MAKEMASK(0xFF, 0)
+#define GLPEOC1_CACHESIZE_SETS_S 8
+#define GLPEOC1_CACHESIZE_SETS_M MAKEMASK(0xFFF, 8)
+#define GLPEOC1_CACHESIZE_WAYS_S 20
+#define GLPEOC1_CACHESIZE_WAYS_M MAKEMASK(0xF, 20)
+#define PFHMC_ERRORDATA 0x00520500 /* Reset Source: PFR */
+#define PFHMC_ERRORDATA_HMC_ERROR_DATA_S 0
+#define PFHMC_ERRORDATA_HMC_ERROR_DATA_M MAKEMASK(0x3FFFFFFF, 0)
+#define PFHMC_ERRORDATA_FPMAT 0x00100500 /* Reset Source: PFR */
+#define PFHMC_ERRORDATA_FPMAT_HMC_ERROR_DATA_S 0
+#define PFHMC_ERRORDATA_FPMAT_HMC_ERROR_DATA_M MAKEMASK(0x3FFFFFFF, 0)
+#define PFHMC_ERRORINFO 0x00520400 /* Reset Source: PFR */
+#define PFHMC_ERRORINFO_PMF_INDEX_S 0
+#define PFHMC_ERRORINFO_PMF_INDEX_M MAKEMASK(0x1F, 0)
+#define PFHMC_ERRORINFO_PMF_ISVF_S 7
+#define PFHMC_ERRORINFO_PMF_ISVF_M BIT(7)
+#define PFHMC_ERRORINFO_HMC_ERROR_TYPE_S 8
+#define PFHMC_ERRORINFO_HMC_ERROR_TYPE_M MAKEMASK(0xF, 8)
+#define PFHMC_ERRORINFO_HMC_OBJECT_TYPE_S 16
+#define PFHMC_ERRORINFO_HMC_OBJECT_TYPE_M MAKEMASK(0x1F, 16)
+#define PFHMC_ERRORINFO_ERROR_DETECTED_S 31
+#define PFHMC_ERRORINFO_ERROR_DETECTED_M BIT(31)
+#define PFHMC_ERRORINFO_FPMAT 0x00100400 /* Reset Source: PFR */
+#define PFHMC_ERRORINFO_FPMAT_PMF_INDEX_S 0
+#define PFHMC_ERRORINFO_FPMAT_PMF_INDEX_M MAKEMASK(0x1F, 0)
+#define PFHMC_ERRORINFO_FPMAT_PMF_ISVF_S 7
+#define PFHMC_ERRORINFO_FPMAT_PMF_ISVF_M BIT(7)
+#define PFHMC_ERRORINFO_FPMAT_HMC_ERROR_TYPE_S 8
+#define PFHMC_ERRORINFO_FPMAT_HMC_ERROR_TYPE_M MAKEMASK(0xF, 8)
+#define PFHMC_ERRORINFO_FPMAT_HMC_OBJECT_TYPE_S 16
+#define PFHMC_ERRORINFO_FPMAT_HMC_OBJECT_TYPE_M MAKEMASK(0x1F, 16)
+#define PFHMC_ERRORINFO_FPMAT_ERROR_DETECTED_S 31
+#define PFHMC_ERRORINFO_FPMAT_ERROR_DETECTED_M BIT(31)
+#define PFHMC_PDINV 0x00520300 /* Reset Source: PFR */
+#define PFHMC_PDINV_PMSDIDX_S 0
+#define PFHMC_PDINV_PMSDIDX_M MAKEMASK(0xFFF, 0)
+#define PFHMC_PDINV_PMSDPARTSEL_S 15
+#define PFHMC_PDINV_PMSDPARTSEL_M BIT(15)
+#define PFHMC_PDINV_PMPDIDX_S 16
+#define PFHMC_PDINV_PMPDIDX_M MAKEMASK(0x1FF, 16)
+#define PFHMC_PDINV_FPMAT 0x00100300 /* Reset Source: PFR */
+#define PFHMC_PDINV_FPMAT_PMSDIDX_S 0
+#define PFHMC_PDINV_FPMAT_PMSDIDX_M MAKEMASK(0xFFF, 0)
+#define PFHMC_PDINV_FPMAT_PMSDPARTSEL_S 15
+#define PFHMC_PDINV_FPMAT_PMSDPARTSEL_M BIT(15)
+#define PFHMC_PDINV_FPMAT_PMPDIDX_S 16
+#define PFHMC_PDINV_FPMAT_PMPDIDX_M MAKEMASK(0x1FF, 16)
+#define PFHMC_SDCMD 0x00520000 /* Reset Source: PFR */
+#define PFHMC_SDCMD_PMSDIDX_S 0
+#define PFHMC_SDCMD_PMSDIDX_M MAKEMASK(0xFFF, 0)
+#define PFHMC_SDCMD_PMSDPARTSEL_S 15
+#define PFHMC_SDCMD_PMSDPARTSEL_M BIT(15)
+#define PFHMC_SDCMD_PMSDWR_S 31
+#define PFHMC_SDCMD_PMSDWR_M BIT(31)
+#define PFHMC_SDCMD_FPMAT 0x00100000 /* Reset Source: PFR */
+#define PFHMC_SDCMD_FPMAT_PMSDIDX_S 0
+#define PFHMC_SDCMD_FPMAT_PMSDIDX_M MAKEMASK(0xFFF, 0)
+#define PFHMC_SDCMD_FPMAT_PMSDPARTSEL_S 15
+#define PFHMC_SDCMD_FPMAT_PMSDPARTSEL_M BIT(15)
+#define PFHMC_SDCMD_FPMAT_PMSDWR_S 31
+#define PFHMC_SDCMD_FPMAT_PMSDWR_M BIT(31)
+#define PFHMC_SDDATAHIGH 0x00520200 /* Reset Source: PFR */
+#define PFHMC_SDDATAHIGH_PMSDDATAHIGH_S 0
+#define PFHMC_SDDATAHIGH_PMSDDATAHIGH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PFHMC_SDDATAHIGH_FPMAT 0x00100200 /* Reset Source: PFR */
+#define PFHMC_SDDATAHIGH_FPMAT_PMSDDATAHIGH_S 0
+#define PFHMC_SDDATAHIGH_FPMAT_PMSDDATAHIGH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PFHMC_SDDATALOW 0x00520100 /* Reset Source: PFR */
+#define PFHMC_SDDATALOW_PMSDVALID_S 0
+#define PFHMC_SDDATALOW_PMSDVALID_M BIT(0)
+#define PFHMC_SDDATALOW_PMSDTYPE_S 1
+#define PFHMC_SDDATALOW_PMSDTYPE_M BIT(1)
+#define PFHMC_SDDATALOW_PMSDBPCOUNT_S 2
+#define PFHMC_SDDATALOW_PMSDBPCOUNT_M MAKEMASK(0x3FF, 2)
+#define PFHMC_SDDATALOW_PMSDDATALOW_S 12
+#define PFHMC_SDDATALOW_PMSDDATALOW_M MAKEMASK(0xFFFFF, 12)
+#define PFHMC_SDDATALOW_FPMAT 0x00100100 /* Reset Source: PFR */
+#define PFHMC_SDDATALOW_FPMAT_PMSDVALID_S 0
+#define PFHMC_SDDATALOW_FPMAT_PMSDVALID_M BIT(0)
+#define PFHMC_SDDATALOW_FPMAT_PMSDTYPE_S 1
+#define PFHMC_SDDATALOW_FPMAT_PMSDTYPE_M BIT(1)
+#define PFHMC_SDDATALOW_FPMAT_PMSDBPCOUNT_S 2
+#define PFHMC_SDDATALOW_FPMAT_PMSDBPCOUNT_M MAKEMASK(0x3FF, 2)
+#define PFHMC_SDDATALOW_FPMAT_PMSDDATALOW_S 12
+#define PFHMC_SDDATALOW_FPMAT_PMSDDATALOW_M MAKEMASK(0xFFFFF, 12)
+#define GL_DSI_RDPC 0x00294204 /* Reset Source: CORER */
+#define GL_DSI_RDPC_RDPC_S 0
+#define GL_DSI_RDPC_RDPC_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_DSI_REPC 0x00294208 /* Reset Source: CORER */
+#define GL_DSI_REPC_NO_DESC_CNT_S 0
+#define GL_DSI_REPC_NO_DESC_CNT_M MAKEMASK(0xFFFF, 0)
+#define GL_DSI_REPC_ERROR_CNT_S 16
+#define GL_DSI_REPC_ERROR_CNT_M MAKEMASK(0xFFFF, 16)
+#define GL_MDCK_TDAT_TCLAN 0x000FC0DC /* Reset Source: CORER */
+#define GL_MDCK_TDAT_TCLAN_WRONG_ORDER_FORMAT_DESC_S 0
+#define GL_MDCK_TDAT_TCLAN_WRONG_ORDER_FORMAT_DESC_M BIT(0)
+#define GL_MDCK_TDAT_TCLAN_UR_S 1
+#define GL_MDCK_TDAT_TCLAN_UR_M BIT(1)
+#define GL_MDCK_TDAT_TCLAN_TAIL_DESC_NOT_DDESC_EOP_NOP_S 2
+#define GL_MDCK_TDAT_TCLAN_TAIL_DESC_NOT_DDESC_EOP_NOP_M BIT(2)
+#define GL_MDCK_TDAT_TCLAN_FALSE_SCHEDULING_S 3
+#define GL_MDCK_TDAT_TCLAN_FALSE_SCHEDULING_M BIT(3)
+#define GL_MDCK_TDAT_TCLAN_TAIL_VALUE_BIGGER_THAN_RING_LEN_S 4
+#define GL_MDCK_TDAT_TCLAN_TAIL_VALUE_BIGGER_THAN_RING_LEN_M BIT(4)
+#define GL_MDCK_TDAT_TCLAN_MORE_THAN_8_DCMDS_IN_PKT_S 5
+#define GL_MDCK_TDAT_TCLAN_MORE_THAN_8_DCMDS_IN_PKT_M BIT(5)
+#define GL_MDCK_TDAT_TCLAN_NO_HEAD_UPDATE_IN_QUANTA_S 6
+#define GL_MDCK_TDAT_TCLAN_NO_HEAD_UPDATE_IN_QUANTA_M BIT(6)
+#define GL_MDCK_TDAT_TCLAN_PKT_LEN_NOT_LEGAL_S 7
+#define GL_MDCK_TDAT_TCLAN_PKT_LEN_NOT_LEGAL_M BIT(7)
+#define GL_MDCK_TDAT_TCLAN_TSO_TLEN_NOT_COHERENT_WITH_SUM_BUFS_S 8
+#define GL_MDCK_TDAT_TCLAN_TSO_TLEN_NOT_COHERENT_WITH_SUM_BUFS_M BIT(8)
+#define GL_MDCK_TDAT_TCLAN_TSO_TAIL_REACHED_BEFORE_TLEN_END_S 9
+#define GL_MDCK_TDAT_TCLAN_TSO_TAIL_REACHED_BEFORE_TLEN_END_M BIT(9)
+#define GL_MDCK_TDAT_TCLAN_TSO_MORE_THAN_3_HDRS_S 10
+#define GL_MDCK_TDAT_TCLAN_TSO_MORE_THAN_3_HDRS_M BIT(10)
+#define GL_MDCK_TDAT_TCLAN_TSO_SUM_BUFFS_LT_SUM_HDRS_S 11
+#define GL_MDCK_TDAT_TCLAN_TSO_SUM_BUFFS_LT_SUM_HDRS_M BIT(11)
+#define GL_MDCK_TDAT_TCLAN_TSO_ZERO_MSS_TLEN_HDRS_S 12
+#define GL_MDCK_TDAT_TCLAN_TSO_ZERO_MSS_TLEN_HDRS_M BIT(12)
+#define GL_MDCK_TDAT_TCLAN_TSO_CTX_DESC_IPSEC_S 13
+#define GL_MDCK_TDAT_TCLAN_TSO_CTX_DESC_IPSEC_M BIT(13)
+#define GL_MDCK_TDAT_TCLAN_SSO_COMS_NOT_WHOLE_PKT_NUM_IN_QUANTA_S 14
+#define GL_MDCK_TDAT_TCLAN_SSO_COMS_NOT_WHOLE_PKT_NUM_IN_QUANTA_M BIT(14)
+#define GL_MDCK_TDAT_TCLAN_COMS_QUANTA_BYTES_EXCEED_PKTLEN_X_64_S 15
+#define GL_MDCK_TDAT_TCLAN_COMS_QUANTA_BYTES_EXCEED_PKTLEN_X_64_M BIT(15)
+#define GL_MDCK_TDAT_TCLAN_COMS_QUANTA_CMDS_EXCEED_S 16
+#define GL_MDCK_TDAT_TCLAN_COMS_QUANTA_CMDS_EXCEED_M BIT(16)
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_TSO_DESCS_LAST_LSO_QUANTA_S 17
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_TSO_DESCS_LAST_LSO_QUANTA_M BIT(17)
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_TSO_DESCS_TLEN_S 18
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_TSO_DESCS_TLEN_M BIT(18)
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_QUANTA_FINISHED_TOO_EARLY_S 19
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_QUANTA_FINISHED_TOO_EARLY_M BIT(19)
+#define GL_MDCK_TDAT_TCLAN_COMS_NUM_PKTS_IN_QUANTA_S 20
+#define GL_MDCK_TDAT_TCLAN_COMS_NUM_PKTS_IN_QUANTA_M BIT(20)
+#define GL_PPRS_SPARE_0 0x000841A8 /* Reset Source: CORER */
+#define GL_PPRS_SPARE_0_GL_PPRS_SPARE_S 0
+#define GL_PPRS_SPARE_0_GL_PPRS_SPARE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PPRS_SPARE_1 0x000851A8 /* Reset Source: CORER */
+#define GL_PPRS_SPARE_1_GL_PPRS_SPARE_S 0
+#define GL_PPRS_SPARE_1_GL_PPRS_SPARE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PPRS_SPARE_2 0x000861A8 /* Reset Source: CORER */
+#define GL_PPRS_SPARE_2_GL_PPRS_SPARE_S 0
+#define GL_PPRS_SPARE_2_GL_PPRS_SPARE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PPRS_SPARE_3 0x000871A8 /* Reset Source: CORER */
+#define GL_PPRS_SPARE_3_GL_PPRS_SPARE_S 0
+#define GL_PPRS_SPARE_3_GL_PPRS_SPARE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLCORE_CLKCTL_H 0x000B81E8 /* Reset Source: POR */
+#define GLCORE_CLKCTL_H_UPPER_CLK_SRC_H_S 0
+#define GLCORE_CLKCTL_H_UPPER_CLK_SRC_H_M MAKEMASK(0x3, 0)
+#define GLCORE_CLKCTL_H_LOWER_CLK_SRC_H_S 2
+#define GLCORE_CLKCTL_H_LOWER_CLK_SRC_H_M MAKEMASK(0x3, 2)
+#define GLCORE_CLKCTL_H_PSM_CLK_SRC_H_S 4
+#define GLCORE_CLKCTL_H_PSM_CLK_SRC_H_M MAKEMASK(0x3, 4)
+#define GLCORE_CLKCTL_H_RXCTL_CLK_SRC_H_S 6
+#define GLCORE_CLKCTL_H_RXCTL_CLK_SRC_H_M MAKEMASK(0x3, 6)
+#define GLCORE_CLKCTL_H_UANA_CLK_SRC_H_S 8
+#define GLCORE_CLKCTL_H_UANA_CLK_SRC_H_M MAKEMASK(0x7, 8)
+#define GLCORE_CLKCTL_L 0x000B8254 /* Reset Source: POR */
+#define GLCORE_CLKCTL_L_UPPER_CLK_SRC_L_S 0
+#define GLCORE_CLKCTL_L_UPPER_CLK_SRC_L_M MAKEMASK(0x3, 0)
+#define GLCORE_CLKCTL_L_LOWER_CLK_SRC_L_S 2
+#define GLCORE_CLKCTL_L_LOWER_CLK_SRC_L_M MAKEMASK(0x3, 2)
+#define GLCORE_CLKCTL_L_PSM_CLK_SRC_L_S 4
+#define GLCORE_CLKCTL_L_PSM_CLK_SRC_L_M MAKEMASK(0x3, 4)
+#define GLCORE_CLKCTL_L_RXCTL_CLK_SRC_L_S 6
+#define GLCORE_CLKCTL_L_RXCTL_CLK_SRC_L_M MAKEMASK(0x3, 6)
+#define GLCORE_CLKCTL_L_UANA_CLK_SRC_L_S 8
+#define GLCORE_CLKCTL_L_UANA_CLK_SRC_L_M MAKEMASK(0x7, 8)
+#define GLCORE_CLKCTL_M 0x000B8258 /* Reset Source: POR */
+#define GLCORE_CLKCTL_M_UPPER_CLK_SRC_M_S 0
+#define GLCORE_CLKCTL_M_UPPER_CLK_SRC_M_M MAKEMASK(0x3, 0)
+#define GLCORE_CLKCTL_M_LOWER_CLK_SRC_M_S 2
+#define GLCORE_CLKCTL_M_LOWER_CLK_SRC_M_M MAKEMASK(0x3, 2)
+#define GLCORE_CLKCTL_M_PSM_CLK_SRC_M_S 4
+#define GLCORE_CLKCTL_M_PSM_CLK_SRC_M_M MAKEMASK(0x3, 4)
+#define GLCORE_CLKCTL_M_RXCTL_CLK_SRC_M_S 6
+#define GLCORE_CLKCTL_M_RXCTL_CLK_SRC_M_M MAKEMASK(0x3, 6)
+#define GLCORE_CLKCTL_M_UANA_CLK_SRC_M_S 8
+#define GLCORE_CLKCTL_M_UANA_CLK_SRC_M_M MAKEMASK(0x7, 8)
+#define GLFOC_CACHESIZE 0x000AA074 /* Reset Source: CORER */
+#define GLFOC_CACHESIZE_WORD_SIZE_S 0
+#define GLFOC_CACHESIZE_WORD_SIZE_M MAKEMASK(0xFF, 0)
+#define GLFOC_CACHESIZE_SETS_S 8
+#define GLFOC_CACHESIZE_SETS_M MAKEMASK(0xFFF, 8)
+#define GLFOC_CACHESIZE_WAYS_S 20
+#define GLFOC_CACHESIZE_WAYS_M MAKEMASK(0xF, 20)
+#define GLGEN_CAR_DEBUG 0x000B81C0 /* Reset Source: POR */
+#define GLGEN_CAR_DEBUG_CAR_UPPER_CORE_CLK_EN_S 0
+#define GLGEN_CAR_DEBUG_CAR_UPPER_CORE_CLK_EN_M BIT(0)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_HIU_CLK_EN_S 1
+#define GLGEN_CAR_DEBUG_CAR_PCIE_HIU_CLK_EN_M BIT(1)
+#define GLGEN_CAR_DEBUG_CAR_PE_CLK_EN_S 2
+#define GLGEN_CAR_DEBUG_CAR_PE_CLK_EN_M BIT(2)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_PRIM_CLK_ACTIVE_S 3
+#define GLGEN_CAR_DEBUG_CAR_PCIE_PRIM_CLK_ACTIVE_M BIT(3)
+#define GLGEN_CAR_DEBUG_CDC_PE_ACTIVE_S 4
+#define GLGEN_CAR_DEBUG_CDC_PE_ACTIVE_M BIT(4)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_PRST_RESET_N_S 5
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_PRST_RESET_N_M BIT(5)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_SCLR_RESET_N_S 6
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_SCLR_RESET_N_M BIT(6)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_IB_RESET_N_S 7
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_IB_RESET_N_M BIT(7)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_IMIB_RESET_N_S 8
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_IMIB_RESET_N_M BIT(8)
+#define GLGEN_CAR_DEBUG_CAR_RAW_EMP_RESET_N_S 9
+#define GLGEN_CAR_DEBUG_CAR_RAW_EMP_RESET_N_M BIT(9)
+#define GLGEN_CAR_DEBUG_CAR_RAW_GLOBAL_RESET_N_S 10
+#define GLGEN_CAR_DEBUG_CAR_RAW_GLOBAL_RESET_N_M BIT(10)
+#define GLGEN_CAR_DEBUG_CAR_RAW_LAN_POWER_GOOD_S 11
+#define GLGEN_CAR_DEBUG_CAR_RAW_LAN_POWER_GOOD_M BIT(11)
+#define GLGEN_CAR_DEBUG_CDC_IOSF_PRIMERY_RST_B_S 12
+#define GLGEN_CAR_DEBUG_CDC_IOSF_PRIMERY_RST_B_M BIT(12)
+#define GLGEN_CAR_DEBUG_GBE_GLOBALRST_B_S 13
+#define GLGEN_CAR_DEBUG_GBE_GLOBALRST_B_M BIT(13)
+#define GLGEN_CAR_DEBUG_FLEEP_AL_GLOBR_DONE_S 14
+#define GLGEN_CAR_DEBUG_FLEEP_AL_GLOBR_DONE_M BIT(14)
+#define GLGEN_CAR_DEBUG_CAR_RST_STATE_S 15
+#define GLGEN_CAR_DEBUG_CAR_RST_STATE_M MAKEMASK(0xF, 15)
+#define GLGEN_CAR_SPARE 0x000B81C4 /* Reset Source: POR */
+#define GLGEN_CAR_SPARE_SPARE_CLEAR_S 0
+#define GLGEN_CAR_SPARE_SPARE_CLEAR_M MAKEMASK(0xFFFF, 0)
+#define GLGEN_CAR_SPARE_SPARE_SET_S 16
+#define GLGEN_CAR_SPARE_SPARE_SET_M MAKEMASK(0xFFFF, 16)
+#define GLMAC_CLKSTAT 0x000B8210 /* Reset Source: POR */
+#define GLMAC_CLKSTAT_P0_CLK_SPEED_S 0
+#define GLMAC_CLKSTAT_P0_CLK_SPEED_M MAKEMASK(0xF, 0)
+#define GLMAC_CLKSTAT_P1_CLK_SPEED_S 4
+#define GLMAC_CLKSTAT_P1_CLK_SPEED_M MAKEMASK(0xF, 4)
+#define GLMAC_CLKSTAT_P2_CLK_SPEED_S 8
+#define GLMAC_CLKSTAT_P2_CLK_SPEED_M MAKEMASK(0xF, 8)
+#define GLMAC_CLKSTAT_P3_CLK_SPEED_S 12
+#define GLMAC_CLKSTAT_P3_CLK_SPEED_M MAKEMASK(0xF, 12)
+#define GLMAC_CLKSTAT_P4_CLK_SPEED_S 16
+#define GLMAC_CLKSTAT_P4_CLK_SPEED_M MAKEMASK(0xF, 16)
+#define GLMAC_CLKSTAT_P5_CLK_SPEED_S 20
+#define GLMAC_CLKSTAT_P5_CLK_SPEED_M MAKEMASK(0xF, 20)
+#define GLMAC_CLKSTAT_P6_CLK_SPEED_S 24
+#define GLMAC_CLKSTAT_P6_CLK_SPEED_M MAKEMASK(0xF, 24)
+#define GLMAC_CLKSTAT_P7_CLK_SPEED_S 28
+#define GLMAC_CLKSTAT_P7_CLK_SPEED_M MAKEMASK(0xF, 28)
+#define GLRCB_DCB_LAN_PMS 0x001223F8 /* Reset Source: CORER */
+#define GLRCB_DCB_LAN_PMS_PSM_LAN_S 0
+#define GLRCB_DCB_LAN_PMS_PSM_LAN_M MAKEMASK(0x3FFF, 0)
+#define GLRCB_DCB_RDMA_PMS 0x001223FC /* Reset Source: CORER */
+#define GLRCB_DCB_RDMA_PMS_PSM_RDMA_S 0
+#define GLRCB_DCB_RDMA_PMS_PSM_RDMA_M MAKEMASK(0x3FFF, 0)
+#define GLRLAN_MDET 0x00294200 /* Reset Source: CORER */
+#define GLRLAN_MDET_PCKT_EXTRCT_ERR_S 0
+#define GLRLAN_MDET_PCKT_EXTRCT_ERR_M BIT(0)
+#define GLTPB_100G_MAC_FC_THRESH 0x00099510 /* Reset Source: CORER */
+#define GLTPB_100G_MAC_FC_THRESH_PORT0_FC_THRESH_S 0
+#define GLTPB_100G_MAC_FC_THRESH_PORT0_FC_THRESH_M MAKEMASK(0xFFFF, 0)
+#define GLTPB_100G_MAC_FC_THRESH_PORT1_FC_THRESH_S 16
+#define GLTPB_100G_MAC_FC_THRESH_PORT1_FC_THRESH_M MAKEMASK(0xFFFF, 16)
+#define GLTPB_100G_RPB_FC_THRESH 0x0009963C /* Reset Source: CORER */
+#define GLTPB_100G_RPB_FC_THRESH_PORT0_FC_THRESH_S 0
+#define GLTPB_100G_RPB_FC_THRESH_PORT0_FC_THRESH_M MAKEMASK(0xFFFF, 0)
+#define GLTPB_100G_RPB_FC_THRESH_PORT1_FC_THRESH_S 16
+#define GLTPB_100G_RPB_FC_THRESH_PORT1_FC_THRESH_M MAKEMASK(0xFFFF, 16)
+#define GLTPB_PACING_10G 0x000994E4 /* Reset Source: CORER */
+#define GLTPB_PACING_10G_N_S 0
+#define GLTPB_PACING_10G_N_M MAKEMASK(0xFF, 0)
+#define GLTPB_PACING_10G_K_S 8
+#define GLTPB_PACING_10G_K_M MAKEMASK(0xFF, 8)
+#define GLTPB_PACING_10G_S_S 16
+#define GLTPB_PACING_10G_S_M MAKEMASK(0x1FF, 16)
+#define GLTPB_PACING_25G 0x000994E0 /* Reset Source: CORER */
+#define GLTPB_PACING_25G_N_S 0
+#define GLTPB_PACING_25G_N_M MAKEMASK(0xFF, 0)
+#define GLTPB_PACING_25G_K_S 8
+#define GLTPB_PACING_25G_K_M MAKEMASK(0xFF, 8)
+#define GLTPB_PACING_25G_S_S 16
+#define GLTPB_PACING_25G_S_M MAKEMASK(0x1FF, 16)
+#define GLTPB_PORT_PACING_SPEED 0x000994E8 /* Reset Source: CORER */
+#define GLTPB_PORT_PACING_SPEED_PORT0_SPEED_S 0
+#define GLTPB_PORT_PACING_SPEED_PORT0_SPEED_M BIT(0)
+#define GLTPB_PORT_PACING_SPEED_PORT1_SPEED_S 1
+#define GLTPB_PORT_PACING_SPEED_PORT1_SPEED_M BIT(1)
+#define GLTPB_PORT_PACING_SPEED_PORT2_SPEED_S 2
+#define GLTPB_PORT_PACING_SPEED_PORT2_SPEED_M BIT(2)
+#define GLTPB_PORT_PACING_SPEED_PORT3_SPEED_S 3
+#define GLTPB_PORT_PACING_SPEED_PORT3_SPEED_M BIT(3)
+#define GLTPB_PORT_PACING_SPEED_PORT4_SPEED_S 4
+#define GLTPB_PORT_PACING_SPEED_PORT4_SPEED_M BIT(4)
+#define GLTPB_PORT_PACING_SPEED_PORT5_SPEED_S 5
+#define GLTPB_PORT_PACING_SPEED_PORT5_SPEED_M BIT(5)
+#define GLTPB_PORT_PACING_SPEED_PORT6_SPEED_S 6
+#define GLTPB_PORT_PACING_SPEED_PORT6_SPEED_M BIT(6)
+#define GLTPB_PORT_PACING_SPEED_PORT7_SPEED_S 7
+#define GLTPB_PORT_PACING_SPEED_PORT7_SPEED_M BIT(7)
+#define GLTSYN_HH_DBG 0x000889F0 /* Reset Source: CORER */
+#define GLTSYN_HH_DBG_HH_SYNC_S 0
+#define GLTSYN_HH_DBG_HH_SYNC_M BIT(0)
+#define GLTSYN_HH_DBG_HH_LATCH_EN_S 1
+#define GLTSYN_HH_DBG_HH_LATCH_EN_M BIT(1)
+#define TPB_CFG_SCHEDULED_BC_THRESHOLD 0x00099494 /* Reset Source: CORER */
+#define TPB_CFG_SCHEDULED_BC_THRESHOLD_THRESHOLD_S 0
+#define TPB_CFG_SCHEDULED_BC_THRESHOLD_THRESHOLD_M MAKEMASK(0x7FFF, 0)
+#define GL_UFUSE_SOC 0x000A400C /* Reset Source: POR */
+#define GL_UFUSE_SOC_PORT_MODE_S 0
+#define GL_UFUSE_SOC_PORT_MODE_M MAKEMASK(0x3, 0)
+#define GL_UFUSE_SOC_BANDWIDTH_S 2
+#define GL_UFUSE_SOC_BANDWIDTH_M MAKEMASK(0x3, 2)
+#define GL_UFUSE_SOC_PE_DISABLE_S 4
+#define GL_UFUSE_SOC_PE_DISABLE_M BIT(4)
+#define GL_UFUSE_SOC_SWITCH_MODE_S 5
+#define GL_UFUSE_SOC_SWITCH_MODE_M BIT(5)
+#define GL_UFUSE_SOC_CSR_PROTECTION_ENABLE_S 6
+#define GL_UFUSE_SOC_CSR_PROTECTION_ENABLE_M BIT(6)
+#define GL_UFUSE_SOC_SERIAL_50G_S 7
+#define GL_UFUSE_SOC_SERIAL_50G_M BIT(7)
+#define GL_UFUSE_SOC_NIC_ID_S 8
+#define GL_UFUSE_SOC_NIC_ID_M BIT(8)
+#define GL_UFUSE_SOC_BLOCK_BME_TO_FW_S 9
+#define GL_UFUSE_SOC_BLOCK_BME_TO_FW_M BIT(9)
+#define GL_UFUSE_SOC_SOC_TYPE_S 10
+#define GL_UFUSE_SOC_SOC_TYPE_M BIT(10)
+#define GL_UFUSE_SOC_BTS_MODE_S 11
+#define GL_UFUSE_SOC_BTS_MODE_M BIT(11)
+#define GL_UFUSE_SOC_SPARE_FUSES_S 12
+#define GL_UFUSE_SOC_SPARE_FUSES_M MAKEMASK(0xF, 12)
+#define EMPINT_GPIO_ENA 0x000880C0 /* Reset Source: POR */
+#define EMPINT_GPIO_ENA_GPIO0_ENA_S 0
+#define EMPINT_GPIO_ENA_GPIO0_ENA_M BIT(0)
+#define EMPINT_GPIO_ENA_GPIO1_ENA_S 1
+#define EMPINT_GPIO_ENA_GPIO1_ENA_M BIT(1)
+#define EMPINT_GPIO_ENA_GPIO2_ENA_S 2
+#define EMPINT_GPIO_ENA_GPIO2_ENA_M BIT(2)
+#define EMPINT_GPIO_ENA_GPIO3_ENA_S 3
+#define EMPINT_GPIO_ENA_GPIO3_ENA_M BIT(3)
+#define EMPINT_GPIO_ENA_GPIO4_ENA_S 4
+#define EMPINT_GPIO_ENA_GPIO4_ENA_M BIT(4)
+#define EMPINT_GPIO_ENA_GPIO5_ENA_S 5
+#define EMPINT_GPIO_ENA_GPIO5_ENA_M BIT(5)
+#define EMPINT_GPIO_ENA_GPIO6_ENA_S 6
+#define EMPINT_GPIO_ENA_GPIO6_ENA_M BIT(6)
+#define GL_CLKGEN_DEBUG 0x000B8268 /* Reset Source: POR */
+#define GL_CLKGEN_DEBUG_PROBE_S 0
+#define GL_CLKGEN_DEBUG_PROBE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_CLKGEN_DEBUG_SEL 0x000B8264 /* Reset Source: POR */
+#define GL_CLKGEN_DEBUG_SEL_GL_CLKGEN_DEBUG_SEL_S 0
+#define GL_CLKGEN_DEBUG_SEL_GL_CLKGEN_DEBUG_SEL_M MAKEMASK(0xFFFF, 0)
+#define GLGEN_MAC_LINK_TOPO 0x000B81DC /* Reset Source: GLOBR */
+#define GLGEN_MAC_LINK_TOPO_LINK_TOPO_S 0
+#define GLGEN_MAC_LINK_TOPO_LINK_TOPO_M MAKEMASK(0x3, 0)
+#define GLINT_CEQCTL(_INT) (0x0015C000 + ((_INT) * 4)) /* _INT=0...2047 */ /* Reset Source: CORER */
+#define GLINT_CEQCTL_MAX_INDEX 2047
+#define GLINT_CEQCTL_MSIX_INDX_S 0
+#define GLINT_CEQCTL_MSIX_INDX_M MAKEMASK(0x7FF, 0)
+#define GLINT_CEQCTL_ITR_INDX_S 11
+#define GLINT_CEQCTL_ITR_INDX_M MAKEMASK(0x3, 11)
+#define GLINT_CEQCTL_CAUSE_ENA_S 30
+#define GLINT_CEQCTL_CAUSE_ENA_M BIT(30)
+#define GLINT_CEQCTL_INTEVENT_S 31
+#define GLINT_CEQCTL_INTEVENT_M BIT(31)
+#define GLINT_CTL 0x0016CC54 /* Reset Source: CORER */
+#define GLINT_CTL_DIS_AUTOMASK_S 0
+#define GLINT_CTL_DIS_AUTOMASK_M BIT(0)
+#define GLINT_CTL_RSVD_S 1
+#define GLINT_CTL_RSVD_M MAKEMASK(0x7FFF, 1)
+#define GLINT_CTL_ITR_GRAN_200_S 16
+#define GLINT_CTL_ITR_GRAN_200_M MAKEMASK(0xF, 16)
+#define GLINT_CTL_ITR_GRAN_100_S 20
+#define GLINT_CTL_ITR_GRAN_100_M MAKEMASK(0xF, 20)
+#define GLINT_CTL_ITR_GRAN_50_S 24
+#define GLINT_CTL_ITR_GRAN_50_M MAKEMASK(0xF, 24)
+#define GLINT_CTL_ITR_GRAN_25_S 28
+#define GLINT_CTL_ITR_GRAN_25_M MAKEMASK(0xF, 28)
+#define GLINT_DYN_CTL(_INT) (0x00160000 + ((_INT) * 4)) /* _INT=0...2047 */ /* Reset Source: PFR */
+#define GLINT_DYN_CTL_MAX_INDEX 2047
+#define GLINT_DYN_CTL_INTENA_S 0
+#define GLINT_DYN_CTL_INTENA_M BIT(0)
+#define GLINT_DYN_CTL_CLEARPBA_S 1
+#define GLINT_DYN_CTL_CLEARPBA_M BIT(1)
+#define GLINT_DYN_CTL_SWINT_TRIG_S 2
+#define GLINT_DYN_CTL_SWINT_TRIG_M BIT(2)
+#define GLINT_DYN_CTL_ITR_INDX_S 3
+#define GLINT_DYN_CTL_ITR_INDX_M MAKEMASK(0x3, 3)
+#define GLINT_DYN_CTL_INTERVAL_S 5
+#define GLINT_DYN_CTL_INTERVAL_M MAKEMASK(0xFFF, 5)
+#define GLINT_DYN_CTL_SW_ITR_INDX_ENA_S 24
+#define GLINT_DYN_CTL_SW_ITR_INDX_ENA_M BIT(24)
+#define GLINT_DYN_CTL_SW_ITR_INDX_S 25
+#define GLINT_DYN_CTL_SW_ITR_INDX_M MAKEMASK(0x3, 25)
+#define GLINT_DYN_CTL_WB_ON_ITR_S 30
+#define GLINT_DYN_CTL_WB_ON_ITR_M BIT(30)
+#define GLINT_DYN_CTL_INTENA_MSK_S 31
+#define GLINT_DYN_CTL_INTENA_MSK_M BIT(31)
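+/* Illustrative sketch: arming MSI-X vector "vector" through GLINT_DYN_CTL
+ * in the usual enable + clear-PBA + ITR-index form; "itr_idx" and the
+ * ICE_WRITE_REG accessor are assumptions for illustration:
+ *
+ *	u32 val = GLINT_DYN_CTL_INTENA_M | GLINT_DYN_CTL_CLEARPBA_M |
+ *		  (itr_idx << GLINT_DYN_CTL_ITR_INDX_S);
+ *	ICE_WRITE_REG(hw, GLINT_DYN_CTL(vector), val);
+ */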
+#define GLINT_FW_TOOL_CTL 0x0016C840 /* Reset Source: CORER */
+#define GLINT_FW_TOOL_CTL_MSIX_INDX_S 0
+#define GLINT_FW_TOOL_CTL_MSIX_INDX_M MAKEMASK(0x7FF, 0)
+#define GLINT_FW_TOOL_CTL_ITR_INDX_S 11
+#define GLINT_FW_TOOL_CTL_ITR_INDX_M MAKEMASK(0x3, 11)
+#define GLINT_FW_TOOL_CTL_CAUSE_ENA_S 30
+#define GLINT_FW_TOOL_CTL_CAUSE_ENA_M BIT(30)
+#define GLINT_FW_TOOL_CTL_INTEVENT_S 31
+#define GLINT_FW_TOOL_CTL_INTEVENT_M BIT(31)
+#define GLINT_ITR(_i, _INT) (0x00154000 + ((_i) * 8192 + (_INT) * 4)) /* _i=0...2, _INT=0...2047 */ /* Reset Source: PFR */
+#define GLINT_ITR_MAX_INDEX 2
+#define GLINT_ITR_INTERVAL_S 0
+#define GLINT_ITR_INTERVAL_M MAKEMASK(0xFFF, 0)
+#define GLINT_RATE(_INT) (0x0015A000 + ((_INT) * 4)) /* _INT=0...2047 */ /* Reset Source: PFR */
+#define GLINT_RATE_MAX_INDEX 2047
+#define GLINT_RATE_INTERVAL_S 0
+#define GLINT_RATE_INTERVAL_M MAKEMASK(0x3F, 0)
+#define GLINT_RATE_INTRL_ENA_S 6
+#define GLINT_RATE_INTRL_ENA_M BIT(6)
+#define GLINT_TSYN_PFMSTR(_i) (0x0016CCC0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLINT_TSYN_PFMSTR_MAX_INDEX 1
+#define GLINT_TSYN_PFMSTR_PF_MASTER_S 0
+#define GLINT_TSYN_PFMSTR_PF_MASTER_M MAKEMASK(0x7, 0)
+#define GLINT_TSYN_PHY 0x0016CC50 /* Reset Source: CORER */
+#define GLINT_TSYN_PHY_PHY_INDX_S 0
+#define GLINT_TSYN_PHY_PHY_INDX_M MAKEMASK(0x1F, 0)
+#define GLINT_VECT2FUNC(_INT) (0x00162000 + ((_INT) * 4)) /* _INT=0...2047 */ /* Reset Source: CORER */
+#define GLINT_VECT2FUNC_MAX_INDEX 2047
+#define GLINT_VECT2FUNC_VF_NUM_S 0
+#define GLINT_VECT2FUNC_VF_NUM_M MAKEMASK(0xFF, 0)
+#define GLINT_VECT2FUNC_PF_NUM_S 12
+#define GLINT_VECT2FUNC_PF_NUM_M MAKEMASK(0x7, 12)
+#define GLINT_VECT2FUNC_IS_PF_S 16
+#define GLINT_VECT2FUNC_IS_PF_M BIT(16)
+#define PF0INT_FW_HLP_CTL 0x0016C844 /* Reset Source: CORER */
+#define PF0INT_FW_HLP_CTL_MSIX_INDX_S 0
+#define PF0INT_FW_HLP_CTL_MSIX_INDX_M MAKEMASK(0x7FF, 0)
+#define PF0INT_FW_HLP_CTL_ITR_INDX_S 11
+#define PF0INT_FW_HLP_CTL_ITR_INDX_M MAKEMASK(0x3, 11)
+#define PF0INT_FW_HLP_CTL_CAUSE_ENA_S 30
+#define PF0INT_FW_HLP_CTL_CAUSE_ENA_M BIT(30)
+#define PF0INT_FW_HLP_CTL_INTEVENT_S 31
+#define PF0INT_FW_HLP_CTL_INTEVENT_M BIT(31)
+#define PF0INT_FW_PSM_CTL 0x0016C848 /* Reset Source: CORER */
+#define PF0INT_FW_PSM_CTL_MSIX_INDX_S 0
+#define PF0INT_FW_PSM_CTL_MSIX_INDX_M MAKEMASK(0x7FF, 0)
+#define PF0INT_FW_PSM_CTL_ITR_INDX_S 11
+#define PF0INT_FW_PSM_CTL_ITR_INDX_M MAKEMASK(0x3, 11)
+#define PF0INT_FW_PSM_CTL_CAUSE_ENA_S 30
+#define PF0INT_FW_PSM_CTL_CAUSE_ENA_M BIT(30)
+#define PF0INT_FW_PSM_CTL_INTEVENT_S 31
+#define PF0INT_FW_PSM_CTL_INTEVENT_M BIT(31)
+#define PF0INT_MBX_CPM_CTL 0x0016B2C0 /* Reset Source: CORER */
+#define PF0INT_MBX_CPM_CTL_MSIX_INDX_S 0
+#define PF0INT_MBX_CPM_CTL_MSIX_INDX_M MAKEMASK(0x7FF, 0)
+#define PF0INT_MBX_CPM_CTL_ITR_INDX_S 11
+#define PF0INT_MBX_CPM_CTL_ITR_INDX_M MAKEMASK(0x3, 11)
+#define PF0INT_MBX_CPM_CTL_CAUSE_ENA_S 30
+#define PF0INT_MBX_CPM_CTL_CAUSE_ENA_M BIT(30)
+#define PF0INT_MBX_CPM_CTL_INTEVENT_S 31
+#define PF0INT_MBX_CPM_CTL_INTEVENT_M BIT(31)
+#define PF0INT_MBX_HLP_CTL 0x0016B2C4 /* Reset Source: CORER */
+#define PF0INT_MBX_HLP_CTL_MSIX_INDX_S 0
+#define PF0INT_MBX_HLP_CTL_MSIX_INDX_M MAKEMASK(0x7FF, 0)
+#define PF0INT_MBX_HLP_CTL_ITR_INDX_S 11
+#define PF0INT_MBX_HLP_CTL_ITR_INDX_M MAKEMASK(0x3, 11)
+#define PF0INT_MBX_HLP_CTL_CAUSE_ENA_S 30
+#define PF0INT_MBX_HLP_CTL_CAUSE_ENA_M BIT(30)
+#define PF0INT_MBX_HLP_CTL_INTEVENT_S 31
+#define PF0INT_MBX_HLP_CTL_INTEVENT_M BIT(31)
+#define PF0INT_MBX_PSM_CTL 0x0016B2C8 /* Reset Source: CORER */
+#define PF0INT_MBX_PSM_CTL_MSIX_INDX_S 0
+#define PF0INT_MBX_PSM_CTL_MSIX_INDX_M MAKEMASK(0x7FF, 0)
+#define PF0INT_MBX_PSM_CTL_ITR_INDX_S 11
+#define PF0INT_MBX_PSM_CTL_ITR_INDX_M MAKEMASK(0x3, 11)
+#define PF0INT_MBX_PSM_CTL_CAUSE_ENA_S 30
+#define PF0INT_MBX_PSM_CTL_CAUSE_ENA_M BIT(30)
+#define PF0INT_MBX_PSM_CTL_INTEVENT_S 31
+#define PF0INT_MBX_PSM_CTL_INTEVENT_M BIT(31)
+#define PF0INT_OICR_CPM 0x0016CC40 /* Reset Source: CORER */
+#define PF0INT_OICR_CPM_INTEVENT_S 0
+#define PF0INT_OICR_CPM_INTEVENT_M BIT(0)
+#define PF0INT_OICR_CPM_QUEUE_S 1
+#define PF0INT_OICR_CPM_QUEUE_M BIT(1)
+#define PF0INT_OICR_CPM_RSV1_S 2
+#define PF0INT_OICR_CPM_RSV1_M MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_CPM_HH_COMP_S 10
+#define PF0INT_OICR_CPM_HH_COMP_M BIT(10)
+#define PF0INT_OICR_CPM_TSYN_TX_S 11
+#define PF0INT_OICR_CPM_TSYN_TX_M BIT(11)
+#define PF0INT_OICR_CPM_TSYN_EVNT_S 12
+#define PF0INT_OICR_CPM_TSYN_EVNT_M BIT(12)
+#define PF0INT_OICR_CPM_TSYN_TGT_S 13
+#define PF0INT_OICR_CPM_TSYN_TGT_M BIT(13)
+#define PF0INT_OICR_CPM_HLP_RDY_S 14
+#define PF0INT_OICR_CPM_HLP_RDY_M BIT(14)
+#define PF0INT_OICR_CPM_CPM_RDY_S 15
+#define PF0INT_OICR_CPM_CPM_RDY_M BIT(15)
+#define PF0INT_OICR_CPM_ECC_ERR_S 16
+#define PF0INT_OICR_CPM_ECC_ERR_M BIT(16)
+#define PF0INT_OICR_CPM_RSV2_S 17
+#define PF0INT_OICR_CPM_RSV2_M MAKEMASK(0x3, 17)
+#define PF0INT_OICR_CPM_MAL_DETECT_S 19
+#define PF0INT_OICR_CPM_MAL_DETECT_M BIT(19)
+#define PF0INT_OICR_CPM_GRST_S 20
+#define PF0INT_OICR_CPM_GRST_M BIT(20)
+#define PF0INT_OICR_CPM_PCI_EXCEPTION_S 21
+#define PF0INT_OICR_CPM_PCI_EXCEPTION_M BIT(21)
+#define PF0INT_OICR_CPM_GPIO_S 22
+#define PF0INT_OICR_CPM_GPIO_M BIT(22)
+#define PF0INT_OICR_CPM_RSV3_S 23
+#define PF0INT_OICR_CPM_RSV3_M BIT(23)
+#define PF0INT_OICR_CPM_STORM_DETECT_S 24
+#define PF0INT_OICR_CPM_STORM_DETECT_M BIT(24)
+#define PF0INT_OICR_CPM_LINK_STAT_CHANGE_S 25
+#define PF0INT_OICR_CPM_LINK_STAT_CHANGE_M BIT(25)
+#define PF0INT_OICR_CPM_HMC_ERR_S 26
+#define PF0INT_OICR_CPM_HMC_ERR_M BIT(26)
+#define PF0INT_OICR_CPM_PE_PUSH_S 27
+#define PF0INT_OICR_CPM_PE_PUSH_M BIT(27)
+#define PF0INT_OICR_CPM_PE_CRITERR_S 28
+#define PF0INT_OICR_CPM_PE_CRITERR_M BIT(28)
+#define PF0INT_OICR_CPM_VFLR_S 29
+#define PF0INT_OICR_CPM_VFLR_M BIT(29)
+#define PF0INT_OICR_CPM_XLR_HW_DONE_S 30
+#define PF0INT_OICR_CPM_XLR_HW_DONE_M BIT(30)
+#define PF0INT_OICR_CPM_SWINT_S 31
+#define PF0INT_OICR_CPM_SWINT_M BIT(31)
+#define PF0INT_OICR_CTL_CPM 0x0016CC48 /* Reset Source: CORER */
+#define PF0INT_OICR_CTL_CPM_MSIX_INDX_S 0
+#define PF0INT_OICR_CTL_CPM_MSIX_INDX_M MAKEMASK(0x7FF, 0)
+#define PF0INT_OICR_CTL_CPM_ITR_INDX_S 11
+#define PF0INT_OICR_CTL_CPM_ITR_INDX_M MAKEMASK(0x3, 11)
+#define PF0INT_OICR_CTL_CPM_CAUSE_ENA_S 30
+#define PF0INT_OICR_CTL_CPM_CAUSE_ENA_M BIT(30)
+#define PF0INT_OICR_CTL_CPM_INTEVENT_S 31
+#define PF0INT_OICR_CTL_CPM_INTEVENT_M BIT(31)
+#define PF0INT_OICR_CTL_HLP 0x0016CC5C /* Reset Source: CORER */
+#define PF0INT_OICR_CTL_HLP_MSIX_INDX_S 0
+#define PF0INT_OICR_CTL_HLP_MSIX_INDX_M MAKEMASK(0x7FF, 0)
+#define PF0INT_OICR_CTL_HLP_ITR_INDX_S 11
+#define PF0INT_OICR_CTL_HLP_ITR_INDX_M MAKEMASK(0x3, 11)
+#define PF0INT_OICR_CTL_HLP_CAUSE_ENA_S 30
+#define PF0INT_OICR_CTL_HLP_CAUSE_ENA_M BIT(30)
+#define PF0INT_OICR_CTL_HLP_INTEVENT_S 31
+#define PF0INT_OICR_CTL_HLP_INTEVENT_M BIT(31)
+#define PF0INT_OICR_CTL_PSM 0x0016CC64 /* Reset Source: CORER */
+#define PF0INT_OICR_CTL_PSM_MSIX_INDX_S 0
+#define PF0INT_OICR_CTL_PSM_MSIX_INDX_M MAKEMASK(0x7FF, 0)
+#define PF0INT_OICR_CTL_PSM_ITR_INDX_S 11
+#define PF0INT_OICR_CTL_PSM_ITR_INDX_M MAKEMASK(0x3, 11)
+#define PF0INT_OICR_CTL_PSM_CAUSE_ENA_S 30
+#define PF0INT_OICR_CTL_PSM_CAUSE_ENA_M BIT(30)
+#define PF0INT_OICR_CTL_PSM_INTEVENT_S 31
+#define PF0INT_OICR_CTL_PSM_INTEVENT_M BIT(31)
+#define PF0INT_OICR_ENA_CPM 0x0016CC60 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_CPM_RSV0_S 0
+#define PF0INT_OICR_ENA_CPM_RSV0_M BIT(0)
+#define PF0INT_OICR_ENA_CPM_INT_ENA_S 1
+#define PF0INT_OICR_ENA_CPM_INT_ENA_M MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_ENA_HLP 0x0016CC4C /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_HLP_RSV0_S 0
+#define PF0INT_OICR_ENA_HLP_RSV0_M BIT(0)
+#define PF0INT_OICR_ENA_HLP_INT_ENA_S 1
+#define PF0INT_OICR_ENA_HLP_INT_ENA_M MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_ENA_PSM 0x0016CC58 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_PSM_RSV0_S 0
+#define PF0INT_OICR_ENA_PSM_RSV0_M BIT(0)
+#define PF0INT_OICR_ENA_PSM_INT_ENA_S 1
+#define PF0INT_OICR_ENA_PSM_INT_ENA_M MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_HLP 0x0016CC68 /* Reset Source: CORER */
+#define PF0INT_OICR_HLP_INTEVENT_S 0
+#define PF0INT_OICR_HLP_INTEVENT_M BIT(0)
+#define PF0INT_OICR_HLP_QUEUE_S 1
+#define PF0INT_OICR_HLP_QUEUE_M BIT(1)
+#define PF0INT_OICR_HLP_RSV1_S 2
+#define PF0INT_OICR_HLP_RSV1_M MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_HLP_HH_COMP_S 10
+#define PF0INT_OICR_HLP_HH_COMP_M BIT(10)
+#define PF0INT_OICR_HLP_TSYN_TX_S 11
+#define PF0INT_OICR_HLP_TSYN_TX_M BIT(11)
+#define PF0INT_OICR_HLP_TSYN_EVNT_S 12
+#define PF0INT_OICR_HLP_TSYN_EVNT_M BIT(12)
+#define PF0INT_OICR_HLP_TSYN_TGT_S 13
+#define PF0INT_OICR_HLP_TSYN_TGT_M BIT(13)
+#define PF0INT_OICR_HLP_HLP_RDY_S 14
+#define PF0INT_OICR_HLP_HLP_RDY_M BIT(14)
+#define PF0INT_OICR_HLP_CPM_RDY_S 15
+#define PF0INT_OICR_HLP_CPM_RDY_M BIT(15)
+#define PF0INT_OICR_HLP_ECC_ERR_S 16
+#define PF0INT_OICR_HLP_ECC_ERR_M BIT(16)
+#define PF0INT_OICR_HLP_RSV2_S 17
+#define PF0INT_OICR_HLP_RSV2_M MAKEMASK(0x3, 17)
+#define PF0INT_OICR_HLP_MAL_DETECT_S 19
+#define PF0INT_OICR_HLP_MAL_DETECT_M BIT(19)
+#define PF0INT_OICR_HLP_GRST_S 20
+#define PF0INT_OICR_HLP_GRST_M BIT(20)
+#define PF0INT_OICR_HLP_PCI_EXCEPTION_S 21
+#define PF0INT_OICR_HLP_PCI_EXCEPTION_M BIT(21)
+#define PF0INT_OICR_HLP_GPIO_S 22
+#define PF0INT_OICR_HLP_GPIO_M BIT(22)
+#define PF0INT_OICR_HLP_RSV3_S 23
+#define PF0INT_OICR_HLP_RSV3_M BIT(23)
+#define PF0INT_OICR_HLP_STORM_DETECT_S 24
+#define PF0INT_OICR_HLP_STORM_DETECT_M BIT(24)
+#define PF0INT_OICR_HLP_LINK_STAT_CHANGE_S 25
+#define PF0INT_OICR_HLP_LINK_STAT_CHANGE_M BIT(25)
+#define PF0INT_OICR_HLP_HMC_ERR_S 26
+#define PF0INT_OICR_HLP_HMC_ERR_M BIT(26)
+#define PF0INT_OICR_HLP_PE_PUSH_S 27
+#define PF0INT_OICR_HLP_PE_PUSH_M BIT(27)
+#define PF0INT_OICR_HLP_PE_CRITERR_S 28
+#define PF0INT_OICR_HLP_PE_CRITERR_M BIT(28)
+#define PF0INT_OICR_HLP_VFLR_S 29
+#define PF0INT_OICR_HLP_VFLR_M BIT(29)
+#define PF0INT_OICR_HLP_XLR_HW_DONE_S 30
+#define PF0INT_OICR_HLP_XLR_HW_DONE_M BIT(30)
+#define PF0INT_OICR_HLP_SWINT_S 31
+#define PF0INT_OICR_HLP_SWINT_M BIT(31)
+#define PF0INT_OICR_PSM 0x0016CC44 /* Reset Source: CORER */
+#define PF0INT_OICR_PSM_INTEVENT_S 0
+#define PF0INT_OICR_PSM_INTEVENT_M BIT(0)
+#define PF0INT_OICR_PSM_QUEUE_S 1
+#define PF0INT_OICR_PSM_QUEUE_M BIT(1)
+#define PF0INT_OICR_PSM_RSV1_S 2
+#define PF0INT_OICR_PSM_RSV1_M MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_PSM_HH_COMP_S 10
+#define PF0INT_OICR_PSM_HH_COMP_M BIT(10)
+#define PF0INT_OICR_PSM_TSYN_TX_S 11
+#define PF0INT_OICR_PSM_TSYN_TX_M BIT(11)
+#define PF0INT_OICR_PSM_TSYN_EVNT_S 12
+#define PF0INT_OICR_PSM_TSYN_EVNT_M BIT(12)
+#define PF0INT_OICR_PSM_TSYN_TGT_S 13
+#define PF0INT_OICR_PSM_TSYN_TGT_M BIT(13)
+#define PF0INT_OICR_PSM_HLP_RDY_S 14
+#define PF0INT_OICR_PSM_HLP_RDY_M BIT(14)
+#define PF0INT_OICR_PSM_CPM_RDY_S 15
+#define PF0INT_OICR_PSM_CPM_RDY_M BIT(15)
+#define PF0INT_OICR_PSM_ECC_ERR_S 16
+#define PF0INT_OICR_PSM_ECC_ERR_M BIT(16)
+#define PF0INT_OICR_PSM_RSV2_S 17
+#define PF0INT_OICR_PSM_RSV2_M MAKEMASK(0x3, 17)
+#define PF0INT_OICR_PSM_MAL_DETECT_S 19
+#define PF0INT_OICR_PSM_MAL_DETECT_M BIT(19)
+#define PF0INT_OICR_PSM_GRST_S 20
+#define PF0INT_OICR_PSM_GRST_M BIT(20)
+#define PF0INT_OICR_PSM_PCI_EXCEPTION_S 21
+#define PF0INT_OICR_PSM_PCI_EXCEPTION_M BIT(21)
+#define PF0INT_OICR_PSM_GPIO_S 22
+#define PF0INT_OICR_PSM_GPIO_M BIT(22)
+#define PF0INT_OICR_PSM_RSV3_S 23
+#define PF0INT_OICR_PSM_RSV3_M BIT(23)
+#define PF0INT_OICR_PSM_STORM_DETECT_S 24
+#define PF0INT_OICR_PSM_STORM_DETECT_M BIT(24)
+#define PF0INT_OICR_PSM_LINK_STAT_CHANGE_S 25
+#define PF0INT_OICR_PSM_LINK_STAT_CHANGE_M BIT(25)
+#define PF0INT_OICR_PSM_HMC_ERR_S 26
+#define PF0INT_OICR_PSM_HMC_ERR_M BIT(26)
+#define PF0INT_OICR_PSM_PE_PUSH_S 27
+#define PF0INT_OICR_PSM_PE_PUSH_M BIT(27)
+#define PF0INT_OICR_PSM_PE_CRITERR_S 28
+#define PF0INT_OICR_PSM_PE_CRITERR_M BIT(28)
+#define PF0INT_OICR_PSM_VFLR_S 29
+#define PF0INT_OICR_PSM_VFLR_M BIT(29)
+#define PF0INT_OICR_PSM_XLR_HW_DONE_S 30
+#define PF0INT_OICR_PSM_XLR_HW_DONE_M BIT(30)
+#define PF0INT_OICR_PSM_SWINT_S 31
+#define PF0INT_OICR_PSM_SWINT_M BIT(31)
+#define PF0INT_SB_CPM_CTL 0x0016B2CC /* Reset Source: CORER */
+#define PF0INT_SB_CPM_CTL_MSIX_INDX_S 0
+#define PF0INT_SB_CPM_CTL_MSIX_INDX_M MAKEMASK(0x7FF, 0)
+#define PF0INT_SB_CPM_CTL_ITR_INDX_S 11
+#define PF0INT_SB_CPM_CTL_ITR_INDX_M MAKEMASK(0x3, 11)
+#define PF0INT_SB_CPM_CTL_CAUSE_ENA_S 30
+#define PF0INT_SB_CPM_CTL_CAUSE_ENA_M BIT(30)
+#define PF0INT_SB_CPM_CTL_INTEVENT_S 31
+#define PF0INT_SB_CPM_CTL_INTEVENT_M BIT(31)
+#define PF0INT_SB_HLP_CTL 0x0016B640 /* Reset Source: CORER */
+#define PF0INT_SB_HLP_CTL_MSIX_INDX_S 0
+#define PF0INT_SB_HLP_CTL_MSIX_INDX_M MAKEMASK(0x7FF, 0)
+#define PF0INT_SB_HLP_CTL_ITR_INDX_S 11
+#define PF0INT_SB_HLP_CTL_ITR_INDX_M MAKEMASK(0x3, 11)
+#define PF0INT_SB_HLP_CTL_CAUSE_ENA_S 30
+#define PF0INT_SB_HLP_CTL_CAUSE_ENA_M BIT(30)
+#define PF0INT_SB_HLP_CTL_INTEVENT_S 31
+#define PF0INT_SB_HLP_CTL_INTEVENT_M BIT(31)
+#define PFINT_AEQCTL 0x0016CB00 /* Reset Source: CORER */
+#define PFINT_AEQCTL_MSIX_INDX_S 0
+#define PFINT_AEQCTL_MSIX_INDX_M MAKEMASK(0x7FF, 0)
+#define PFINT_AEQCTL_ITR_INDX_S 11
+#define PFINT_AEQCTL_ITR_INDX_M MAKEMASK(0x3, 11)
+#define PFINT_AEQCTL_CAUSE_ENA_S 30
+#define PFINT_AEQCTL_CAUSE_ENA_M BIT(30)
+#define PFINT_AEQCTL_INTEVENT_S 31
+#define PFINT_AEQCTL_INTEVENT_M BIT(31)
+#define PFINT_ALLOC 0x001D2600 /* Reset Source: CORER */
+#define PFINT_ALLOC_FIRST_S 0
+#define PFINT_ALLOC_FIRST_M MAKEMASK(0x7FF, 0)
+#define PFINT_ALLOC_LAST_S 12
+#define PFINT_ALLOC_LAST_M MAKEMASK(0x7FF, 12)
+#define PFINT_ALLOC_VALID_S 31
+#define PFINT_ALLOC_VALID_M BIT(31)
+#define PFINT_ALLOC_PCI 0x0009D800 /* Reset Source: PCIR */
+#define PFINT_ALLOC_PCI_FIRST_S 0
+#define PFINT_ALLOC_PCI_FIRST_M MAKEMASK(0x7FF, 0)
+#define PFINT_ALLOC_PCI_LAST_S 12
+#define PFINT_ALLOC_PCI_LAST_M MAKEMASK(0x7FF, 12)
+#define PFINT_ALLOC_PCI_VALID_S 31
+#define PFINT_ALLOC_PCI_VALID_M BIT(31)
+#define PFINT_FW_CTL 0x0016C800 /* Reset Source: CORER */
+#define PFINT_FW_CTL_MSIX_INDX_S 0
+#define PFINT_FW_CTL_MSIX_INDX_M MAKEMASK(0x7FF, 0)
+#define PFINT_FW_CTL_ITR_INDX_S 11
+#define PFINT_FW_CTL_ITR_INDX_M MAKEMASK(0x3, 11)
+#define PFINT_FW_CTL_CAUSE_ENA_S 30
+#define PFINT_FW_CTL_CAUSE_ENA_M BIT(30)
+#define PFINT_FW_CTL_INTEVENT_S 31
+#define PFINT_FW_CTL_INTEVENT_M BIT(31)
+#define PFINT_GPIO_ENA 0x00088080 /* Reset Source: CORER */
+#define PFINT_GPIO_ENA_GPIO0_ENA_S 0
+#define PFINT_GPIO_ENA_GPIO0_ENA_M BIT(0)
+#define PFINT_GPIO_ENA_GPIO1_ENA_S 1
+#define PFINT_GPIO_ENA_GPIO1_ENA_M BIT(1)
+#define PFINT_GPIO_ENA_GPIO2_ENA_S 2
+#define PFINT_GPIO_ENA_GPIO2_ENA_M BIT(2)
+#define PFINT_GPIO_ENA_GPIO3_ENA_S 3
+#define PFINT_GPIO_ENA_GPIO3_ENA_M BIT(3)
+#define PFINT_GPIO_ENA_GPIO4_ENA_S 4
+#define PFINT_GPIO_ENA_GPIO4_ENA_M BIT(4)
+#define PFINT_GPIO_ENA_GPIO5_ENA_S 5
+#define PFINT_GPIO_ENA_GPIO5_ENA_M BIT(5)
+#define PFINT_GPIO_ENA_GPIO6_ENA_S 6
+#define PFINT_GPIO_ENA_GPIO6_ENA_M BIT(6)
+#define PFINT_MBX_CTL 0x0016B280 /* Reset Source: CORER */
+#define PFINT_MBX_CTL_MSIX_INDX_S 0
+#define PFINT_MBX_CTL_MSIX_INDX_M MAKEMASK(0x7FF, 0)
+#define PFINT_MBX_CTL_ITR_INDX_S 11
+#define PFINT_MBX_CTL_ITR_INDX_M MAKEMASK(0x3, 11)
+#define PFINT_MBX_CTL_CAUSE_ENA_S 30
+#define PFINT_MBX_CTL_CAUSE_ENA_M BIT(30)
+#define PFINT_MBX_CTL_INTEVENT_S 31
+#define PFINT_MBX_CTL_INTEVENT_M BIT(31)
+#define PFINT_OICR 0x0016CA00 /* Reset Source: CORER */
+#define PFINT_OICR_INTEVENT_S 0
+#define PFINT_OICR_INTEVENT_M BIT(0)
+#define PFINT_OICR_QUEUE_S 1
+#define PFINT_OICR_QUEUE_M BIT(1)
+#define PFINT_OICR_RSV1_S 2
+#define PFINT_OICR_RSV1_M MAKEMASK(0xFF, 2)
+#define PFINT_OICR_HH_COMP_S 10
+#define PFINT_OICR_HH_COMP_M BIT(10)
+#define PFINT_OICR_TSYN_TX_S 11
+#define PFINT_OICR_TSYN_TX_M BIT(11)
+#define PFINT_OICR_TSYN_EVNT_S 12
+#define PFINT_OICR_TSYN_EVNT_M BIT(12)
+#define PFINT_OICR_TSYN_TGT_S 13
+#define PFINT_OICR_TSYN_TGT_M BIT(13)
+#define PFINT_OICR_HLP_RDY_S 14
+#define PFINT_OICR_HLP_RDY_M BIT(14)
+#define PFINT_OICR_CPM_RDY_S 15
+#define PFINT_OICR_CPM_RDY_M BIT(15)
+#define PFINT_OICR_ECC_ERR_S 16
+#define PFINT_OICR_ECC_ERR_M BIT(16)
+#define PFINT_OICR_RSV2_S 17
+#define PFINT_OICR_RSV2_M MAKEMASK(0x3, 17)
+#define PFINT_OICR_MAL_DETECT_S 19
+#define PFINT_OICR_MAL_DETECT_M BIT(19)
+#define PFINT_OICR_GRST_S 20
+#define PFINT_OICR_GRST_M BIT(20)
+#define PFINT_OICR_PCI_EXCEPTION_S 21
+#define PFINT_OICR_PCI_EXCEPTION_M BIT(21)
+#define PFINT_OICR_GPIO_S 22
+#define PFINT_OICR_GPIO_M BIT(22)
+#define PFINT_OICR_RSV3_S 23
+#define PFINT_OICR_RSV3_M BIT(23)
+#define PFINT_OICR_STORM_DETECT_S 24
+#define PFINT_OICR_STORM_DETECT_M BIT(24)
+#define PFINT_OICR_LINK_STAT_CHANGE_S 25
+#define PFINT_OICR_LINK_STAT_CHANGE_M BIT(25)
+#define PFINT_OICR_HMC_ERR_S 26
+#define PFINT_OICR_HMC_ERR_M BIT(26)
+#define PFINT_OICR_PE_PUSH_S 27
+#define PFINT_OICR_PE_PUSH_M BIT(27)
+#define PFINT_OICR_PE_CRITERR_S 28
+#define PFINT_OICR_PE_CRITERR_M BIT(28)
+#define PFINT_OICR_VFLR_S 29
+#define PFINT_OICR_VFLR_M BIT(29)
+#define PFINT_OICR_XLR_HW_DONE_S 30
+#define PFINT_OICR_XLR_HW_DONE_M BIT(30)
+#define PFINT_OICR_SWINT_S 31
+#define PFINT_OICR_SWINT_M BIT(31)
+#define PFINT_OICR_CTL 0x0016CA80 /* Reset Source: CORER */
+#define PFINT_OICR_CTL_MSIX_INDX_S 0
+#define PFINT_OICR_CTL_MSIX_INDX_M MAKEMASK(0x7FF, 0)
+#define PFINT_OICR_CTL_ITR_INDX_S 11
+#define PFINT_OICR_CTL_ITR_INDX_M MAKEMASK(0x3, 11)
+#define PFINT_OICR_CTL_CAUSE_ENA_S 30
+#define PFINT_OICR_CTL_CAUSE_ENA_M BIT(30)
+#define PFINT_OICR_CTL_INTEVENT_S 31
+#define PFINT_OICR_CTL_INTEVENT_M BIT(31)
+#define PFINT_OICR_ENA 0x0016C900 /* Reset Source: CORER */
+#define PFINT_OICR_ENA_RSV0_S 0
+#define PFINT_OICR_ENA_RSV0_M BIT(0)
+#define PFINT_OICR_ENA_INT_ENA_S 1
+#define PFINT_OICR_ENA_INT_ENA_M MAKEMASK(0x7FFFFFFF, 1)
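+/* A hedged sketch of the "other interrupt cause" flow with the three
+ * registers above: enable selected causes in PFINT_OICR_ENA, then read
+ * PFINT_OICR in the service routine and dispatch on the cause bits.
+ * Accessor and handler names are assumptions:
+ *
+ *	ICE_WRITE_REG(hw, PFINT_OICR_ENA,
+ *		      PFINT_OICR_LINK_STAT_CHANGE_M | PFINT_OICR_GRST_M);
+ *	u32 oicr = ICE_READ_REG(hw, PFINT_OICR);
+ *	if (oicr & PFINT_OICR_LINK_STAT_CHANGE_M)
+ *		handle_link_event(dev);
+ */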
+#define PFINT_SB_CTL 0x0016B600 /* Reset Source: CORER */
+#define PFINT_SB_CTL_MSIX_INDX_S 0
+#define PFINT_SB_CTL_MSIX_INDX_M MAKEMASK(0x7FF, 0)
+#define PFINT_SB_CTL_ITR_INDX_S 11
+#define PFINT_SB_CTL_ITR_INDX_M MAKEMASK(0x3, 11)
+#define PFINT_SB_CTL_CAUSE_ENA_S 30
+#define PFINT_SB_CTL_CAUSE_ENA_M BIT(30)
+#define PFINT_SB_CTL_INTEVENT_S 31
+#define PFINT_SB_CTL_INTEVENT_M BIT(31)
+#define PFINT_TSYN_MSK 0x0016C980 /* Reset Source: CORER */
+#define PFINT_TSYN_MSK_PHY_INDX_S 0
+#define PFINT_TSYN_MSK_PHY_INDX_M MAKEMASK(0x1F, 0)
+#define QINT_RQCTL(_QRX) (0x00150000 + ((_QRX) * 4)) /* _QRX=0...2047 */ /* Reset Source: CORER */
+#define QINT_RQCTL_MAX_INDEX 2047
+#define QINT_RQCTL_MSIX_INDX_S 0
+#define QINT_RQCTL_MSIX_INDX_M MAKEMASK(0x7FF, 0)
+#define QINT_RQCTL_ITR_INDX_S 11
+#define QINT_RQCTL_ITR_INDX_M MAKEMASK(0x3, 11)
+#define QINT_RQCTL_CAUSE_ENA_S 30
+#define QINT_RQCTL_CAUSE_ENA_M BIT(30)
+#define QINT_RQCTL_INTEVENT_S 31
+#define QINT_RQCTL_INTEVENT_M BIT(31)
+#define QINT_TQCTL(_DBQM) (0x00140000 + ((_DBQM) * 4)) /* _DBQM=0...16383 */ /* Reset Source: CORER */
+#define QINT_TQCTL_MAX_INDEX 16383
+#define QINT_TQCTL_MSIX_INDX_S 0
+#define QINT_TQCTL_MSIX_INDX_M MAKEMASK(0x7FF, 0)
+#define QINT_TQCTL_ITR_INDX_S 11
+#define QINT_TQCTL_ITR_INDX_M MAKEMASK(0x3, 11)
+#define QINT_TQCTL_CAUSE_ENA_S 30
+#define QINT_TQCTL_CAUSE_ENA_M BIT(30)
+#define QINT_TQCTL_INTEVENT_S 31
+#define QINT_TQCTL_INTEVENT_M BIT(31)
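+/* Sketch: binding RX queue "qid" to MSI-X vector "msix" with ITR index
+ * "itr" via the QINT_RQCTL fields above (QINT_TQCTL is symmetric for the
+ * TX side). Variable and accessor names are illustrative assumptions:
+ *
+ *	u32 val = (msix << QINT_RQCTL_MSIX_INDX_S) |
+ *		  (itr << QINT_RQCTL_ITR_INDX_S) | QINT_RQCTL_CAUSE_ENA_M;
+ *	ICE_WRITE_REG(hw, QINT_RQCTL(qid), val);
+ */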
+#define VPINT_AEQCTL(_VF) (0x0016B800 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPINT_AEQCTL_MAX_INDEX 255
+#define VPINT_AEQCTL_MSIX_INDX_S 0
+#define VPINT_AEQCTL_MSIX_INDX_M MAKEMASK(0x7FF, 0)
+#define VPINT_AEQCTL_ITR_INDX_S 11
+#define VPINT_AEQCTL_ITR_INDX_M MAKEMASK(0x3, 11)
+#define VPINT_AEQCTL_CAUSE_ENA_S 30
+#define VPINT_AEQCTL_CAUSE_ENA_M BIT(30)
+#define VPINT_AEQCTL_INTEVENT_S 31
+#define VPINT_AEQCTL_INTEVENT_M BIT(31)
+#define VPINT_ALLOC(_VF) (0x001D1000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPINT_ALLOC_MAX_INDEX 255
+#define VPINT_ALLOC_FIRST_S 0
+#define VPINT_ALLOC_FIRST_M MAKEMASK(0x7FF, 0)
+#define VPINT_ALLOC_LAST_S 12
+#define VPINT_ALLOC_LAST_M MAKEMASK(0x7FF, 12)
+#define VPINT_ALLOC_VALID_S 31
+#define VPINT_ALLOC_VALID_M BIT(31)
+#define VPINT_ALLOC_PCI(_VF) (0x0009D000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: PCIR */
+#define VPINT_ALLOC_PCI_MAX_INDEX 255
+#define VPINT_ALLOC_PCI_FIRST_S 0
+#define VPINT_ALLOC_PCI_FIRST_M MAKEMASK(0x7FF, 0)
+#define VPINT_ALLOC_PCI_LAST_S 12
+#define VPINT_ALLOC_PCI_LAST_M MAKEMASK(0x7FF, 12)
+#define VPINT_ALLOC_PCI_VALID_S 31
+#define VPINT_ALLOC_PCI_VALID_M BIT(31)
+#define VPINT_MBX_CPM_CTL(_VP128) (0x0016B000 + ((_VP128) * 4)) /* _VP128=0...127 */ /* Reset Source: CORER */
+#define VPINT_MBX_CPM_CTL_MAX_INDEX 127
+#define VPINT_MBX_CPM_CTL_MSIX_INDX_S 0
+#define VPINT_MBX_CPM_CTL_MSIX_INDX_M MAKEMASK(0x7FF, 0)
+#define VPINT_MBX_CPM_CTL_ITR_INDX_S 11
+#define VPINT_MBX_CPM_CTL_ITR_INDX_M MAKEMASK(0x3, 11)
+#define VPINT_MBX_CPM_CTL_CAUSE_ENA_S 30
+#define VPINT_MBX_CPM_CTL_CAUSE_ENA_M BIT(30)
+#define VPINT_MBX_CPM_CTL_INTEVENT_S 31
+#define VPINT_MBX_CPM_CTL_INTEVENT_M BIT(31)
+#define VPINT_MBX_CTL(_VSI) (0x0016A000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: CORER */
+#define VPINT_MBX_CTL_MAX_INDEX 767
+#define VPINT_MBX_CTL_MSIX_INDX_S 0
+#define VPINT_MBX_CTL_MSIX_INDX_M MAKEMASK(0x7FF, 0)
+#define VPINT_MBX_CTL_ITR_INDX_S 11
+#define VPINT_MBX_CTL_ITR_INDX_M MAKEMASK(0x3, 11)
+#define VPINT_MBX_CTL_CAUSE_ENA_S 30
+#define VPINT_MBX_CTL_CAUSE_ENA_M BIT(30)
+#define VPINT_MBX_CTL_INTEVENT_S 31
+#define VPINT_MBX_CTL_INTEVENT_M BIT(31)
+#define VPINT_MBX_HLP_CTL(_VP16) (0x0016B200 + ((_VP16) * 4)) /* _VP16=0...15 */ /* Reset Source: CORER */
+#define VPINT_MBX_HLP_CTL_MAX_INDEX 15
+#define VPINT_MBX_HLP_CTL_MSIX_INDX_S 0
+#define VPINT_MBX_HLP_CTL_MSIX_INDX_M MAKEMASK(0x7FF, 0)
+#define VPINT_MBX_HLP_CTL_ITR_INDX_S 11
+#define VPINT_MBX_HLP_CTL_ITR_INDX_M MAKEMASK(0x3, 11)
+#define VPINT_MBX_HLP_CTL_CAUSE_ENA_S 30
+#define VPINT_MBX_HLP_CTL_CAUSE_ENA_M BIT(30)
+#define VPINT_MBX_HLP_CTL_INTEVENT_S 31
+#define VPINT_MBX_HLP_CTL_INTEVENT_M BIT(31)
+#define VPINT_MBX_PSM_CTL(_VP16) (0x0016B240 + ((_VP16) * 4)) /* _VP16=0...15 */ /* Reset Source: CORER */
+#define VPINT_MBX_PSM_CTL_MAX_INDEX 15
+#define VPINT_MBX_PSM_CTL_MSIX_INDX_S 0
+#define VPINT_MBX_PSM_CTL_MSIX_INDX_M MAKEMASK(0x7FF, 0)
+#define VPINT_MBX_PSM_CTL_ITR_INDX_S 11
+#define VPINT_MBX_PSM_CTL_ITR_INDX_M MAKEMASK(0x3, 11)
+#define VPINT_MBX_PSM_CTL_CAUSE_ENA_S 30
+#define VPINT_MBX_PSM_CTL_CAUSE_ENA_M BIT(30)
+#define VPINT_MBX_PSM_CTL_INTEVENT_S 31
+#define VPINT_MBX_PSM_CTL_INTEVENT_M BIT(31)
+#define VPINT_SB_CPM_CTL(_VP128) (0x0016B400 + ((_VP128) * 4)) /* _VP128=0...127 */ /* Reset Source: CORER */
+#define VPINT_SB_CPM_CTL_MAX_INDEX 127
+#define VPINT_SB_CPM_CTL_MSIX_INDX_S 0
+#define VPINT_SB_CPM_CTL_MSIX_INDX_M MAKEMASK(0x7FF, 0)
+#define VPINT_SB_CPM_CTL_ITR_INDX_S 11
+#define VPINT_SB_CPM_CTL_ITR_INDX_M MAKEMASK(0x3, 11)
+#define VPINT_SB_CPM_CTL_CAUSE_ENA_S 30
+#define VPINT_SB_CPM_CTL_CAUSE_ENA_M BIT(30)
+#define VPINT_SB_CPM_CTL_INTEVENT_S 31
+#define VPINT_SB_CPM_CTL_INTEVENT_M BIT(31)
+#define GL_HLP_PRT_IPG_PREAMBLE_SIZE(_i) (0x00049240 + ((_i) * 4)) /* _i=0...20 */ /* Reset Source: CORER */
+#define GL_HLP_PRT_IPG_PREAMBLE_SIZE_MAX_INDEX 20
+#define GL_HLP_PRT_IPG_PREAMBLE_SIZE_IPG_PREAMBLE_SIZE_S 0
+#define GL_HLP_PRT_IPG_PREAMBLE_SIZE_IPG_PREAMBLE_SIZE_M MAKEMASK(0xFF, 0)
+#define GL_TDPU_PSM_DEFAULT_RECIPE(_i) (0x00049294 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GL_TDPU_PSM_DEFAULT_RECIPE_MAX_INDEX 3
+#define GL_TDPU_PSM_DEFAULT_RECIPE_ADD_IPG_S 0
+#define GL_TDPU_PSM_DEFAULT_RECIPE_ADD_IPG_M BIT(0)
+#define GL_TDPU_PSM_DEFAULT_RECIPE_SUB_CRC_S 1
+#define GL_TDPU_PSM_DEFAULT_RECIPE_SUB_CRC_M BIT(1)
+#define GL_TDPU_PSM_DEFAULT_RECIPE_SUB_ESP_TRAILER_S 2
+#define GL_TDPU_PSM_DEFAULT_RECIPE_SUB_ESP_TRAILER_M BIT(2)
+#define GL_TDPU_PSM_DEFAULT_RECIPE_INCLUDE_L2_PAD_S 3
+#define GL_TDPU_PSM_DEFAULT_RECIPE_INCLUDE_L2_PAD_M BIT(3)
+#define GL_TDPU_PSM_DEFAULT_RECIPE_DEFAULT_UPDATE_MODE_S 4
+#define GL_TDPU_PSM_DEFAULT_RECIPE_DEFAULT_UPDATE_MODE_M BIT(4)
+#define GLLAN_PF_RECIPE(_i) (0x0029420C + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLLAN_PF_RECIPE_MAX_INDEX 7
+#define GLLAN_PF_RECIPE_RECIPE_S 0
+#define GLLAN_PF_RECIPE_RECIPE_M MAKEMASK(0x3, 0)
+#define GLLAN_RCTL_0 0x002941F8 /* Reset Source: CORER */
+#define GLLAN_RCTL_0_PXE_MODE_S 0
+#define GLLAN_RCTL_0_PXE_MODE_M BIT(0)
+#define GLLAN_RCTL_1 0x002941FC /* Reset Source: CORER */
+#define GLLAN_RCTL_1_RXMAX_EXPANSION_S 12
+#define GLLAN_RCTL_1_RXMAX_EXPANSION_M MAKEMASK(0xF, 12)
+#define GLLAN_RCTL_1_RXDRDCTL_S 17
+#define GLLAN_RCTL_1_RXDRDCTL_M BIT(17)
+#define GLLAN_RCTL_1_RXDESCRDROEN_S 18
+#define GLLAN_RCTL_1_RXDESCRDROEN_M BIT(18)
+#define GLLAN_RCTL_1_RXDATAWRROEN_S 19
+#define GLLAN_RCTL_1_RXDATAWRROEN_M BIT(19)
+#define GLLAN_TSOMSK_F 0x00049308 /* Reset Source: CORER */
+#define GLLAN_TSOMSK_F_TCPMSKF_S 0
+#define GLLAN_TSOMSK_F_TCPMSKF_M MAKEMASK(0xFFF, 0)
+#define GLLAN_TSOMSK_L 0x00049310 /* Reset Source: CORER */
+#define GLLAN_TSOMSK_L_TCPMSKL_S 0
+#define GLLAN_TSOMSK_L_TCPMSKL_M MAKEMASK(0xFFF, 0)
+#define GLLAN_TSOMSK_M 0x0004930C /* Reset Source: CORER */
+#define GLLAN_TSOMSK_M_TCPMSKM_S 0
+#define GLLAN_TSOMSK_M_TCPMSKM_M MAKEMASK(0xFFF, 0)
+#define PFLAN_CP_QALLOC 0x00075700 /* Reset Source: CORER */
+#define PFLAN_CP_QALLOC_FIRSTQ_S 0
+#define PFLAN_CP_QALLOC_FIRSTQ_M MAKEMASK(0x1FF, 0)
+#define PFLAN_CP_QALLOC_LASTQ_S 16
+#define PFLAN_CP_QALLOC_LASTQ_M MAKEMASK(0x1FF, 16)
+#define PFLAN_CP_QALLOC_VALID_S 31
+#define PFLAN_CP_QALLOC_VALID_M BIT(31)
+#define PFLAN_DB_QALLOC 0x00075680 /* Reset Source: CORER */
+#define PFLAN_DB_QALLOC_FIRSTQ_S 0
+#define PFLAN_DB_QALLOC_FIRSTQ_M MAKEMASK(0xFF, 0)
+#define PFLAN_DB_QALLOC_LASTQ_S 16
+#define PFLAN_DB_QALLOC_LASTQ_M MAKEMASK(0xFF, 16)
+#define PFLAN_DB_QALLOC_VALID_S 31
+#define PFLAN_DB_QALLOC_VALID_M BIT(31)
+#define PFLAN_RX_QALLOC 0x001D2500 /* Reset Source: CORER */
+#define PFLAN_RX_QALLOC_FIRSTQ_S 0
+#define PFLAN_RX_QALLOC_FIRSTQ_M MAKEMASK(0x7FF, 0)
+#define PFLAN_RX_QALLOC_LASTQ_S 16
+#define PFLAN_RX_QALLOC_LASTQ_M MAKEMASK(0x7FF, 16)
+#define PFLAN_RX_QALLOC_VALID_S 31
+#define PFLAN_RX_QALLOC_VALID_M BIT(31)
+#define PFLAN_TX_QALLOC 0x001D2580 /* Reset Source: CORER */
+#define PFLAN_TX_QALLOC_FIRSTQ_S 0
+#define PFLAN_TX_QALLOC_FIRSTQ_M MAKEMASK(0x3FFF, 0)
+#define PFLAN_TX_QALLOC_LASTQ_S 16
+#define PFLAN_TX_QALLOC_LASTQ_M MAKEMASK(0x3FFF, 16)
+#define PFLAN_TX_QALLOC_VALID_S 31
+#define PFLAN_TX_QALLOC_VALID_M BIT(31)
+#define QRX_CONTEXT(_i, _QRX) (0x00280000 + ((_i) * 8192 + (_QRX) * 4)) /* _i=0...7, _QRX=0...2047 */ /* Reset Source: CORER */
+#define QRX_CONTEXT_MAX_INDEX 7
+#define QRX_CONTEXT_RXQ_CONTEXT_S 0
+#define QRX_CONTEXT_RXQ_CONTEXT_M MAKEMASK(0xFFFFFFFF, 0)
+#define QRX_CTRL(_QRX) (0x00120000 + ((_QRX) * 4)) /* _QRX=0...2047 */ /* Reset Source: PFR */
+#define QRX_CTRL_MAX_INDEX 2047
+#define QRX_CTRL_QENA_REQ_S 0
+#define QRX_CTRL_QENA_REQ_M BIT(0)
+#define QRX_CTRL_FAST_QDIS_S 1
+#define QRX_CTRL_FAST_QDIS_M BIT(1)
+#define QRX_CTRL_QENA_STAT_S 2
+#define QRX_CTRL_QENA_STAT_M BIT(2)
+#define QRX_CTRL_CDE_S 3
+#define QRX_CTRL_CDE_M BIT(3)
+#define QRX_CTRL_CDS_S 4
+#define QRX_CTRL_CDS_M BIT(4)
+#define QRX_ITR(_QRX) (0x00292000 + ((_QRX) * 4)) /* _QRX=0...2047 */ /* Reset Source: CORER */
+#define QRX_ITR_MAX_INDEX 2047
+#define QRX_ITR_NO_EXPR_S 0
+#define QRX_ITR_NO_EXPR_M BIT(0)
+#define QRX_TAIL(_QRX) (0x00290000 + ((_QRX) * 4)) /* _QRX=0...2047 */ /* Reset Source: CORER */
+#define QRX_TAIL_MAX_INDEX 2047
+#define QRX_TAIL_TAIL_S 0
+#define QRX_TAIL_TAIL_M MAKEMASK(0x1FFF, 0)
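+/* Sketch: advancing the RX descriptor tail for queue "qid"; the value is
+ * confined to the 13-bit TAIL field. Names other than the macros above
+ * are assumptions:
+ *
+ *	ICE_WRITE_REG(hw, QRX_TAIL(qid), tail & QRX_TAIL_TAIL_M);
+ */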
+#define VPDSI_RX_QTABLE(_i, _VP16) (0x00074C00 + ((_i) * 64 + (_VP16) * 4)) /* _i=0...15, _VP16=0...15 */ /* Reset Source: CORER */
+#define VPDSI_RX_QTABLE_MAX_INDEX 15
+#define VPDSI_RX_QTABLE_PAGE_INDEX0_S 0
+#define VPDSI_RX_QTABLE_PAGE_INDEX0_M MAKEMASK(0x7F, 0)
+#define VPDSI_RX_QTABLE_PAGE_INDEX1_S 8
+#define VPDSI_RX_QTABLE_PAGE_INDEX1_M MAKEMASK(0x7F, 8)
+#define VPDSI_RX_QTABLE_PAGE_INDEX2_S 16
+#define VPDSI_RX_QTABLE_PAGE_INDEX2_M MAKEMASK(0x7F, 16)
+#define VPDSI_RX_QTABLE_PAGE_INDEX3_S 24
+#define VPDSI_RX_QTABLE_PAGE_INDEX3_M MAKEMASK(0x7F, 24)
+#define VPDSI_TX_QTABLE(_i, _VP16) (0x001D2000 + ((_i) * 64 + (_VP16) * 4)) /* _i=0...15, _VP16=0...15 */ /* Reset Source: CORER */
+#define VPDSI_TX_QTABLE_MAX_INDEX 15
+#define VPDSI_TX_QTABLE_PAGE_INDEX0_S 0
+#define VPDSI_TX_QTABLE_PAGE_INDEX0_M MAKEMASK(0x7F, 0)
+#define VPDSI_TX_QTABLE_PAGE_INDEX1_S 8
+#define VPDSI_TX_QTABLE_PAGE_INDEX1_M MAKEMASK(0x7F, 8)
+#define VPDSI_TX_QTABLE_PAGE_INDEX2_S 16
+#define VPDSI_TX_QTABLE_PAGE_INDEX2_M MAKEMASK(0x7F, 16)
+#define VPDSI_TX_QTABLE_PAGE_INDEX3_S 24
+#define VPDSI_TX_QTABLE_PAGE_INDEX3_M MAKEMASK(0x7F, 24)
+#define VPLAN_DB_QTABLE(_i, _VF) (0x00070000 + ((_i) * 2048 + (_VF) * 4)) /* _i=0...3, _VF=0...255 */ /* Reset Source: CORER */
+#define VPLAN_DB_QTABLE_MAX_INDEX 3
+#define VPLAN_DB_QTABLE_QINDEX_S 0
+#define VPLAN_DB_QTABLE_QINDEX_M MAKEMASK(0x1FF, 0)
+#define VPLAN_DSI_VF_MODE(_VP16) (0x002D2C00 + ((_VP16) * 4)) /* _VP16=0...15 */ /* Reset Source: CORER */
+#define VPLAN_DSI_VF_MODE_MAX_INDEX 15
+#define VPLAN_DSI_VF_MODE_LAN_DSI_VF_MODE_S 0
+#define VPLAN_DSI_VF_MODE_LAN_DSI_VF_MODE_M BIT(0)
+#define VPLAN_RX_QBASE(_VF) (0x00072000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPLAN_RX_QBASE_MAX_INDEX 255
+#define VPLAN_RX_QBASE_VFFIRSTQ_S 0
+#define VPLAN_RX_QBASE_VFFIRSTQ_M MAKEMASK(0x7FF, 0)
+#define VPLAN_RX_QBASE_VFNUMQ_S 16
+#define VPLAN_RX_QBASE_VFNUMQ_M MAKEMASK(0xFF, 16)
+#define VPLAN_RX_QBASE_VFQTABLE_ENA_S 31
+#define VPLAN_RX_QBASE_VFQTABLE_ENA_M BIT(31)
+#define VPLAN_RX_QTABLE(_i, _VF) (0x00060000 + ((_i) * 2048 + (_VF) * 4)) /* _i=0...15, _VF=0...255 */ /* Reset Source: CORER */
+#define VPLAN_RX_QTABLE_MAX_INDEX 15
+#define VPLAN_RX_QTABLE_QINDEX_S 0
+#define VPLAN_RX_QTABLE_QINDEX_M MAKEMASK(0xFFF, 0)
+#define VPLAN_RXQ_MAPENA(_VF) (0x00073000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPLAN_RXQ_MAPENA_MAX_INDEX 255
+#define VPLAN_RXQ_MAPENA_RX_ENA_S 0
+#define VPLAN_RXQ_MAPENA_RX_ENA_M BIT(0)
+#define VPLAN_TX_QBASE(_VF) (0x001D1800 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPLAN_TX_QBASE_MAX_INDEX 255
+#define VPLAN_TX_QBASE_VFFIRSTQ_S 0
+#define VPLAN_TX_QBASE_VFFIRSTQ_M MAKEMASK(0x3FFF, 0)
+#define VPLAN_TX_QBASE_VFNUMQ_S 16
+#define VPLAN_TX_QBASE_VFNUMQ_M MAKEMASK(0xFF, 16)
+#define VPLAN_TX_QBASE_VFQTABLE_ENA_S 31
+#define VPLAN_TX_QBASE_VFQTABLE_ENA_M BIT(31)
+#define VPLAN_TX_QTABLE(_i, _VF) (0x001C0000 + ((_i) * 2048 + (_VF) * 4)) /* _i=0...15, _VF=0...255 */ /* Reset Source: CORER */
+#define VPLAN_TX_QTABLE_MAX_INDEX 15
+#define VPLAN_TX_QTABLE_QINDEX_S 0
+#define VPLAN_TX_QTABLE_QINDEX_M MAKEMASK(0x7FFF, 0)
+#define VPLAN_TXQ_MAPENA(_VF) (0x00073800 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPLAN_TXQ_MAPENA_MAX_INDEX 255
+#define VPLAN_TXQ_MAPENA_TX_ENA_S 0
+#define VPLAN_TXQ_MAPENA_TX_ENA_M BIT(0)
+#define VSILAN_QBASE(_VSI) (0x0044C000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSILAN_QBASE_MAX_INDEX 767
+#define VSILAN_QBASE_VSIBASE_S 0
+#define VSILAN_QBASE_VSIBASE_M MAKEMASK(0x7FF, 0)
+#define VSILAN_QBASE_VSIQTABLE_ENA_S 11
+#define VSILAN_QBASE_VSIQTABLE_ENA_M BIT(11)
+#define VSILAN_QTABLE(_i, _VSI) (0x00440000 + ((_i) * 4096 + (_VSI) * 4)) /* _i=0...7, _VSI=0...767 */ /* Reset Source: PFR */
+#define VSILAN_QTABLE_MAX_INDEX 7
+#define VSILAN_QTABLE_QINDEX_0_S 0
+#define VSILAN_QTABLE_QINDEX_0_M MAKEMASK(0x7FF, 0)
+#define VSILAN_QTABLE_QINDEX_1_S 16
+#define VSILAN_QTABLE_QINDEX_1_M MAKEMASK(0x7FF, 16)
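+/* Sketch: each VSILAN_QTABLE word maps two VSI-relative queues to absolute
+ * LAN queue indices via the QINDEX_0/QINDEX_1 fields above; a hypothetical
+ * packing of queues "q0" and "q1" into entry "i" of VSI "vsi":
+ *
+ *	u32 val = ((q0 << VSILAN_QTABLE_QINDEX_0_S) &
+ *		   VSILAN_QTABLE_QINDEX_0_M) |
+ *		  ((q1 << VSILAN_QTABLE_QINDEX_1_S) &
+ *		   VSILAN_QTABLE_QINDEX_1_M);
+ *	ICE_WRITE_REG(hw, VSILAN_QTABLE(i, vsi), val);
+ */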
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GCP 0x001E31C0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GCP_HSEC_CTL_RX_ENABLE_GCP_S 0
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GCP_HSEC_CTL_RX_ENABLE_GCP_M BIT(0)
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GPP 0x001E34C0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GPP_HSEC_CTL_RX_ENABLE_GPP_S 0
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GPP_HSEC_CTL_RX_ENABLE_GPP_M BIT(0)
+#define PRTMAC_HSEC_CTL_RX_ENABLE_PPP 0x001E35C0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_ENABLE_PPP_HSEC_CTL_RX_ENABLE_PPP_S 0
+#define PRTMAC_HSEC_CTL_RX_ENABLE_PPP_HSEC_CTL_RX_ENABLE_PPP_M BIT(0)
+#define PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL 0x001E36C0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL_HSEC_CTL_RX_FORWARD_CONTROL_S 0
+#define PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL_HSEC_CTL_RX_FORWARD_CONTROL_M BIT(0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1 0x001E3220 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2 0x001E3240 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE 0x001E3180 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_HSEC_CTL_RX_PAUSE_ENABLE_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_HSEC_CTL_RX_PAUSE_ENABLE_M MAKEMASK(0x1FF, 0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1 0x001E3280 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1_HSEC_CTL_RX_PAUSE_SA_PART1_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1_HSEC_CTL_RX_PAUSE_SA_PART1_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2 0x001E32A0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2_HSEC_CTL_RX_PAUSE_SA_PART2_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2_HSEC_CTL_RX_PAUSE_SA_PART2_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT 0x001E3C40 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_S 0
+#define PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE 0x001E31A0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_HSEC_CTL_TX_PAUSE_ENABLE_S 0
+#define PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_HSEC_CTL_TX_PAUSE_ENABLE_M MAKEMASK(0x1FF, 0)
+#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA(_i) (0x001E36E0 + ((_i) * 32)) /* _i=0...8 */ /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_MAX_INDEX 8
+#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_S 0
+#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER(_i) (0x001E3800 + ((_i) * 32)) /* _i=0...8 */ /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_MAX_INDEX 8
+#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_S 0
+#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_TX_SA_PART1 0x001E3960 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_SA_PART1_HSEC_CTL_TX_SA_PART1_S 0
+#define PRTMAC_HSEC_CTL_TX_SA_PART1_HSEC_CTL_TX_SA_PART1_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_HSEC_CTL_TX_SA_PART2 0x001E3980 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_SA_PART2_HSEC_CTL_TX_SA_PART2_S 0
+#define PRTMAC_HSEC_CTL_TX_SA_PART2_HSEC_CTL_TX_SA_PART2_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_LINK_DOWN_COUNTER 0x001E47C0 /* Reset Source: GLOBR */
+#define PRTMAC_LINK_DOWN_COUNTER_LINK_DOWN_COUNTER_S 0
+#define PRTMAC_LINK_DOWN_COUNTER_LINK_DOWN_COUNTER_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_MD_OVRRIDE_ENABLE(_i) (0x001E3C60 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: GLOBR */
+#define PRTMAC_MD_OVRRIDE_ENABLE_MAX_INDEX 7
+#define PRTMAC_MD_OVRRIDE_ENABLE_PRTMAC_MD_OVRRIDE_ENABLE_S 0
+#define PRTMAC_MD_OVRRIDE_ENABLE_PRTMAC_MD_OVRRIDE_ENABLE_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_MD_OVRRIDE_VAL(_i) (0x001E3D60 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: GLOBR */
+#define PRTMAC_MD_OVRRIDE_VAL_MAX_INDEX 7
+#define PRTMAC_MD_OVRRIDE_VAL_PRTMAC_MD_OVRRIDE_VAL_S 0
+#define PRTMAC_MD_OVRRIDE_VAL_PRTMAC_MD_OVRRIDE_VAL_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_RX_CNT_MRKR 0x001E48E0 /* Reset Source: GLOBR */
+#define PRTMAC_RX_CNT_MRKR_RX_CNT_MRKR_S 0
+#define PRTMAC_RX_CNT_MRKR_RX_CNT_MRKR_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_RX_PKT_DRP_CNT 0x001E3C20 /* Reset Source: GLOBR */
+#define PRTMAC_RX_PKT_DRP_CNT_RX_PKT_DRP_CNT_S 0
+#define PRTMAC_RX_PKT_DRP_CNT_RX_PKT_DRP_CNT_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_RX_PKT_DRP_CNT_RX_MKR_PKT_DRP_CNT_S 16
+#define PRTMAC_RX_PKT_DRP_CNT_RX_MKR_PKT_DRP_CNT_M MAKEMASK(0xFFFF, 16)
+#define PRTMAC_TX_CNT_MRKR 0x001E48C0 /* Reset Source: GLOBR */
+#define PRTMAC_TX_CNT_MRKR_TX_CNT_MRKR_S 0
+#define PRTMAC_TX_CNT_MRKR_TX_CNT_MRKR_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_TX_LNK_UP_CNT 0x001E4840 /* Reset Source: GLOBR */
+#define PRTMAC_TX_LNK_UP_CNT_TX_LINK_UP_CNT_S 0
+#define PRTMAC_TX_LNK_UP_CNT_TX_LINK_UP_CNT_M MAKEMASK(0xFFFF, 0)
+#define GL_MDCK_CFG1_TX_PQM 0x002D2DF4 /* Reset Source: CORER */
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_DATA_LEN_S 0
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_DATA_LEN_M MAKEMASK(0xFF, 0)
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_PKT_CNT_S 8
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_PKT_CNT_M MAKEMASK(0x3F, 8)
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_DESC_CNT_S 16
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_DESC_CNT_M MAKEMASK(0x3F, 16)
+#define GL_MDCK_EN_TX_PQM 0x002D2DFC /* Reset Source: CORER */
+#define GL_MDCK_EN_TX_PQM_PCI_DUMMY_COMP_S 0
+#define GL_MDCK_EN_TX_PQM_PCI_DUMMY_COMP_M BIT(0)
+#define GL_MDCK_EN_TX_PQM_PCI_UR_COMP_S 1
+#define GL_MDCK_EN_TX_PQM_PCI_UR_COMP_M BIT(1)
+#define GL_MDCK_EN_TX_PQM_RCV_SH_BE_LSO_S 3
+#define GL_MDCK_EN_TX_PQM_RCV_SH_BE_LSO_M BIT(3)
+#define GL_MDCK_EN_TX_PQM_Q_FL_MNG_EPY_CH_S 4
+#define GL_MDCK_EN_TX_PQM_Q_FL_MNG_EPY_CH_M BIT(4)
+#define GL_MDCK_EN_TX_PQM_Q_EPY_MNG_FL_CH_S 5
+#define GL_MDCK_EN_TX_PQM_Q_EPY_MNG_FL_CH_M BIT(5)
+#define GL_MDCK_EN_TX_PQM_LSO_NUMDESCS_ZERO_S 6
+#define GL_MDCK_EN_TX_PQM_LSO_NUMDESCS_ZERO_M BIT(6)
+#define GL_MDCK_EN_TX_PQM_LSO_LENGTH_ZERO_S 7
+#define GL_MDCK_EN_TX_PQM_LSO_LENGTH_ZERO_M BIT(7)
+#define GL_MDCK_EN_TX_PQM_LSO_MSS_BELOW_MIN_S 8
+#define GL_MDCK_EN_TX_PQM_LSO_MSS_BELOW_MIN_M BIT(8)
+#define GL_MDCK_EN_TX_PQM_LSO_MSS_ABOVE_MAX_S 9
+#define GL_MDCK_EN_TX_PQM_LSO_MSS_ABOVE_MAX_M BIT(9)
+#define GL_MDCK_EN_TX_PQM_LSO_HDR_SIZE_ZERO_S 10
+#define GL_MDCK_EN_TX_PQM_LSO_HDR_SIZE_ZERO_M BIT(10)
+#define GL_MDCK_EN_TX_PQM_RCV_CNT_BE_LSO_S 11
+#define GL_MDCK_EN_TX_PQM_RCV_CNT_BE_LSO_M BIT(11)
+#define GL_MDCK_EN_TX_PQM_SKIP_ONE_QT_ONLY_S 12
+#define GL_MDCK_EN_TX_PQM_SKIP_ONE_QT_ONLY_M BIT(12)
+#define GL_MDCK_EN_TX_PQM_LSO_PKTCNT_ZERO_S 13
+#define GL_MDCK_EN_TX_PQM_LSO_PKTCNT_ZERO_M BIT(13)
+#define GL_MDCK_EN_TX_PQM_SSO_LENGTH_ZERO_S 14
+#define GL_MDCK_EN_TX_PQM_SSO_LENGTH_ZERO_M BIT(14)
+#define GL_MDCK_EN_TX_PQM_SSO_LENGTH_EXCEED_S 15
+#define GL_MDCK_EN_TX_PQM_SSO_LENGTH_EXCEED_M BIT(15)
+#define GL_MDCK_EN_TX_PQM_SSO_PKTCNT_ZERO_S 16
+#define GL_MDCK_EN_TX_PQM_SSO_PKTCNT_ZERO_M BIT(16)
+#define GL_MDCK_EN_TX_PQM_SSO_PKTCNT_EXCEED_S 17
+#define GL_MDCK_EN_TX_PQM_SSO_PKTCNT_EXCEED_M BIT(17)
+#define GL_MDCK_EN_TX_PQM_SSO_NUMDESCS_ZERO_S 18
+#define GL_MDCK_EN_TX_PQM_SSO_NUMDESCS_ZERO_M BIT(18)
+#define GL_MDCK_EN_TX_PQM_SSO_NUMDESCS_EXCEED_S 19
+#define GL_MDCK_EN_TX_PQM_SSO_NUMDESCS_EXCEED_M BIT(19)
+#define GL_MDCK_EN_TX_PQM_TAIL_GT_RING_LENGTH_S 20
+#define GL_MDCK_EN_TX_PQM_TAIL_GT_RING_LENGTH_M BIT(20)
+#define GL_MDCK_EN_TX_PQM_RESERVED_DBL_TYPE_S 21
+#define GL_MDCK_EN_TX_PQM_RESERVED_DBL_TYPE_M BIT(21)
+#define GL_MDCK_EN_TX_PQM_ILLEGAL_HEAD_DROP_DBL_S 22
+#define GL_MDCK_EN_TX_PQM_ILLEGAL_HEAD_DROP_DBL_M BIT(22)
+#define GL_MDCK_EN_TX_PQM_LSO_OVER_COMMS_Q_S 23
+#define GL_MDCK_EN_TX_PQM_LSO_OVER_COMMS_Q_M BIT(23)
+#define GL_MDCK_EN_TX_PQM_ILLEGAL_VF_QNUM_S 24
+#define GL_MDCK_EN_TX_PQM_ILLEGAL_VF_QNUM_M BIT(24)
+#define GL_MDCK_EN_TX_PQM_QTAIL_GT_RING_LENGTH_S 25
+#define GL_MDCK_EN_TX_PQM_QTAIL_GT_RING_LENGTH_M BIT(25)
+#define GL_MDCK_EN_TX_PQM_RSVD_S 26
+#define GL_MDCK_EN_TX_PQM_RSVD_M MAKEMASK(0x3F, 26)
+#define GL_MDCK_RX 0x0029422C /* Reset Source: CORER */
+#define GL_MDCK_RX_DESC_ADDR_S 0
+#define GL_MDCK_RX_DESC_ADDR_M BIT(0)
+#define GL_MDET_RX 0x00294C00 /* Reset Source: CORER */
+#define GL_MDET_RX_QNUM_S 0
+#define GL_MDET_RX_QNUM_M MAKEMASK(0x7FFF, 0)
+#define GL_MDET_RX_VF_NUM_S 15
+#define GL_MDET_RX_VF_NUM_M MAKEMASK(0xFF, 15)
+#define GL_MDET_RX_PF_NUM_S 23
+#define GL_MDET_RX_PF_NUM_M MAKEMASK(0x7, 23)
+#define GL_MDET_RX_MAL_TYPE_S 26
+#define GL_MDET_RX_MAL_TYPE_M MAKEMASK(0x1F, 26)
+#define GL_MDET_RX_VALID_S 31
+#define GL_MDET_RX_VALID_M BIT(31)
+#define GL_MDET_TX_PQM 0x002D2E00 /* Reset Source: CORER */
+#define GL_MDET_TX_PQM_PF_NUM_S 0
+#define GL_MDET_TX_PQM_PF_NUM_M MAKEMASK(0x7, 0)
+#define GL_MDET_TX_PQM_VF_NUM_S 4
+#define GL_MDET_TX_PQM_VF_NUM_M MAKEMASK(0xFF, 4)
+#define GL_MDET_TX_PQM_QNUM_S 12
+#define GL_MDET_TX_PQM_QNUM_M MAKEMASK(0x3FFF, 12)
+#define GL_MDET_TX_PQM_MAL_TYPE_S 26
+#define GL_MDET_TX_PQM_MAL_TYPE_M MAKEMASK(0x1F, 26)
+#define GL_MDET_TX_PQM_VALID_S 31
+#define GL_MDET_TX_PQM_VALID_M BIT(31)
+#define GL_MDET_TX_TCLAN 0x000FC068 /* Reset Source: CORER */
+#define GL_MDET_TX_TCLAN_QNUM_S 0
+#define GL_MDET_TX_TCLAN_QNUM_M MAKEMASK(0x7FFF, 0)
+#define GL_MDET_TX_TCLAN_VF_NUM_S 15
+#define GL_MDET_TX_TCLAN_VF_NUM_M MAKEMASK(0xFF, 15)
+#define GL_MDET_TX_TCLAN_PF_NUM_S 23
+#define GL_MDET_TX_TCLAN_PF_NUM_M MAKEMASK(0x7, 23)
+#define GL_MDET_TX_TCLAN_MAL_TYPE_S 26
+#define GL_MDET_TX_TCLAN_MAL_TYPE_M MAKEMASK(0x1F, 26)
+#define GL_MDET_TX_TCLAN_VALID_S 31
+#define GL_MDET_TX_TCLAN_VALID_M BIT(31)
+#define PF_MDET_RX 0x00294280 /* Reset Source: CORER */
+#define PF_MDET_RX_VALID_S 0
+#define PF_MDET_RX_VALID_M BIT(0)
+#define PF_MDET_TX_PQM 0x002D2C80 /* Reset Source: CORER */
+#define PF_MDET_TX_PQM_VALID_S 0
+#define PF_MDET_TX_PQM_VALID_M BIT(0)
+#define PF_MDET_TX_TCLAN 0x000FC000 /* Reset Source: CORER */
+#define PF_MDET_TX_TCLAN_VALID_S 0
+#define PF_MDET_TX_TCLAN_VALID_M BIT(0)
+#define PF_MDET_TX_TDPU 0x00040800 /* Reset Source: CORER */
+#define PF_MDET_TX_TDPU_VALID_S 0
+#define PF_MDET_TX_TDPU_VALID_M BIT(0)
+#define VP_MDET_RX(_VF) (0x00294400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VP_MDET_RX_MAX_INDEX 255
+#define VP_MDET_RX_VALID_S 0
+#define VP_MDET_RX_VALID_M BIT(0)
+#define VP_MDET_TX_PQM(_VF) (0x002D2000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VP_MDET_TX_PQM_MAX_INDEX 255
+#define VP_MDET_TX_PQM_VALID_S 0
+#define VP_MDET_TX_PQM_VALID_M BIT(0)
+#define VP_MDET_TX_TCLAN(_VF) (0x000FB800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VP_MDET_TX_TCLAN_MAX_INDEX 255
+#define VP_MDET_TX_TCLAN_VALID_S 0
+#define VP_MDET_TX_TCLAN_VALID_M BIT(0)
+#define VP_MDET_TX_TDPU(_VF) (0x00040000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VP_MDET_TX_TDPU_MAX_INDEX 255
+#define VP_MDET_TX_TDPU_VALID_S 0
+#define VP_MDET_TX_TDPU_VALID_M BIT(0)
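[The GL_MDET_*/PF_MDET_*/VP_MDET_* registers above report malicious driver
detection events per pipeline stage (RX, TX PQM, TX TCLAN, TX TDPU). A hedged
sketch of decoding one TX event with these masks; the write-to-clear at the
end mirrors what the Linux ice driver does and is an assumption here:

  u32 reg = rd32(hw, GL_MDET_TX_PQM);

  if (reg & GL_MDET_TX_PQM_VALID_M) {
          u8 pf     = (reg & GL_MDET_TX_PQM_PF_NUM_M) >> GL_MDET_TX_PQM_PF_NUM_S;
          u16 vf    = (reg & GL_MDET_TX_PQM_VF_NUM_M) >> GL_MDET_TX_PQM_VF_NUM_S;
          u16 queue = (reg & GL_MDET_TX_PQM_QNUM_M) >> GL_MDET_TX_PQM_QNUM_S;
          u8 event  = (reg & GL_MDET_TX_PQM_MAL_TYPE_M) >> GL_MDET_TX_PQM_MAL_TYPE_S;

          /* ... log pf/vf/queue/event, then clear the latched event ... */
          wr32(hw, GL_MDET_TX_PQM, 0xFFFFFFFF);
  }

The per-PF and per-VF PF_MDET_*/VP_MDET_* valid bits are cleared the same way.]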
+#define GENERAL_MNG_FW_DBG_CSR(_i) (0x000B6180 + ((_i) * 4)) /* _i=0...9 */ /* Reset Source: POR */
+#define GENERAL_MNG_FW_DBG_CSR_MAX_INDEX 9
+#define GENERAL_MNG_FW_DBG_CSR_GENERAL_FW_DBG_S 0
+#define GENERAL_MNG_FW_DBG_CSR_GENERAL_FW_DBG_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_FWRESETCNT 0x00083100 /* Reset Source: POR */
+#define GL_FWRESETCNT_FWRESETCNT_S 0
+#define GL_FWRESETCNT_FWRESETCNT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_MNG_FW_RAM_STAT 0x0008309C /* Reset Source: POR */
+#define GL_MNG_FW_RAM_STAT_FW_RAM_RST_STAT_S 0
+#define GL_MNG_FW_RAM_STAT_FW_RAM_RST_STAT_M BIT(0)
+#define GL_MNG_FW_RAM_STAT_MNG_MEM_ECC_ERR_S 1
+#define GL_MNG_FW_RAM_STAT_MNG_MEM_ECC_ERR_M BIT(1)
+#define GL_MNG_FWSM 0x000B6134 /* Reset Source: POR */
+#define GL_MNG_FWSM_FW_MODES_S 0
+#define GL_MNG_FWSM_FW_MODES_M MAKEMASK(0x3, 0)
+#define GL_MNG_FWSM_RSV0_S 2
+#define GL_MNG_FWSM_RSV0_M MAKEMASK(0xFF, 2)
+#define GL_MNG_FWSM_EEP_RELOAD_IND_S 10
+#define GL_MNG_FWSM_EEP_RELOAD_IND_M BIT(10)
+#define GL_MNG_FWSM_RSV1_S 11
+#define GL_MNG_FWSM_RSV1_M MAKEMASK(0xF, 11)
+#define GL_MNG_FWSM_RSV2_S 15
+#define GL_MNG_FWSM_RSV2_M BIT(15)
+#define GL_MNG_FWSM_PCIR_AL_FAILURE_S 16
+#define GL_MNG_FWSM_PCIR_AL_FAILURE_M BIT(16)
+#define GL_MNG_FWSM_POR_AL_FAILURE_S 17
+#define GL_MNG_FWSM_POR_AL_FAILURE_M BIT(17)
+#define GL_MNG_FWSM_RSV3_S 18
+#define GL_MNG_FWSM_RSV3_M BIT(18)
+#define GL_MNG_FWSM_EXT_ERR_IND_S 19
+#define GL_MNG_FWSM_EXT_ERR_IND_M MAKEMASK(0x3F, 19)
+#define GL_MNG_FWSM_RSV4_S 25
+#define GL_MNG_FWSM_RSV4_M BIT(25)
+#define GL_MNG_FWSM_RESERVED_11_S 26
+#define GL_MNG_FWSM_RESERVED_11_M MAKEMASK(0xF, 26)
+#define GL_MNG_FWSM_RSV5_S 30
+#define GL_MNG_FWSM_RSV5_M MAKEMASK(0x3, 30)
+#define GL_MNG_HWARB_CTRL 0x000B6130 /* Reset Source: POR */
+#define GL_MNG_HWARB_CTRL_NCSI_ARB_EN_S 0
+#define GL_MNG_HWARB_CTRL_NCSI_ARB_EN_M BIT(0)
+#define GL_MNG_SHA_EXTEND(_i) (0x00083120 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: EMPR */
+#define GL_MNG_SHA_EXTEND_MAX_INDEX 7
+#define GL_MNG_SHA_EXTEND_GL_MNG_SHA_EXTEND_S 0
+#define GL_MNG_SHA_EXTEND_GL_MNG_SHA_EXTEND_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_MNG_SHA_EXTEND_ROM(_i) (0x00083160 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: EMPR */
+#define GL_MNG_SHA_EXTEND_ROM_MAX_INDEX 7
+#define GL_MNG_SHA_EXTEND_ROM_GL_MNG_SHA_EXTEND_ROM_S 0
+#define GL_MNG_SHA_EXTEND_ROM_GL_MNG_SHA_EXTEND_ROM_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_MNG_SHA_EXTEND_STATUS 0x00083148 /* Reset Source: EMPR */
+#define GL_MNG_SHA_EXTEND_STATUS_STAGE_S 0
+#define GL_MNG_SHA_EXTEND_STATUS_STAGE_M MAKEMASK(0x7, 0)
+#define GL_MNG_SHA_EXTEND_STATUS_FW_HALTED_S 30
+#define GL_MNG_SHA_EXTEND_STATUS_FW_HALTED_M BIT(30)
+#define GL_MNG_SHA_EXTEND_STATUS_DONE_S 31
+#define GL_MNG_SHA_EXTEND_STATUS_DONE_M BIT(31)
+#define GL_SWT_PRT2MDEF(_i) (0x00216018 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: POR */
+#define GL_SWT_PRT2MDEF_MAX_INDEX 31
+#define GL_SWT_PRT2MDEF_MDEFIDX_S 0
+#define GL_SWT_PRT2MDEF_MDEFIDX_M MAKEMASK(0x7, 0)
+#define GL_SWT_PRT2MDEF_MDEFENA_S 31
+#define GL_SWT_PRT2MDEF_MDEFENA_M BIT(31)
+#define PRT_MNG_MANC 0x00214720 /* Reset Source: POR */
+#define PRT_MNG_MANC_FLOW_CONTROL_DISCARD_S 0
+#define PRT_MNG_MANC_FLOW_CONTROL_DISCARD_M BIT(0)
+#define PRT_MNG_MANC_NCSI_DISCARD_S 1
+#define PRT_MNG_MANC_NCSI_DISCARD_M BIT(1)
+#define PRT_MNG_MANC_RCV_TCO_EN_S 17
+#define PRT_MNG_MANC_RCV_TCO_EN_M BIT(17)
+#define PRT_MNG_MANC_RCV_ALL_S 19
+#define PRT_MNG_MANC_RCV_ALL_M BIT(19)
+#define PRT_MNG_MANC_FIXED_NET_TYPE_S 25
+#define PRT_MNG_MANC_FIXED_NET_TYPE_M BIT(25)
+#define PRT_MNG_MANC_NET_TYPE_S 26
+#define PRT_MNG_MANC_NET_TYPE_M BIT(26)
+#define PRT_MNG_MANC_EN_BMC2OS_S 28
+#define PRT_MNG_MANC_EN_BMC2OS_M BIT(28)
+#define PRT_MNG_MANC_EN_BMC2NET_S 29
+#define PRT_MNG_MANC_EN_BMC2NET_M BIT(29)
+#define PRT_MNG_MAVTV(_i) (0x00214780 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: POR */
+#define PRT_MNG_MAVTV_MAX_INDEX 7
+#define PRT_MNG_MAVTV_VID_S 0
+#define PRT_MNG_MAVTV_VID_M MAKEMASK(0xFFF, 0)
+#define PRT_MNG_MDEF(_i) (0x00214880 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: POR */
+#define PRT_MNG_MDEF_MAX_INDEX 7
+#define PRT_MNG_MDEF_MAC_EXACT_AND_S 0
+#define PRT_MNG_MDEF_MAC_EXACT_AND_M MAKEMASK(0xF, 0)
+#define PRT_MNG_MDEF_BROADCAST_AND_S 4
+#define PRT_MNG_MDEF_BROADCAST_AND_M BIT(4)
+#define PRT_MNG_MDEF_VLAN_AND_S 5
+#define PRT_MNG_MDEF_VLAN_AND_M MAKEMASK(0xFF, 5)
+#define PRT_MNG_MDEF_IPV4_ADDRESS_AND_S 13
+#define PRT_MNG_MDEF_IPV4_ADDRESS_AND_M MAKEMASK(0xF, 13)
+#define PRT_MNG_MDEF_IPV6_ADDRESS_AND_S 17
+#define PRT_MNG_MDEF_IPV6_ADDRESS_AND_M MAKEMASK(0xF, 17)
+#define PRT_MNG_MDEF_MAC_EXACT_OR_S 21
+#define PRT_MNG_MDEF_MAC_EXACT_OR_M MAKEMASK(0xF, 21)
+#define PRT_MNG_MDEF_BROADCAST_OR_S 25
+#define PRT_MNG_MDEF_BROADCAST_OR_M BIT(25)
+#define PRT_MNG_MDEF_MULTICAST_AND_S 26
+#define PRT_MNG_MDEF_MULTICAST_AND_M BIT(26)
+#define PRT_MNG_MDEF_ARP_REQUEST_OR_S 27
+#define PRT_MNG_MDEF_ARP_REQUEST_OR_M BIT(27)
+#define PRT_MNG_MDEF_ARP_RESPONSE_OR_S 28
+#define PRT_MNG_MDEF_ARP_RESPONSE_OR_M BIT(28)
+#define PRT_MNG_MDEF_NEIGHBOR_DISCOVERY_134_OR_S 29
+#define PRT_MNG_MDEF_NEIGHBOR_DISCOVERY_134_OR_M BIT(29)
+#define PRT_MNG_MDEF_PORT_0X298_OR_S 30
+#define PRT_MNG_MDEF_PORT_0X298_OR_M BIT(30)
+#define PRT_MNG_MDEF_PORT_0X26F_OR_S 31
+#define PRT_MNG_MDEF_PORT_0X26F_OR_M BIT(31)
+#define PRT_MNG_MDEF_EXT(_i) (0x00214A00 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: POR */
+#define PRT_MNG_MDEF_EXT_MAX_INDEX 7
+#define PRT_MNG_MDEF_EXT_L2_ETHERTYPE_AND_S 0
+#define PRT_MNG_MDEF_EXT_L2_ETHERTYPE_AND_M MAKEMASK(0xF, 0)
+#define PRT_MNG_MDEF_EXT_L2_ETHERTYPE_OR_S 4
+#define PRT_MNG_MDEF_EXT_L2_ETHERTYPE_OR_M MAKEMASK(0xF, 4)
+#define PRT_MNG_MDEF_EXT_FLEX_PORT_OR_S 8
+#define PRT_MNG_MDEF_EXT_FLEX_PORT_OR_M MAKEMASK(0xFFFF, 8)
+#define PRT_MNG_MDEF_EXT_FLEX_TCO_S 24
+#define PRT_MNG_MDEF_EXT_FLEX_TCO_M BIT(24)
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_135_OR_S 25
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_135_OR_M BIT(25)
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_136_OR_S 26
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_136_OR_M BIT(26)
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_137_OR_S 27
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_137_OR_M BIT(27)
+#define PRT_MNG_MDEF_EXT_ICMP_OR_S 28
+#define PRT_MNG_MDEF_EXT_ICMP_OR_M BIT(28)
+#define PRT_MNG_MDEF_EXT_MLD_S 29
+#define PRT_MNG_MDEF_EXT_MLD_M BIT(29)
+#define PRT_MNG_MDEF_EXT_APPLY_TO_NETWORK_TRAFFIC_S 30
+#define PRT_MNG_MDEF_EXT_APPLY_TO_NETWORK_TRAFFIC_M BIT(30)
+#define PRT_MNG_MDEF_EXT_APPLY_TO_HOST_TRAFFIC_S 31
+#define PRT_MNG_MDEF_EXT_APPLY_TO_HOST_TRAFFIC_M BIT(31)
+#define PRT_MNG_MDEFVSI(_i) (0x00214980 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_MDEFVSI_MAX_INDEX 3
+#define PRT_MNG_MDEFVSI_MDEFVSI_2N_S 0
+#define PRT_MNG_MDEFVSI_MDEFVSI_2N_M MAKEMASK(0xFFFF, 0)
+#define PRT_MNG_MDEFVSI_MDEFVSI_2NP1_S 16
+#define PRT_MNG_MDEFVSI_MDEFVSI_2NP1_M MAKEMASK(0xFFFF, 16)
+#define PRT_MNG_METF(_i) (0x00214120 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_METF_MAX_INDEX 3
+#define PRT_MNG_METF_ETYPE_S 0
+#define PRT_MNG_METF_ETYPE_M MAKEMASK(0xFFFF, 0)
+#define PRT_MNG_METF_POLARITY_S 30
+#define PRT_MNG_METF_POLARITY_M BIT(30)
+#define PRT_MNG_MFUTP(_i) (0x00214320 + ((_i) * 32)) /* _i=0...15 */ /* Reset Source: POR */
+#define PRT_MNG_MFUTP_MAX_INDEX 15
+#define PRT_MNG_MFUTP_MFUTP_N_S 0
+#define PRT_MNG_MFUTP_MFUTP_N_M MAKEMASK(0xFFFF, 0)
+#define PRT_MNG_MFUTP_UDP_S 16
+#define PRT_MNG_MFUTP_UDP_M BIT(16)
+#define PRT_MNG_MFUTP_TCP_S 17
+#define PRT_MNG_MFUTP_TCP_M BIT(17)
+#define PRT_MNG_MFUTP_SOURCE_DESTINATION_S 18
+#define PRT_MNG_MFUTP_SOURCE_DESTINATION_M BIT(18)
+#define PRT_MNG_MIPAF4(_i) (0x002141A0 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_MIPAF4_MAX_INDEX 3
+#define PRT_MNG_MIPAF4_MIPAF_S 0
+#define PRT_MNG_MIPAF4_MIPAF_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRT_MNG_MIPAF6(_i) (0x00214520 + ((_i) * 32)) /* _i=0...15 */ /* Reset Source: POR */
+#define PRT_MNG_MIPAF6_MAX_INDEX 15
+#define PRT_MNG_MIPAF6_MIPAF_S 0
+#define PRT_MNG_MIPAF6_MIPAF_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRT_MNG_MMAH(_i) (0x00214220 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_MMAH_MAX_INDEX 3
+#define PRT_MNG_MMAH_MMAH_S 0
+#define PRT_MNG_MMAH_MMAH_M MAKEMASK(0xFFFF, 0)
+#define PRT_MNG_MMAL(_i) (0x002142A0 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_MMAL_MAX_INDEX 3
+#define PRT_MNG_MMAL_MMAL_S 0
+#define PRT_MNG_MMAL_MMAL_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRT_MNG_MNGONLY 0x00214740 /* Reset Source: POR */
+#define PRT_MNG_MNGONLY_EXCLUSIVE_TO_MANAGEABILITY_S 0
+#define PRT_MNG_MNGONLY_EXCLUSIVE_TO_MANAGEABILITY_M MAKEMASK(0xFF, 0)
+#define PRT_MNG_MSFM 0x00214760 /* Reset Source: POR */
+#define PRT_MNG_MSFM_PORT_26F_UDP_S 0
+#define PRT_MNG_MSFM_PORT_26F_UDP_M BIT(0)
+#define PRT_MNG_MSFM_PORT_26F_TCP_S 1
+#define PRT_MNG_MSFM_PORT_26F_TCP_M BIT(1)
+#define PRT_MNG_MSFM_PORT_298_UDP_S 2
+#define PRT_MNG_MSFM_PORT_298_UDP_M BIT(2)
+#define PRT_MNG_MSFM_PORT_298_TCP_S 3
+#define PRT_MNG_MSFM_PORT_298_TCP_M BIT(3)
+#define PRT_MNG_MSFM_IPV6_0_MASK_S 4
+#define PRT_MNG_MSFM_IPV6_0_MASK_M BIT(4)
+#define PRT_MNG_MSFM_IPV6_1_MASK_S 5
+#define PRT_MNG_MSFM_IPV6_1_MASK_M BIT(5)
+#define PRT_MNG_MSFM_IPV6_2_MASK_S 6
+#define PRT_MNG_MSFM_IPV6_2_MASK_M BIT(6)
+#define PRT_MNG_MSFM_IPV6_3_MASK_S 7
+#define PRT_MNG_MSFM_IPV6_3_MASK_M BIT(7)
+#define MSIX_PBA_PAGE(_i) (0x02E08000 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: FLR */
+#define MSIX_PBA_PAGE_MAX_INDEX 63
+#define MSIX_PBA_PAGE_PENBIT_S 0
+#define MSIX_PBA_PAGE_PENBIT_M MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_PBA1(_i) (0x00008000 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: FLR */
+#define MSIX_PBA1_MAX_INDEX 63
+#define MSIX_PBA1_PENBIT_S 0
+#define MSIX_PBA1_PENBIT_M MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TADD_PAGE(_i) (0x02E00000 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TADD_PAGE_MAX_INDEX 2047
+#define MSIX_TADD_PAGE_MSIXTADD10_S 0
+#define MSIX_TADD_PAGE_MSIXTADD10_M MAKEMASK(0x3, 0)
+#define MSIX_TADD_PAGE_MSIXTADD_S 2
+#define MSIX_TADD_PAGE_MSIXTADD_M MAKEMASK(0x3FFFFFFF, 2)
+#define MSIX_TADD1(_i) (0x00000000 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TADD1_MAX_INDEX 2047
+#define MSIX_TADD1_MSIXTADD10_S 0
+#define MSIX_TADD1_MSIXTADD10_M MAKEMASK(0x3, 0)
+#define MSIX_TADD1_MSIXTADD_S 2
+#define MSIX_TADD1_MSIXTADD_M MAKEMASK(0x3FFFFFFF, 2)
+#define MSIX_TMSG(_i) (0x00000008 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TMSG_MAX_INDEX 2047
+#define MSIX_TMSG_MSIXTMSG_S 0
+#define MSIX_TMSG_MSIXTMSG_M MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TMSG_PAGE(_i) (0x02E00008 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TMSG_PAGE_MAX_INDEX 2047
+#define MSIX_TMSG_PAGE_MSIXTMSG_S 0
+#define MSIX_TMSG_PAGE_MSIXTMSG_M MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TUADD_PAGE(_i) (0x02E00004 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TUADD_PAGE_MAX_INDEX 2047
+#define MSIX_TUADD_PAGE_MSIXTUADD_S 0
+#define MSIX_TUADD_PAGE_MSIXTUADD_M MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TUADD1(_i) (0x00000004 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TUADD1_MAX_INDEX 2047
+#define MSIX_TUADD1_MSIXTUADD_S 0
+#define MSIX_TUADD1_MSIXTUADD_M MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TVCTRL_PAGE(_i) (0x02E0000C + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TVCTRL_PAGE_MAX_INDEX 2047
+#define MSIX_TVCTRL_PAGE_MASK_S 0
+#define MSIX_TVCTRL_PAGE_MASK_M BIT(0)
+#define MSIX_TVCTRL1(_i) (0x0000000C + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TVCTRL1_MAX_INDEX 2047
+#define MSIX_TVCTRL1_MASK_S 0
+#define MSIX_TVCTRL1_MASK_M BIT(0)
+#define GLNVM_AL_DONE_HLP 0x000824C4 /* Reset Source: POR */
+#define GLNVM_AL_DONE_HLP_HLP_CORER_S 0
+#define GLNVM_AL_DONE_HLP_HLP_CORER_M BIT(0)
+#define GLNVM_AL_DONE_HLP_HLP_FULLR_S 1
+#define GLNVM_AL_DONE_HLP_HLP_FULLR_M BIT(1)
+#define GLNVM_ALTIMERS 0x000B6140 /* Reset Source: POR */
+#define GLNVM_ALTIMERS_PCI_ALTIMER_S 0
+#define GLNVM_ALTIMERS_PCI_ALTIMER_M MAKEMASK(0xFFF, 0)
+#define GLNVM_ALTIMERS_GEN_ALTIMER_S 12
+#define GLNVM_ALTIMERS_GEN_ALTIMER_M MAKEMASK(0xFFFFF, 12)
+#define GLNVM_FLA 0x000B6108 /* Reset Source: POR */
+#define GLNVM_FLA_LOCKED_S 6
+#define GLNVM_FLA_LOCKED_M BIT(6)
+#define GLNVM_GENS 0x000B6100 /* Reset Source: POR */
+#define GLNVM_GENS_NVM_PRES_S 0
+#define GLNVM_GENS_NVM_PRES_M BIT(0)
+#define GLNVM_GENS_SR_SIZE_S 5
+#define GLNVM_GENS_SR_SIZE_M MAKEMASK(0x7, 5)
+#define GLNVM_GENS_BANK1VAL_S 8
+#define GLNVM_GENS_BANK1VAL_M BIT(8)
+#define GLNVM_GENS_ALT_PRST_S 23
+#define GLNVM_GENS_ALT_PRST_M BIT(23)
+#define GLNVM_GENS_FL_AUTO_RD_S 25
+#define GLNVM_GENS_FL_AUTO_RD_M BIT(25)
+#define GLNVM_PROTCSR(_i) (0x000B6010 + ((_i) * 4)) /* _i=0...59 */ /* Reset Source: POR */
+#define GLNVM_PROTCSR_MAX_INDEX 59
+#define GLNVM_PROTCSR_ADDR_BLOCK_S 0
+#define GLNVM_PROTCSR_ADDR_BLOCK_M MAKEMASK(0xFFFFFF, 0)
+#define GLNVM_ULD 0x000B6008 /* Reset Source: POR */
+#define GLNVM_ULD_PCIER_DONE_S 0
+#define GLNVM_ULD_PCIER_DONE_M BIT(0)
+#define GLNVM_ULD_PCIER_DONE_1_S 1
+#define GLNVM_ULD_PCIER_DONE_1_M BIT(1)
+#define GLNVM_ULD_CORER_DONE_S 3
+#define GLNVM_ULD_CORER_DONE_M BIT(3)
+#define GLNVM_ULD_GLOBR_DONE_S 4
+#define GLNVM_ULD_GLOBR_DONE_M BIT(4)
+#define GLNVM_ULD_POR_DONE_S 5
+#define GLNVM_ULD_POR_DONE_M BIT(5)
+#define GLNVM_ULD_POR_DONE_1_S 8
+#define GLNVM_ULD_POR_DONE_1_M BIT(8)
+#define GLNVM_ULD_PCIER_DONE_2_S 9
+#define GLNVM_ULD_PCIER_DONE_2_M BIT(9)
+#define GLNVM_ULD_PE_DONE_S 10
+#define GLNVM_ULD_PE_DONE_M BIT(10)
+#define GLNVM_ULD_HLP_CORE_DONE_S 11
+#define GLNVM_ULD_HLP_CORE_DONE_M BIT(11)
+#define GLNVM_ULD_HLP_FULL_DONE_S 12
+#define GLNVM_ULD_HLP_FULL_DONE_M BIT(12)
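[GLNVM_ULD collects the per-unit "auto-load done" bits that firmware sets as a
reset completes; reset-check code typically polls until every bit it cares
about is set. A sketch under that assumption (delay_ms() is a hypothetical
stand-in for the OS delay helper; the exact mask and timeout are illustrative):

  u32 uld_mask = GLNVM_ULD_PCIER_DONE_M | GLNVM_ULD_CORER_DONE_M |
                 GLNVM_ULD_GLOBR_DONE_M | GLNVM_ULD_POR_DONE_M;
  int cnt;

  for (cnt = 0; cnt < 100; cnt++) {       /* ~10 s worst case */
          if ((rd32(hw, GLNVM_ULD) & uld_mask) == uld_mask)
                  break;                  /* all auto-load stages done */
          delay_ms(100);                  /* hypothetical delay helper */
  }
]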
+#define GLNVM_ULT 0x000B6154 /* Reset Source: POR */
+#define GLNVM_ULT_CONF_PCIR_AE_S 0
+#define GLNVM_ULT_CONF_PCIR_AE_M BIT(0)
+#define GLNVM_ULT_CONF_PCIRTL_AE_S 1
+#define GLNVM_ULT_CONF_PCIRTL_AE_M BIT(1)
+#define GLNVM_ULT_RESERVED_1_S 2
+#define GLNVM_ULT_RESERVED_1_M BIT(2)
+#define GLNVM_ULT_CONF_CORE_AE_S 3
+#define GLNVM_ULT_CONF_CORE_AE_M BIT(3)
+#define GLNVM_ULT_CONF_GLOBAL_AE_S 4
+#define GLNVM_ULT_CONF_GLOBAL_AE_M BIT(4)
+#define GLNVM_ULT_CONF_POR_AE_S 5
+#define GLNVM_ULT_CONF_POR_AE_M BIT(5)
+#define GLNVM_ULT_RESERVED_2_S 6
+#define GLNVM_ULT_RESERVED_2_M BIT(6)
+#define GLNVM_ULT_RESERVED_3_S 7
+#define GLNVM_ULT_RESERVED_3_M BIT(7)
+#define GLNVM_ULT_RESERVED_5_S 8
+#define GLNVM_ULT_RESERVED_5_M BIT(8)
+#define GLNVM_ULT_CONF_PCIALT_AE_S 9
+#define GLNVM_ULT_CONF_PCIALT_AE_M BIT(9)
+#define GLNVM_ULT_CONF_PE_AE_S 10
+#define GLNVM_ULT_CONF_PE_AE_M BIT(10)
+#define GLNVM_ULT_RESERVED_4_S 11
+#define GLNVM_ULT_RESERVED_4_M MAKEMASK(0x1FFFFF, 11)
+#define GL_COTF_MARKER_STATUS 0x00200200 /* Reset Source: CORER */
+#define GL_COTF_MARKER_STATUS_MRKR_BUSY_S 0
+#define GL_COTF_MARKER_STATUS_MRKR_BUSY_M MAKEMASK(0xFF, 0)
+#define GL_COTF_MARKER_TRIG_RCU_PRS(_i) (0x002001D4 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_COTF_MARKER_TRIG_RCU_PRS_MAX_INDEX 7
+#define GL_COTF_MARKER_TRIG_RCU_PRS_SET_RST_S 0
+#define GL_COTF_MARKER_TRIG_RCU_PRS_SET_RST_M BIT(0)
+#define GL_PRS_MARKER_ERROR 0x00200204 /* Reset Source: CORER */
+#define GL_PRS_MARKER_ERROR_XLR_CFG_ERR_S 0
+#define GL_PRS_MARKER_ERROR_XLR_CFG_ERR_M BIT(0)
+#define GL_PRS_MARKER_ERROR_QH_CFG_ERR_S 1
+#define GL_PRS_MARKER_ERROR_QH_CFG_ERR_M BIT(1)
+#define GL_PRS_MARKER_ERROR_COTF_CFG_ERR_S 2
+#define GL_PRS_MARKER_ERROR_COTF_CFG_ERR_M BIT(2)
+#define GL_PRS_RX_PIPE_INIT0(_i) (0x0020000C + ((_i) * 4)) /* _i=0...6 */ /* Reset Source: CORER */
+#define GL_PRS_RX_PIPE_INIT0_MAX_INDEX 6
+#define GL_PRS_RX_PIPE_INIT0_GPCSR_INIT_S 0
+#define GL_PRS_RX_PIPE_INIT0_GPCSR_INIT_M MAKEMASK(0xFFFF, 0)
+#define GL_PRS_RX_PIPE_INIT1 0x00200028 /* Reset Source: CORER */
+#define GL_PRS_RX_PIPE_INIT1_GPCSR_INIT_S 0
+#define GL_PRS_RX_PIPE_INIT1_GPCSR_INIT_M MAKEMASK(0xFFFF, 0)
+#define GL_PRS_RX_PIPE_INIT2 0x0020002C /* Reset Source: CORER */
+#define GL_PRS_RX_PIPE_INIT2_GPCSR_INIT_S 0
+#define GL_PRS_RX_PIPE_INIT2_GPCSR_INIT_M MAKEMASK(0xFFFF, 0)
+#define GL_PRS_RX_SIZE_CTRL 0x00200004 /* Reset Source: CORER */
+#define GL_PRS_RX_SIZE_CTRL_MIN_SIZE_S 0
+#define GL_PRS_RX_SIZE_CTRL_MIN_SIZE_M MAKEMASK(0x3FF, 0)
+#define GL_PRS_RX_SIZE_CTRL_MIN_SIZE_EN_S 15
+#define GL_PRS_RX_SIZE_CTRL_MIN_SIZE_EN_M BIT(15)
+#define GL_PRS_RX_SIZE_CTRL_MAX_SIZE_S 16
+#define GL_PRS_RX_SIZE_CTRL_MAX_SIZE_M MAKEMASK(0x3FF, 16)
+#define GL_PRS_RX_SIZE_CTRL_MAX_SIZE_EN_S 31
+#define GL_PRS_RX_SIZE_CTRL_MAX_SIZE_EN_M BIT(31)
+#define GL_PRS_TX_PIPE_INIT0(_i) (0x00202018 + ((_i) * 4)) /* _i=0...6 */ /* Reset Source: CORER */
+#define GL_PRS_TX_PIPE_INIT0_MAX_INDEX 6
+#define GL_PRS_TX_PIPE_INIT0_GPCSR_INIT_S 0
+#define GL_PRS_TX_PIPE_INIT0_GPCSR_INIT_M MAKEMASK(0xFFFF, 0)
+#define GL_PRS_TX_PIPE_INIT1 0x00202034 /* Reset Source: CORER */
+#define GL_PRS_TX_PIPE_INIT1_GPCSR_INIT_S 0
+#define GL_PRS_TX_PIPE_INIT1_GPCSR_INIT_M MAKEMASK(0xFFFF, 0)
+#define GL_PRS_TX_PIPE_INIT2 0x00202038 /* Reset Source: CORER */
+#define GL_PRS_TX_PIPE_INIT2_GPCSR_INIT_S 0
+#define GL_PRS_TX_PIPE_INIT2_GPCSR_INIT_M MAKEMASK(0xFFFF, 0)
+#define GL_PRS_TX_SIZE_CTRL 0x00202014 /* Reset Source: CORER */
+#define GL_PRS_TX_SIZE_CTRL_MIN_SIZE_S 0
+#define GL_PRS_TX_SIZE_CTRL_MIN_SIZE_M MAKEMASK(0x3FF, 0)
+#define GL_PRS_TX_SIZE_CTRL_MIN_SIZE_EN_S 15
+#define GL_PRS_TX_SIZE_CTRL_MIN_SIZE_EN_M BIT(15)
+#define GL_PRS_TX_SIZE_CTRL_MAX_SIZE_S 16
+#define GL_PRS_TX_SIZE_CTRL_MAX_SIZE_M MAKEMASK(0x3FF, 16)
+#define GL_PRS_TX_SIZE_CTRL_MAX_SIZE_EN_S 31
+#define GL_PRS_TX_SIZE_CTRL_MAX_SIZE_EN_M BIT(31)
+#define GL_QH_MARKER_STATUS 0x002001FC /* Reset Source: CORER */
+#define GL_QH_MARKER_STATUS_MRKR_BUSY_S 0
+#define GL_QH_MARKER_STATUS_MRKR_BUSY_M MAKEMASK(0xF, 0)
+#define GL_QH_MARKER_TRIG_RCU_PRS(_i) (0x002001C4 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GL_QH_MARKER_TRIG_RCU_PRS_MAX_INDEX 3
+#define GL_QH_MARKER_TRIG_RCU_PRS_QPID_S 0
+#define GL_QH_MARKER_TRIG_RCU_PRS_QPID_M MAKEMASK(0x3FFFF, 0)
+#define GL_QH_MARKER_TRIG_RCU_PRS_PE_TAG_S 18
+#define GL_QH_MARKER_TRIG_RCU_PRS_PE_TAG_M MAKEMASK(0xFF, 18)
+#define GL_QH_MARKER_TRIG_RCU_PRS_PORT_NUM_S 26
+#define GL_QH_MARKER_TRIG_RCU_PRS_PORT_NUM_M MAKEMASK(0x7, 26)
+#define GL_QH_MARKER_TRIG_RCU_PRS_SET_RST_S 31
+#define GL_QH_MARKER_TRIG_RCU_PRS_SET_RST_M BIT(31)
+#define GL_RPRS_ANA_CSR_CTRL 0x00200708 /* Reset Source: CORER */
+#define GL_RPRS_ANA_CSR_CTRL_SELECT_EN_S 0
+#define GL_RPRS_ANA_CSR_CTRL_SELECT_EN_M BIT(0)
+#define GL_RPRS_ANA_CSR_CTRL_SELECTED_ANA_S 1
+#define GL_RPRS_ANA_CSR_CTRL_SELECTED_ANA_M BIT(1)
+#define GL_TPRS_ANA_CSR_CTRL 0x00202100 /* Reset Source: CORER */
+#define GL_TPRS_ANA_CSR_CTRL_SELECT_EN_S 0
+#define GL_TPRS_ANA_CSR_CTRL_SELECT_EN_M BIT(0)
+#define GL_TPRS_ANA_CSR_CTRL_SELECTED_ANA_S 1
+#define GL_TPRS_ANA_CSR_CTRL_SELECTED_ANA_M BIT(1)
+#define GL_TPRS_MNG_PM_THR 0x00202004 /* Reset Source: CORER */
+#define GL_TPRS_MNG_PM_THR_MNG_PM_THR_S 0
+#define GL_TPRS_MNG_PM_THR_MNG_PM_THR_M MAKEMASK(0x3FFF, 0)
+#define GL_TPRS_PM_CNT(_i) (0x00202008 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GL_TPRS_PM_CNT_MAX_INDEX 1
+#define GL_TPRS_PM_CNT_GL_PRS_PM_CNT_S 0
+#define GL_TPRS_PM_CNT_GL_PRS_PM_CNT_M MAKEMASK(0x3FFF, 0)
+#define GL_TPRS_PM_THR 0x00202000 /* Reset Source: CORER */
+#define GL_TPRS_PM_THR_PM_THR_S 0
+#define GL_TPRS_PM_THR_PM_THR_M MAKEMASK(0x3FFF, 0)
+#define GL_XLR_MARKER_LOG_RCU_PRS(_i) (0x00200208 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_XLR_MARKER_LOG_RCU_PRS_MAX_INDEX 63
+#define GL_XLR_MARKER_LOG_RCU_PRS_XLR_TRIG_S 0
+#define GL_XLR_MARKER_LOG_RCU_PRS_XLR_TRIG_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_XLR_MARKER_STATUS(_i) (0x002001F4 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GL_XLR_MARKER_STATUS_MAX_INDEX 1
+#define GL_XLR_MARKER_STATUS_MRKR_BUSY_S 0
+#define GL_XLR_MARKER_STATUS_MRKR_BUSY_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_XLR_MARKER_TRIG_PE 0x005008C0 /* Reset Source: CORER */
+#define GL_XLR_MARKER_TRIG_PE_VM_VF_NUM_S 0
+#define GL_XLR_MARKER_TRIG_PE_VM_VF_NUM_M MAKEMASK(0x3FF, 0)
+#define GL_XLR_MARKER_TRIG_PE_VM_VF_TYPE_S 10
+#define GL_XLR_MARKER_TRIG_PE_VM_VF_TYPE_M MAKEMASK(0x3, 10)
+#define GL_XLR_MARKER_TRIG_PE_PF_NUM_S 12
+#define GL_XLR_MARKER_TRIG_PE_PF_NUM_M MAKEMASK(0x7, 12)
+#define GL_XLR_MARKER_TRIG_PE_PORT_NUM_S 16
+#define GL_XLR_MARKER_TRIG_PE_PORT_NUM_M MAKEMASK(0x7, 16)
+#define GL_XLR_MARKER_TRIG_RCU_PRS 0x002001C0 /* Reset Source: CORER */
+#define GL_XLR_MARKER_TRIG_RCU_PRS_VM_VF_NUM_S 0
+#define GL_XLR_MARKER_TRIG_RCU_PRS_VM_VF_NUM_M MAKEMASK(0x3FF, 0)
+#define GL_XLR_MARKER_TRIG_RCU_PRS_VM_VF_TYPE_S 10
+#define GL_XLR_MARKER_TRIG_RCU_PRS_VM_VF_TYPE_M MAKEMASK(0x3, 10)
+#define GL_XLR_MARKER_TRIG_RCU_PRS_PF_NUM_S 12
+#define GL_XLR_MARKER_TRIG_RCU_PRS_PF_NUM_M MAKEMASK(0x7, 12)
+#define GL_XLR_MARKER_TRIG_RCU_PRS_PORT_NUM_S 16
+#define GL_XLR_MARKER_TRIG_RCU_PRS_PORT_NUM_M MAKEMASK(0x7, 16)
+#define GL_CLKGATE_EVENTS 0x0009DE70 /* Reset Source: PERST */
+#define GL_CLKGATE_EVENTS_PRIMARY_CLKGATE_EVENTS_S 0
+#define GL_CLKGATE_EVENTS_PRIMARY_CLKGATE_EVENTS_M MAKEMASK(0xFFFF, 0)
+#define GL_CLKGATE_EVENTS_SIDEBAND_CLKGATE_EVENTS_S 16
+#define GL_CLKGATE_EVENTS_SIDEBAND_CLKGATE_EVENTS_M MAKEMASK(0xFFFF, 16)
+#define GLPCI_BYTCTH_NP_C 0x000BFDA8 /* Reset Source: PCIR */
+#define GLPCI_BYTCTH_NP_C_PCI_COUNT_BW_BCT_S 0
+#define GLPCI_BYTCTH_NP_C_PCI_COUNT_BW_BCT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_BYTCTH_P 0x0009E970 /* Reset Source: PCIR */
+#define GLPCI_BYTCTH_P_PCI_COUNT_BW_BCT_S 0
+#define GLPCI_BYTCTH_P_PCI_COUNT_BW_BCT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_BYTCTL_NP_C 0x000BFDAC /* Reset Source: PCIR */
+#define GLPCI_BYTCTL_NP_C_PCI_COUNT_BW_BCT_S 0
+#define GLPCI_BYTCTL_NP_C_PCI_COUNT_BW_BCT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_BYTCTL_P 0x0009E994 /* Reset Source: PCIR */
+#define GLPCI_BYTCTL_P_PCI_COUNT_BW_BCT_S 0
+#define GLPCI_BYTCTL_P_PCI_COUNT_BW_BCT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_CAPCTRL 0x0009DE88 /* Reset Source: PCIR */
+#define GLPCI_CAPCTRL_VPD_EN_S 0
+#define GLPCI_CAPCTRL_VPD_EN_M BIT(0)
+#define GLPCI_CAPSUP 0x0009DE8C /* Reset Source: PCIR */
+#define GLPCI_CAPSUP_PCIE_VER_S 0
+#define GLPCI_CAPSUP_PCIE_VER_M BIT(0)
+#define GLPCI_CAPSUP_RESERVED_2_S 1
+#define GLPCI_CAPSUP_RESERVED_2_M BIT(1)
+#define GLPCI_CAPSUP_LTR_EN_S 2
+#define GLPCI_CAPSUP_LTR_EN_M BIT(2)
+#define GLPCI_CAPSUP_TPH_EN_S 3
+#define GLPCI_CAPSUP_TPH_EN_M BIT(3)
+#define GLPCI_CAPSUP_ARI_EN_S 4
+#define GLPCI_CAPSUP_ARI_EN_M BIT(4)
+#define GLPCI_CAPSUP_IOV_EN_S 5
+#define GLPCI_CAPSUP_IOV_EN_M BIT(5)
+#define GLPCI_CAPSUP_ACS_EN_S 6
+#define GLPCI_CAPSUP_ACS_EN_M BIT(6)
+#define GLPCI_CAPSUP_SEC_EN_S 7
+#define GLPCI_CAPSUP_SEC_EN_M BIT(7)
+#define GLPCI_CAPSUP_PASID_EN_S 8
+#define GLPCI_CAPSUP_PASID_EN_M BIT(8)
+#define GLPCI_CAPSUP_DLFE_EN_S 9
+#define GLPCI_CAPSUP_DLFE_EN_M BIT(9)
+#define GLPCI_CAPSUP_GEN4_EXT_EN_S 10
+#define GLPCI_CAPSUP_GEN4_EXT_EN_M BIT(10)
+#define GLPCI_CAPSUP_GEN4_MARG_EN_S 11
+#define GLPCI_CAPSUP_GEN4_MARG_EN_M BIT(11)
+#define GLPCI_CAPSUP_ECRC_GEN_EN_S 16
+#define GLPCI_CAPSUP_ECRC_GEN_EN_M BIT(16)
+#define GLPCI_CAPSUP_ECRC_CHK_EN_S 17
+#define GLPCI_CAPSUP_ECRC_CHK_EN_M BIT(17)
+#define GLPCI_CAPSUP_IDO_EN_S 18
+#define GLPCI_CAPSUP_IDO_EN_M BIT(18)
+#define GLPCI_CAPSUP_MSI_MASK_S 19
+#define GLPCI_CAPSUP_MSI_MASK_M BIT(19)
+#define GLPCI_CAPSUP_CSR_CONF_EN_S 20
+#define GLPCI_CAPSUP_CSR_CONF_EN_M BIT(20)
+#define GLPCI_CAPSUP_WAKUP_EN_S 21
+#define GLPCI_CAPSUP_WAKUP_EN_M BIT(21)
+#define GLPCI_CAPSUP_LOAD_SUBSYS_ID_S 30
+#define GLPCI_CAPSUP_LOAD_SUBSYS_ID_M BIT(30)
+#define GLPCI_CAPSUP_LOAD_DEV_ID_S 31
+#define GLPCI_CAPSUP_LOAD_DEV_ID_M BIT(31)
+#define GLPCI_CNF 0x0009DEA0 /* Reset Source: POR */
+#define GLPCI_CNF_FLEX10_S 1
+#define GLPCI_CNF_FLEX10_M BIT(1)
+#define GLPCI_CNF_WAKE_PIN_EN_S 2
+#define GLPCI_CNF_WAKE_PIN_EN_M BIT(2)
+#define GLPCI_CNF_MSIX_ECC_BLOCK_DISABLE_S 3
+#define GLPCI_CNF_MSIX_ECC_BLOCK_DISABLE_M BIT(3)
+#define GLPCI_CNF2 0x000BE004 /* Reset Source: PCIR */
+#define GLPCI_CNF2_RO_DIS_S 0
+#define GLPCI_CNF2_RO_DIS_M BIT(0)
+#define GLPCI_CNF2_CACHELINE_SIZE_S 1
+#define GLPCI_CNF2_CACHELINE_SIZE_M BIT(1)
+#define GLPCI_DREVID 0x0009E9AC /* Reset Source: PCIR */
+#define GLPCI_DREVID_DEFAULT_REVID_S 0
+#define GLPCI_DREVID_DEFAULT_REVID_M MAKEMASK(0xFF, 0)
+#define GLPCI_GSCL_1_NP_C 0x000BFDA4 /* Reset Source: PCIR */
+#define GLPCI_GSCL_1_NP_C_RT_MODE_S 8
+#define GLPCI_GSCL_1_NP_C_RT_MODE_M BIT(8)
+#define GLPCI_GSCL_1_NP_C_RT_EVENT_S 9
+#define GLPCI_GSCL_1_NP_C_RT_EVENT_M MAKEMASK(0x1F, 9)
+#define GLPCI_GSCL_1_NP_C_PCI_COUNT_BW_EN_S 14
+#define GLPCI_GSCL_1_NP_C_PCI_COUNT_BW_EN_M BIT(14)
+#define GLPCI_GSCL_1_NP_C_PCI_COUNT_BW_EV_S 15
+#define GLPCI_GSCL_1_NP_C_PCI_COUNT_BW_EV_M MAKEMASK(0x1F, 15)
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_RESET_S 29
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_RESET_M BIT(29)
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_STOP_S 30
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_STOP_M BIT(30)
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_START_S 31
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_START_M BIT(31)
+#define GLPCI_GSCL_1_P 0x0009E9B4 /* Reset Source: PCIR */
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_0_S 0
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_0_M BIT(0)
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_1_S 1
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_1_M BIT(1)
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_2_S 2
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_2_M BIT(2)
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_3_S 3
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_3_M BIT(3)
+#define GLPCI_GSCL_1_P_LBC_ENABLE_0_S 4
+#define GLPCI_GSCL_1_P_LBC_ENABLE_0_M BIT(4)
+#define GLPCI_GSCL_1_P_LBC_ENABLE_1_S 5
+#define GLPCI_GSCL_1_P_LBC_ENABLE_1_M BIT(5)
+#define GLPCI_GSCL_1_P_LBC_ENABLE_2_S 6
+#define GLPCI_GSCL_1_P_LBC_ENABLE_2_M BIT(6)
+#define GLPCI_GSCL_1_P_LBC_ENABLE_3_S 7
+#define GLPCI_GSCL_1_P_LBC_ENABLE_3_M BIT(7)
+#define GLPCI_GSCL_1_P_PCI_COUNT_BW_EN_S 14
+#define GLPCI_GSCL_1_P_PCI_COUNT_BW_EN_M BIT(14)
+#define GLPCI_GSCL_1_P_GIO_64_BIT_EN_S 28
+#define GLPCI_GSCL_1_P_GIO_64_BIT_EN_M BIT(28)
+#define GLPCI_GSCL_1_P_GIO_COUNT_RESET_S 29
+#define GLPCI_GSCL_1_P_GIO_COUNT_RESET_M BIT(29)
+#define GLPCI_GSCL_1_P_GIO_COUNT_STOP_S 30
+#define GLPCI_GSCL_1_P_GIO_COUNT_STOP_M BIT(30)
+#define GLPCI_GSCL_1_P_GIO_COUNT_START_S 31
+#define GLPCI_GSCL_1_P_GIO_COUNT_START_M BIT(31)
+#define GLPCI_GSCL_2 0x0009E998 /* Reset Source: PCIR */
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_0_S 0
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_0_M MAKEMASK(0xFF, 0)
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_1_S 8
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_1_M MAKEMASK(0xFF, 8)
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_2_S 16
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_2_M MAKEMASK(0xFF, 16)
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_3_S 24
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_3_M MAKEMASK(0xFF, 24)
+#define GLPCI_GSCL_5_8(_i) (0x0009E954 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: PCIR */
+#define GLPCI_GSCL_5_8_MAX_INDEX 3
+#define GLPCI_GSCL_5_8_LBC_THRESHOLD_N_S 0
+#define GLPCI_GSCL_5_8_LBC_THRESHOLD_N_M MAKEMASK(0xFFFF, 0)
+#define GLPCI_GSCL_5_8_LBC_TIMER_N_S 16
+#define GLPCI_GSCL_5_8_LBC_TIMER_N_M MAKEMASK(0xFFFF, 16)
+#define GLPCI_GSCN_0_3(_i) (0x0009E99C + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: PCIR */
+#define GLPCI_GSCN_0_3_MAX_INDEX 3
+#define GLPCI_GSCN_0_3_EVENT_COUNTER_S 0
+#define GLPCI_GSCN_0_3_EVENT_COUNTER_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_LATCT_NP_C 0x000BFDA0 /* Reset Source: PCIR */
+#define GLPCI_LATCT_NP_C_PCI_LATENCY_COUNT_S 0
+#define GLPCI_LATCT_NP_C_PCI_LATENCY_COUNT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_LBARCTRL 0x0009DE74 /* Reset Source: POR */
+#define GLPCI_LBARCTRL_PREFBAR_S 0
+#define GLPCI_LBARCTRL_PREFBAR_M BIT(0)
+#define GLPCI_LBARCTRL_BAR32_S 1
+#define GLPCI_LBARCTRL_BAR32_M BIT(1)
+#define GLPCI_LBARCTRL_PAGES_SPACE_EN_PF_S 2
+#define GLPCI_LBARCTRL_PAGES_SPACE_EN_PF_M BIT(2)
+#define GLPCI_LBARCTRL_FLASH_EXPOSE_S 3
+#define GLPCI_LBARCTRL_FLASH_EXPOSE_M BIT(3)
+#define GLPCI_LBARCTRL_PE_DB_SIZE_S 4
+#define GLPCI_LBARCTRL_PE_DB_SIZE_M MAKEMASK(0x3, 4)
+#define GLPCI_LBARCTRL_PAGES_SPACE_EN_VF_S 9
+#define GLPCI_LBARCTRL_PAGES_SPACE_EN_VF_M BIT(9)
+#define GLPCI_LBARCTRL_EXROM_SIZE_S 11
+#define GLPCI_LBARCTRL_EXROM_SIZE_M MAKEMASK(0x7, 11)
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_S 14
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_M MAKEMASK(0x3, 14)
+#define GLPCI_LINKCAP 0x0009DE90 /* Reset Source: PCIR */
+#define GLPCI_LINKCAP_LINK_SPEEDS_VECTOR_S 0
+#define GLPCI_LINKCAP_LINK_SPEEDS_VECTOR_M MAKEMASK(0x3F, 0)
+#define GLPCI_LINKCAP_MAX_LINK_WIDTH_S 9
+#define GLPCI_LINKCAP_MAX_LINK_WIDTH_M MAKEMASK(0xF, 9)
+#define GLPCI_NPQ_CFG 0x000BFD80 /* Reset Source: PCIR */
+#define GLPCI_NPQ_CFG_EXTEND_TO_S 0
+#define GLPCI_NPQ_CFG_EXTEND_TO_M BIT(0)
+#define GLPCI_NPQ_CFG_SMALL_TO_S 1
+#define GLPCI_NPQ_CFG_SMALL_TO_M BIT(1)
+#define GLPCI_NPQ_CFG_WEIGHT_AVG_S 2
+#define GLPCI_NPQ_CFG_WEIGHT_AVG_M MAKEMASK(0xF, 2)
+#define GLPCI_NPQ_CFG_NPQ_SPARE_S 6
+#define GLPCI_NPQ_CFG_NPQ_SPARE_M MAKEMASK(0x3FF, 6)
+#define GLPCI_NPQ_CFG_NPQ_ERR_STAT_S 16
+#define GLPCI_NPQ_CFG_NPQ_ERR_STAT_M MAKEMASK(0xF, 16)
+#define GLPCI_PKTCT_NP_C 0x000BFD9C /* Reset Source: PCIR */
+#define GLPCI_PKTCT_NP_C_PCI_COUNT_BW_PCT_S 0
+#define GLPCI_PKTCT_NP_C_PCI_COUNT_BW_PCT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_PKTCT_P 0x0009E9B0 /* Reset Source: PCIR */
+#define GLPCI_PKTCT_P_PCI_COUNT_BW_PCT_S 0
+#define GLPCI_PKTCT_P_PCI_COUNT_BW_PCT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_PMSUP 0x0009DE94 /* Reset Source: PCIR */
+#define GLPCI_PMSUP_RESERVED_0_S 0
+#define GLPCI_PMSUP_RESERVED_0_M MAKEMASK(0x3, 0)
+#define GLPCI_PMSUP_RESERVED_1_S 2
+#define GLPCI_PMSUP_RESERVED_1_M MAKEMASK(0x7, 2)
+#define GLPCI_PMSUP_RESERVED_2_S 5
+#define GLPCI_PMSUP_RESERVED_2_M MAKEMASK(0x7, 5)
+#define GLPCI_PMSUP_L0S_ACC_LAT_S 8
+#define GLPCI_PMSUP_L0S_ACC_LAT_M MAKEMASK(0x7, 8)
+#define GLPCI_PMSUP_L1_ACC_LAT_S 11
+#define GLPCI_PMSUP_L1_ACC_LAT_M MAKEMASK(0x7, 11)
+#define GLPCI_PMSUP_RESERVED_3_S 14
+#define GLPCI_PMSUP_RESERVED_3_M BIT(14)
+#define GLPCI_PMSUP_OBFF_SUP_S 15
+#define GLPCI_PMSUP_OBFF_SUP_M MAKEMASK(0x3, 15)
+#define GLPCI_PUSH_PE_IF_TO_STATUS 0x0009DF44 /* Reset Source: PCIR */
+#define GLPCI_PUSH_PE_IF_TO_STATUS_GLPCI_PUSH_PE_IF_TO_STATUS_S 0
+#define GLPCI_PUSH_PE_IF_TO_STATUS_GLPCI_PUSH_PE_IF_TO_STATUS_M BIT(0)
+#define GLPCI_PWRDATA 0x0009DE7C /* Reset Source: PCIR */
+#define GLPCI_PWRDATA_D0_POWER_S 0
+#define GLPCI_PWRDATA_D0_POWER_M MAKEMASK(0xFF, 0)
+#define GLPCI_PWRDATA_COMM_POWER_S 8
+#define GLPCI_PWRDATA_COMM_POWER_M MAKEMASK(0xFF, 8)
+#define GLPCI_PWRDATA_D3_POWER_S 16
+#define GLPCI_PWRDATA_D3_POWER_M MAKEMASK(0xFF, 16)
+#define GLPCI_PWRDATA_DATA_SCALE_S 24
+#define GLPCI_PWRDATA_DATA_SCALE_M MAKEMASK(0x3, 24)
+#define GLPCI_REVID 0x0009DE98 /* Reset Source: PCIR */
+#define GLPCI_REVID_NVM_REVID_S 0
+#define GLPCI_REVID_NVM_REVID_M MAKEMASK(0xFF, 0)
+#define GLPCI_SERH 0x0009DE84 /* Reset Source: PCIR */
+#define GLPCI_SERH_SER_NUM_H_S 0
+#define GLPCI_SERH_SER_NUM_H_M MAKEMASK(0xFFFF, 0)
+#define GLPCI_SERL 0x0009DE80 /* Reset Source: PCIR */
+#define GLPCI_SERL_SER_NUM_L_S 0
+#define GLPCI_SERL_SER_NUM_L_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_SUBVENID 0x0009DEE8 /* Reset Source: PCIR */
+#define GLPCI_SUBVENID_SUB_VEN_ID_S 0
+#define GLPCI_SUBVENID_SUB_VEN_ID_M MAKEMASK(0xFFFF, 0)
+#define GLPCI_UPADD 0x000BE0D4 /* Reset Source: PCIR */
+#define GLPCI_UPADD_ADDRESS_S 1
+#define GLPCI_UPADD_ADDRESS_M MAKEMASK(0x7FFFFFFF, 1)
+#define GLPCI_VENDORID 0x0009DEC8 /* Reset Source: PCIR */
+#define GLPCI_VENDORID_VENDORID_S 0
+#define GLPCI_VENDORID_VENDORID_M MAKEMASK(0xFFFF, 0)
+#define GLPCI_VFSUP 0x0009DE9C /* Reset Source: PCIR */
+#define GLPCI_VFSUP_VF_PREFETCH_S 0
+#define GLPCI_VFSUP_VF_PREFETCH_M BIT(0)
+#define GLPCI_VFSUP_VR_BAR_TYPE_S 1
+#define GLPCI_VFSUP_VR_BAR_TYPE_M BIT(1)
+#define GLPCI_WATMK_CLNT_PIPEMON 0x000BFD90 /* Reset Source: PCIR */
+#define GLPCI_WATMK_CLNT_PIPEMON_DATA_LINES_S 0
+#define GLPCI_WATMK_CLNT_PIPEMON_DATA_LINES_M MAKEMASK(0xFFFF, 0)
+#define PF_FUNC_RID 0x0009E880 /* Reset Source: PCIR */
+#define PF_FUNC_RID_FUNCTION_NUMBER_S 0
+#define PF_FUNC_RID_FUNCTION_NUMBER_M MAKEMASK(0x7, 0)
+#define PF_FUNC_RID_DEVICE_NUMBER_S 3
+#define PF_FUNC_RID_DEVICE_NUMBER_M MAKEMASK(0x1F, 3)
+#define PF_FUNC_RID_BUS_NUMBER_S 8
+#define PF_FUNC_RID_BUS_NUMBER_M MAKEMASK(0xFF, 8)
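[PF_FUNC_RID packs the PF's PCI requester ID; per the fields above, decoding
it is a straight mask-and-shift (a sketch):

  u32 rid = rd32(hw, PF_FUNC_RID);
  u8 bus  = (rid & PF_FUNC_RID_BUS_NUMBER_M) >> PF_FUNC_RID_BUS_NUMBER_S;
  u8 dev  = (rid & PF_FUNC_RID_DEVICE_NUMBER_M) >> PF_FUNC_RID_DEVICE_NUMBER_S;
  u8 fn   = (rid & PF_FUNC_RID_FUNCTION_NUMBER_M) >> PF_FUNC_RID_FUNCTION_NUMBER_S;
]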
+#define PF_PCI_CIAA 0x0009E580 /* Reset Source: FLR */
+#define PF_PCI_CIAA_ADDRESS_S 0
+#define PF_PCI_CIAA_ADDRESS_M MAKEMASK(0xFFF, 0)
+#define PF_PCI_CIAA_VF_NUM_S 12
+#define PF_PCI_CIAA_VF_NUM_M MAKEMASK(0xFF, 12)
+#define PF_PCI_CIAD 0x0009E500 /* Reset Source: FLR */
+#define PF_PCI_CIAD_DATA_S 0
+#define PF_PCI_CIAD_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define PFPCI_CLASS 0x0009DB00 /* Reset Source: PCIR */
+#define PFPCI_CLASS_STORAGE_CLASS_S 0
+#define PFPCI_CLASS_STORAGE_CLASS_M BIT(0)
+#define PFPCI_CLASS_PF_IS_LAN_S 2
+#define PFPCI_CLASS_PF_IS_LAN_M BIT(2)
+#define PFPCI_CNF 0x0009DF00 /* Reset Source: PCIR */
+#define PFPCI_CNF_MSI_EN_S 2
+#define PFPCI_CNF_MSI_EN_M BIT(2)
+#define PFPCI_CNF_EXROM_DIS_S 3
+#define PFPCI_CNF_EXROM_DIS_M BIT(3)
+#define PFPCI_CNF_IO_BAR_S 4
+#define PFPCI_CNF_IO_BAR_M BIT(4)
+#define PFPCI_CNF_INT_PIN_S 5
+#define PFPCI_CNF_INT_PIN_M MAKEMASK(0x3, 5)
+#define PFPCI_DEVID 0x0009DE00 /* Reset Source: PCIR */
+#define PFPCI_DEVID_PF_DEV_ID_S 0
+#define PFPCI_DEVID_PF_DEV_ID_M MAKEMASK(0xFFFF, 0)
+#define PFPCI_DEVID_VF_DEV_ID_S 16
+#define PFPCI_DEVID_VF_DEV_ID_M MAKEMASK(0xFFFF, 16)
+#define PFPCI_FACTPS 0x0009E900 /* Reset Source: FLR */
+#define PFPCI_FACTPS_FUNC_POWER_STATE_S 0
+#define PFPCI_FACTPS_FUNC_POWER_STATE_M MAKEMASK(0x3, 0)
+#define PFPCI_FACTPS_FUNC_AUX_EN_S 3
+#define PFPCI_FACTPS_FUNC_AUX_EN_M BIT(3)
+#define PFPCI_FUNC 0x0009D980 /* Reset Source: POR */
+#define PFPCI_FUNC_FUNC_DIS_S 0
+#define PFPCI_FUNC_FUNC_DIS_M BIT(0)
+#define PFPCI_FUNC_ALLOW_FUNC_DIS_S 1
+#define PFPCI_FUNC_ALLOW_FUNC_DIS_M BIT(1)
+#define PFPCI_FUNC_DIS_FUNC_ON_PORT_DIS_S 2
+#define PFPCI_FUNC_DIS_FUNC_ON_PORT_DIS_M BIT(2)
+#define PFPCI_PF_FLUSH_DONE 0x0009E400 /* Reset Source: PCIR */
+#define PFPCI_PF_FLUSH_DONE_FLUSH_DONE_S 0
+#define PFPCI_PF_FLUSH_DONE_FLUSH_DONE_M BIT(0)
+#define PFPCI_PM 0x0009DA80 /* Reset Source: POR */
+#define PFPCI_PM_PME_EN_S 0
+#define PFPCI_PM_PME_EN_M BIT(0)
+#define PFPCI_STATUS1 0x0009DA00 /* Reset Source: POR */
+#define PFPCI_STATUS1_FUNC_VALID_S 0
+#define PFPCI_STATUS1_FUNC_VALID_M BIT(0)
+#define PFPCI_SUBSYSID 0x0009D880 /* Reset Source: PCIR */
+#define PFPCI_SUBSYSID_PF_SUBSYS_ID_S 0
+#define PFPCI_SUBSYSID_PF_SUBSYS_ID_M MAKEMASK(0xFFFF, 0)
+#define PFPCI_SUBSYSID_VF_SUBSYS_ID_S 16
+#define PFPCI_SUBSYSID_VF_SUBSYS_ID_M MAKEMASK(0xFFFF, 16)
+#define PFPCI_VF_FLUSH_DONE(_VF) (0x0009E000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PCIR */
+#define PFPCI_VF_FLUSH_DONE_MAX_INDEX 255
+#define PFPCI_VF_FLUSH_DONE_FLUSH_DONE_S 0
+#define PFPCI_VF_FLUSH_DONE_FLUSH_DONE_M BIT(0)
+#define PFPCI_VM_FLUSH_DONE 0x0009E480 /* Reset Source: PCIR */
+#define PFPCI_VM_FLUSH_DONE_FLUSH_DONE_S 0
+#define PFPCI_VM_FLUSH_DONE_FLUSH_DONE_M BIT(0)
+#define PFPCI_VMINDEX 0x0009E600 /* Reset Source: PCIR */
+#define PFPCI_VMINDEX_VMINDEX_S 0
+#define PFPCI_VMINDEX_VMINDEX_M MAKEMASK(0x3FF, 0)
+#define PFPCI_VMPEND 0x0009E800 /* Reset Source: PCIR */
+#define PFPCI_VMPEND_PENDING_S 0
+#define PFPCI_VMPEND_PENDING_M BIT(0)
+#define PQ_FIFO_STATUS 0x0009DF40 /* Reset Source: PCIR */
+#define PQ_FIFO_STATUS_PQ_FIFO_COUNT_S 0
+#define PQ_FIFO_STATUS_PQ_FIFO_COUNT_M MAKEMASK(0x7FFFFFFF, 0)
+#define PQ_FIFO_STATUS_PQ_FIFO_EMPTY_S 31
+#define PQ_FIFO_STATUS_PQ_FIFO_EMPTY_M BIT(31)
+#define GLPE_CPUSTATUS0 0x0050BA5C /* Reset Source: CORER */
+#define GLPE_CPUSTATUS0_PECPUSTATUS0_S 0
+#define GLPE_CPUSTATUS0_PECPUSTATUS0_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_CPUSTATUS1 0x0050BA60 /* Reset Source: CORER */
+#define GLPE_CPUSTATUS1_PECPUSTATUS1_S 0
+#define GLPE_CPUSTATUS1_PECPUSTATUS1_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_CPUSTATUS2 0x0050BA64 /* Reset Source: CORER */
+#define GLPE_CPUSTATUS2_PECPUSTATUS2_S 0
+#define GLPE_CPUSTATUS2_PECPUSTATUS2_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_MDQ_BASE(_i) (0x00536000 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLPE_MDQ_BASE_MAX_INDEX 511
+#define GLPE_MDQ_BASE_MDOC_INDEX_S 0
+#define GLPE_MDQ_BASE_MDOC_INDEX_M MAKEMASK(0xFFFFFFF, 0)
+#define GLPE_MDQ_PTR(_i) (0x00537000 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLPE_MDQ_PTR_MAX_INDEX 511
+#define GLPE_MDQ_PTR_MDQ_HEAD_S 0
+#define GLPE_MDQ_PTR_MDQ_HEAD_M MAKEMASK(0x3FFF, 0)
+#define GLPE_MDQ_PTR_MDQ_TAIL_S 16
+#define GLPE_MDQ_PTR_MDQ_TAIL_M MAKEMASK(0x3FFF, 16)
+#define GLPE_MDQ_SIZE(_i) (0x00536800 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLPE_MDQ_SIZE_MAX_INDEX 511
+#define GLPE_MDQ_SIZE_MDQ_SIZE_S 0
+#define GLPE_MDQ_SIZE_MDQ_SIZE_M MAKEMASK(0x3FFF, 0)
+#define GLPE_PEPM_CTRL 0x0050C000 /* Reset Source: PERST */
+#define GLPE_PEPM_CTRL_PEPM_ENABLE_S 0
+#define GLPE_PEPM_CTRL_PEPM_ENABLE_M BIT(0)
+#define GLPE_PEPM_CTRL_PEPM_HALT_S 8
+#define GLPE_PEPM_CTRL_PEPM_HALT_M BIT(8)
+#define GLPE_PEPM_CTRL_PEPM_PUSH_MARGIN_S 16
+#define GLPE_PEPM_CTRL_PEPM_PUSH_MARGIN_M MAKEMASK(0xFF, 16)
+#define GLPE_PEPM_DEALLOC 0x0050C004 /* Reset Source: PERST */
+#define GLPE_PEPM_DEALLOC_MDQ_CREDITS_S 0
+#define GLPE_PEPM_DEALLOC_MDQ_CREDITS_M MAKEMASK(0x3FFF, 0)
+#define GLPE_PEPM_DEALLOC_PSQ_CREDITS_S 14
+#define GLPE_PEPM_DEALLOC_PSQ_CREDITS_M MAKEMASK(0x1F, 14)
+#define GLPE_PEPM_DEALLOC_PQID_S 19
+#define GLPE_PEPM_DEALLOC_PQID_M MAKEMASK(0x1FF, 19)
+#define GLPE_PEPM_DEALLOC_PORT_S 28
+#define GLPE_PEPM_DEALLOC_PORT_M MAKEMASK(0x7, 28)
+#define GLPE_PEPM_DEALLOC_DEALLOC_RDY_S 31
+#define GLPE_PEPM_DEALLOC_DEALLOC_RDY_M BIT(31)
+#define GLPE_PEPM_PSQ_COUNT 0x0050C020 /* Reset Source: PERST */
+#define GLPE_PEPM_PSQ_COUNT_PEPM_PSQ_COUNT_S 0
+#define GLPE_PEPM_PSQ_COUNT_PEPM_PSQ_COUNT_M MAKEMASK(0xFFFF, 0)
+#define GLPE_PEPM_THRESH(_i) (0x0050C840 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: PERST */
+#define GLPE_PEPM_THRESH_MAX_INDEX 511
+#define GLPE_PEPM_THRESH_PEPM_PSQ_THRESH_S 0
+#define GLPE_PEPM_THRESH_PEPM_PSQ_THRESH_M MAKEMASK(0x1F, 0)
+#define GLPE_PEPM_THRESH_PEPM_MDQ_THRESH_S 16
+#define GLPE_PEPM_THRESH_PEPM_MDQ_THRESH_M MAKEMASK(0x3FFF, 16)
+#define GLPE_PFAEQEDROPCNT(_i) (0x00503240 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFAEQEDROPCNT_MAX_INDEX 7
+#define GLPE_PFAEQEDROPCNT_AEQEDROPCNT_S 0
+#define GLPE_PFAEQEDROPCNT_AEQEDROPCNT_M MAKEMASK(0xFFFF, 0)
+#define GLPE_PFCEQEDROPCNT(_i) (0x00503220 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFCEQEDROPCNT_MAX_INDEX 7
+#define GLPE_PFCEQEDROPCNT_CEQEDROPCNT_S 0
+#define GLPE_PFCEQEDROPCNT_CEQEDROPCNT_M MAKEMASK(0xFFFF, 0)
+#define GLPE_PFCQEDROPCNT(_i) (0x00503200 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFCQEDROPCNT_MAX_INDEX 7
+#define GLPE_PFCQEDROPCNT_CQEDROPCNT_S 0
+#define GLPE_PFCQEDROPCNT_CQEDROPCNT_M MAKEMASK(0xFFFF, 0)
+#define GLPE_PFFLMOOISCALLOCERR(_i) (0x0050B960 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFFLMOOISCALLOCERR_MAX_INDEX 7
+#define GLPE_PFFLMOOISCALLOCERR_ERROR_COUNT_S 0
+#define GLPE_PFFLMOOISCALLOCERR_ERROR_COUNT_M MAKEMASK(0xFFFF, 0)
+#define GLPE_PFFLMQ1ALLOCERR(_i) (0x0050B920 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFFLMQ1ALLOCERR_MAX_INDEX 7
+#define GLPE_PFFLMQ1ALLOCERR_ERROR_COUNT_S 0
+#define GLPE_PFFLMQ1ALLOCERR_ERROR_COUNT_M MAKEMASK(0xFFFF, 0)
+#define GLPE_PFFLMRRFALLOCERR(_i) (0x0050B940 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFFLMRRFALLOCERR_MAX_INDEX 7
+#define GLPE_PFFLMRRFALLOCERR_ERROR_COUNT_S 0
+#define GLPE_PFFLMRRFALLOCERR_ERROR_COUNT_M MAKEMASK(0xFFFF, 0)
+#define GLPE_PFFLMXMITALLOCERR(_i) (0x0050B900 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFFLMXMITALLOCERR_MAX_INDEX 7
+#define GLPE_PFFLMXMITALLOCERR_ERROR_COUNT_S 0
+#define GLPE_PFFLMXMITALLOCERR_ERROR_COUNT_M MAKEMASK(0xFFFF, 0)
+#define GLPE_PFTCPNOW50USCNT(_i) (0x0050B8C0 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFTCPNOW50USCNT_MAX_INDEX 7
+#define GLPE_PFTCPNOW50USCNT_CNT_S 0
+#define GLPE_PFTCPNOW50USCNT_CNT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_PUSH_PEPM 0x0053241C /* Reset Source: CORER */
+#define GLPE_PUSH_PEPM_MDQ_CREDITS_S 0
+#define GLPE_PUSH_PEPM_MDQ_CREDITS_M MAKEMASK(0xFF, 0)
+#define GLPE_VFAEQEDROPCNT(_i) (0x00503100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFAEQEDROPCNT_MAX_INDEX 31
+#define GLPE_VFAEQEDROPCNT_AEQEDROPCNT_S 0
+#define GLPE_VFAEQEDROPCNT_AEQEDROPCNT_M MAKEMASK(0xFFFF, 0)
+#define GLPE_VFCEQEDROPCNT(_i) (0x00503080 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFCEQEDROPCNT_MAX_INDEX 31
+#define GLPE_VFCEQEDROPCNT_CEQEDROPCNT_S 0
+#define GLPE_VFCEQEDROPCNT_CEQEDROPCNT_M MAKEMASK(0xFFFF, 0)
+#define GLPE_VFCQEDROPCNT(_i) (0x00503000 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFCQEDROPCNT_MAX_INDEX 31
+#define GLPE_VFCQEDROPCNT_CQEDROPCNT_S 0
+#define GLPE_VFCQEDROPCNT_CQEDROPCNT_M MAKEMASK(0xFFFF, 0)
+#define GLPE_VFFLMOOISCALLOCERR(_i) (0x0050B580 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFFLMOOISCALLOCERR_MAX_INDEX 31
+#define GLPE_VFFLMOOISCALLOCERR_ERROR_COUNT_S 0
+#define GLPE_VFFLMOOISCALLOCERR_ERROR_COUNT_M MAKEMASK(0xFFFF, 0)
+#define GLPE_VFFLMQ1ALLOCERR(_i) (0x0050B480 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFFLMQ1ALLOCERR_MAX_INDEX 31
+#define GLPE_VFFLMQ1ALLOCERR_ERROR_COUNT_S 0
+#define GLPE_VFFLMQ1ALLOCERR_ERROR_COUNT_M MAKEMASK(0xFFFF, 0)
+#define GLPE_VFFLMRRFALLOCERR(_i) (0x0050B500 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFFLMRRFALLOCERR_MAX_INDEX 31
+#define GLPE_VFFLMRRFALLOCERR_ERROR_COUNT_S 0
+#define GLPE_VFFLMRRFALLOCERR_ERROR_COUNT_M MAKEMASK(0xFFFF, 0)
+#define GLPE_VFFLMXMITALLOCERR(_i) (0x0050B400 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFFLMXMITALLOCERR_MAX_INDEX 31
+#define GLPE_VFFLMXMITALLOCERR_ERROR_COUNT_S 0
+#define GLPE_VFFLMXMITALLOCERR_ERROR_COUNT_M MAKEMASK(0xFFFF, 0)
+#define GLPE_VFTCPNOW50USCNT(_i) (0x0050B300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: PE_CORER */
+#define GLPE_VFTCPNOW50USCNT_MAX_INDEX 31
+#define GLPE_VFTCPNOW50USCNT_CNT_S 0
+#define GLPE_VFTCPNOW50USCNT_CNT_M MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_AEQALLOC 0x00502D00 /* Reset Source: PFR */
+#define PFPE_AEQALLOC_AECOUNT_S 0
+#define PFPE_AEQALLOC_AECOUNT_M MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_CCQPHIGH 0x0050A100 /* Reset Source: PFR */
+#define PFPE_CCQPHIGH_PECCQPHIGH_S 0
+#define PFPE_CCQPHIGH_PECCQPHIGH_M MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_CCQPLOW 0x0050A080 /* Reset Source: PFR */
+#define PFPE_CCQPLOW_PECCQPLOW_S 0
+#define PFPE_CCQPLOW_PECCQPLOW_M MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_CCQPSTATUS 0x0050A000 /* Reset Source: PFR */
+#define PFPE_CCQPSTATUS_CCQP_DONE_S 0
+#define PFPE_CCQPSTATUS_CCQP_DONE_M BIT(0)
+#define PFPE_CCQPSTATUS_HMC_PROFILE_S 4
+#define PFPE_CCQPSTATUS_HMC_PROFILE_M MAKEMASK(0x7, 4)
+#define PFPE_CCQPSTATUS_RDMA_EN_VFS_S 16
+#define PFPE_CCQPSTATUS_RDMA_EN_VFS_M MAKEMASK(0x3F, 16)
+#define PFPE_CCQPSTATUS_CCQP_ERR_S 31
+#define PFPE_CCQPSTATUS_CCQP_ERR_M BIT(31)
+#define PFPE_CQACK 0x00502C80 /* Reset Source: PFR */
+#define PFPE_CQACK_PECQID_S 0
+#define PFPE_CQACK_PECQID_M MAKEMASK(0x7FFFF, 0)
+#define PFPE_CQARM 0x00502C00 /* Reset Source: PFR */
+#define PFPE_CQARM_PECQID_S 0
+#define PFPE_CQARM_PECQID_M MAKEMASK(0x7FFFF, 0)
+#define PFPE_CQPDB 0x00500800 /* Reset Source: PFR */
+#define PFPE_CQPDB_WQHEAD_S 0
+#define PFPE_CQPDB_WQHEAD_M MAKEMASK(0x7FF, 0)
+#define PFPE_CQPERRCODES 0x0050A200 /* Reset Source: PFR */
+#define PFPE_CQPERRCODES_CQP_MINOR_CODE_S 0
+#define PFPE_CQPERRCODES_CQP_MINOR_CODE_M MAKEMASK(0xFFFF, 0)
+#define PFPE_CQPERRCODES_CQP_MAJOR_CODE_S 16
+#define PFPE_CQPERRCODES_CQP_MAJOR_CODE_M MAKEMASK(0xFFFF, 16)
+#define PFPE_CQPTAIL 0x00500880 /* Reset Source: PFR */
+#define PFPE_CQPTAIL_WQTAIL_S 0
+#define PFPE_CQPTAIL_WQTAIL_M MAKEMASK(0x7FF, 0)
+#define PFPE_CQPTAIL_CQP_OP_ERR_S 31
+#define PFPE_CQPTAIL_CQP_OP_ERR_M BIT(31)
+#define PFPE_IPCONFIG0 0x0050A180 /* Reset Source: PFR */
+#define PFPE_IPCONFIG0_PEIPID_S 0
+#define PFPE_IPCONFIG0_PEIPID_M MAKEMASK(0xFFFF, 0)
+#define PFPE_IPCONFIG0_USEENTIREIDRANGE_S 16
+#define PFPE_IPCONFIG0_USEENTIREIDRANGE_M BIT(16)
+#define PFPE_IPCONFIG0_UDP_SRC_PORT_MASK_EN_S 17
+#define PFPE_IPCONFIG0_UDP_SRC_PORT_MASK_EN_M BIT(17)
+#define PFPE_MRTEIDXMASK 0x0050A300 /* Reset Source: PFR */
+#define PFPE_MRTEIDXMASK_MRTEIDXMASKBITS_S 0
+#define PFPE_MRTEIDXMASK_MRTEIDXMASKBITS_M MAKEMASK(0x1F, 0)
+#define PFPE_RCVUNEXPECTEDERROR 0x0050A380 /* Reset Source: PFR */
+#define PFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_S 0
+#define PFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_M MAKEMASK(0xFFFFFF, 0)
+#define PFPE_TCPNOWTIMER 0x0050A280 /* Reset Source: PFR */
+#define PFPE_TCPNOWTIMER_TCP_NOW_S 0
+#define PFPE_TCPNOWTIMER_TCP_NOW_M MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_WQEALLOC 0x00504400 /* Reset Source: PFR */
+#define PFPE_WQEALLOC_PEQPID_S 0
+#define PFPE_WQEALLOC_PEQPID_M MAKEMASK(0x3FFFF, 0)
+#define PFPE_WQEALLOC_WQE_DESC_INDEX_S 20
+#define PFPE_WQEALLOC_WQE_DESC_INDEX_M MAKEMASK(0xFFF, 20)
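[PFPE_WQEALLOC is the protocol-engine (RDMA) WQE doorbell: a QP ID in bits
0-17 and a WQE descriptor index in bits 20-31. Composing a doorbell value is
the inverse of the extract pattern; this is illustrative only, assuming
hypothetical qpid/idx locals — the PE doorbells are driven by an RDMA driver
rather than this PMD:

  u32 db = ((qpid << PFPE_WQEALLOC_PEQPID_S) & PFPE_WQEALLOC_PEQPID_M) |
           ((idx << PFPE_WQEALLOC_WQE_DESC_INDEX_S) &
            PFPE_WQEALLOC_WQE_DESC_INDEX_M);

  wr32(hw, PFPE_WQEALLOC, db);
]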
+#define PRT_PEPM_COUNT(_i) (0x0050C040 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: PERST */
+#define PRT_PEPM_COUNT_MAX_INDEX 511
+#define PRT_PEPM_COUNT_PEPM_PSQ_COUNT_S 0
+#define PRT_PEPM_COUNT_PEPM_PSQ_COUNT_M MAKEMASK(0x1F, 0)
+#define PRT_PEPM_COUNT_PEPM_MDQ_COUNT_S 16
+#define PRT_PEPM_COUNT_PEPM_MDQ_COUNT_M MAKEMASK(0x3FFF, 16)
+#define VFPE_AEQALLOC(_VF) (0x00502800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_AEQALLOC_MAX_INDEX 255
+#define VFPE_AEQALLOC_AECOUNT_S 0
+#define VFPE_AEQALLOC_AECOUNT_M MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPHIGH(_VF) (0x00508800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CCQPHIGH_MAX_INDEX 255
+#define VFPE_CCQPHIGH_PECCQPHIGH_S 0
+#define VFPE_CCQPHIGH_PECCQPHIGH_M MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPLOW(_VF) (0x00508400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CCQPLOW_MAX_INDEX 255
+#define VFPE_CCQPLOW_PECCQPLOW_S 0
+#define VFPE_CCQPLOW_PECCQPLOW_M MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPSTATUS(_VF) (0x00508000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CCQPSTATUS_MAX_INDEX 255
+#define VFPE_CCQPSTATUS_CCQP_DONE_S 0
+#define VFPE_CCQPSTATUS_CCQP_DONE_M BIT(0)
+#define VFPE_CCQPSTATUS_HMC_PROFILE_S 4
+#define VFPE_CCQPSTATUS_HMC_PROFILE_M MAKEMASK(0x7, 4)
+#define VFPE_CCQPSTATUS_RDMA_EN_VFS_S 16
+#define VFPE_CCQPSTATUS_RDMA_EN_VFS_M MAKEMASK(0x3F, 16)
+#define VFPE_CCQPSTATUS_CCQP_ERR_S 31
+#define VFPE_CCQPSTATUS_CCQP_ERR_M BIT(31)
+#define VFPE_CQACK(_VF) (0x00502400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQACK_MAX_INDEX 255
+#define VFPE_CQACK_PECQID_S 0
+#define VFPE_CQACK_PECQID_M MAKEMASK(0x7FFFF, 0)
+#define VFPE_CQARM(_VF) (0x00502000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQARM_MAX_INDEX 255
+#define VFPE_CQARM_PECQID_S 0
+#define VFPE_CQARM_PECQID_M MAKEMASK(0x7FFFF, 0)
+#define VFPE_CQPDB(_VF) (0x00500000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQPDB_MAX_INDEX 255
+#define VFPE_CQPDB_WQHEAD_S 0
+#define VFPE_CQPDB_WQHEAD_M MAKEMASK(0x7FF, 0)
+#define VFPE_CQPERRCODES(_VF) (0x00509000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQPERRCODES_MAX_INDEX 255
+#define VFPE_CQPERRCODES_CQP_MINOR_CODE_S 0
+#define VFPE_CQPERRCODES_CQP_MINOR_CODE_M MAKEMASK(0xFFFF, 0)
+#define VFPE_CQPERRCODES_CQP_MAJOR_CODE_S 16
+#define VFPE_CQPERRCODES_CQP_MAJOR_CODE_M MAKEMASK(0xFFFF, 16)
+#define VFPE_CQPTAIL(_VF) (0x00500400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQPTAIL_MAX_INDEX 255
+#define VFPE_CQPTAIL_WQTAIL_S 0
+#define VFPE_CQPTAIL_WQTAIL_M MAKEMASK(0x7FF, 0)
+#define VFPE_CQPTAIL_CQP_OP_ERR_S 31
+#define VFPE_CQPTAIL_CQP_OP_ERR_M BIT(31)
+#define VFPE_IPCONFIG0(_VF) (0x00508C00 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_IPCONFIG0_MAX_INDEX 255
+#define VFPE_IPCONFIG0_PEIPID_S 0
+#define VFPE_IPCONFIG0_PEIPID_M MAKEMASK(0xFFFF, 0)
+#define VFPE_IPCONFIG0_USEENTIREIDRANGE_S 16
+#define VFPE_IPCONFIG0_USEENTIREIDRANGE_M BIT(16)
+#define VFPE_IPCONFIG0_UDP_SRC_PORT_MASK_EN_S 17
+#define VFPE_IPCONFIG0_UDP_SRC_PORT_MASK_EN_M BIT(17)
+#define VFPE_RCVUNEXPECTEDERROR(_VF) (0x00509C00 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_RCVUNEXPECTEDERROR_MAX_INDEX 255
+#define VFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_S 0
+#define VFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_M MAKEMASK(0xFFFFFF, 0)
+#define VFPE_TCPNOWTIMER(_VF) (0x00509400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_TCPNOWTIMER_MAX_INDEX 255
+#define VFPE_TCPNOWTIMER_TCP_NOW_S 0
+#define VFPE_TCPNOWTIMER_TCP_NOW_M MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_WQEALLOC(_VF) (0x00504000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_WQEALLOC_MAX_INDEX 255
+#define VFPE_WQEALLOC_PEQPID_S 0
+#define VFPE_WQEALLOC_PEQPID_M MAKEMASK(0x3FFFF, 0)
+#define VFPE_WQEALLOC_WQE_DESC_INDEX_S 20
+#define VFPE_WQEALLOC_WQE_DESC_INDEX_M MAKEMASK(0xFFF, 20)
+#define GLPES_PFIP4RXDISCARD(_i) (0x00541400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXDISCARD_MAX_INDEX 127
+#define GLPES_PFIP4RXDISCARD_IP4RXDISCARD_S 0
+#define GLPES_PFIP4RXDISCARD_IP4RXDISCARD_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXFRAGSHI(_i) (0x00541C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXFRAGSHI_MAX_INDEX 127
+#define GLPES_PFIP4RXFRAGSHI_IP4RXFRAGSHI_S 0
+#define GLPES_PFIP4RXFRAGSHI_IP4RXFRAGSHI_M MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXFRAGSLO(_i) (0x00541C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXFRAGSLO_MAX_INDEX 127
+#define GLPES_PFIP4RXFRAGSLO_IP4RXFRAGSLO_S 0
+#define GLPES_PFIP4RXFRAGSLO_IP4RXFRAGSLO_M MAKEMASK(0xFFFFFFFF, 0)
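[The GLPES_* protocol-engine statistics come in LO/HI pairs: a 32-bit low word
plus a 16-bit high word, i.e. 48-bit counters indexed per PF. A sketch of
combining one pair (a real implementation would also guard against the counter
rolling over between the two reads):

  u64 lo = rd32(hw, GLPES_PFIP4RXFRAGSLO(pf_id));
  u64 hi = rd32(hw, GLPES_PFIP4RXFRAGSHI(pf_id)) &
           GLPES_PFIP4RXFRAGSHI_IP4RXFRAGSHI_M;
  u64 ip4_rx_frags = (hi << 32) | lo;
]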
+#define GLPES_PFIP4RXMCOCTSHI(_i) (0x00542404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXMCOCTSHI_MAX_INDEX 127
+#define GLPES_PFIP4RXMCOCTSHI_IP4RXMCOCTSHI_S 0
+#define GLPES_PFIP4RXMCOCTSHI_IP4RXMCOCTSHI_M MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXMCOCTSLO(_i) (0x00542400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXMCOCTSLO_MAX_INDEX 127
+#define GLPES_PFIP4RXMCOCTSLO_IP4RXMCOCTSLO_S 0
+#define GLPES_PFIP4RXMCOCTSLO_IP4RXMCOCTSLO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXMCPKTSHI(_i) (0x00542C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXMCPKTSHI_MAX_INDEX 127
+#define GLPES_PFIP4RXMCPKTSHI_IP4RXMCPKTSHI_S 0
+#define GLPES_PFIP4RXMCPKTSHI_IP4RXMCPKTSHI_M MAKEMASK(0xFFFF, 0)