Discussion:
[dpdk-dev] [PATCH 00/17] unified packet type
Helin Zhang
2015-01-29 03:15:48 UTC
Currently only 6 bits stored in ol_flags are used to indicate the
packet types. This is not enough, as some NIC hardware can recognize
quite a lot of packet types, e.g. i40e hardware can recognize more than 150
packet types. Hiding those packet types hides hardware offload capabilities
which could be quite useful for improving performance and for end users.
So a unified packet type is needed to support all possible PMDs. Recently
a 16-bit packet_type field has been added to the mbuf header which can be
used for this purpose. In addition, all packet type flags stored in the
ol_flags field should be deleted entirely, and the 6 bits of ol_flags can
be saved as a benefit.

Initially, the 16 bits of packet_type can be divided into several sub-fields
to indicate different packet type information of a packet. The initial design
divides those bits into 4 fields for L3 types, tunnel types, inner L3
types and L4 types. All PMDs should translate the offloaded packet types
into these 4 fields of information for user applications.
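
As an illustration, an application could then classify received packets from
the packet_type field alone. A minimal sketch, assuming the RTE_PTYPE_* macros
defined in patch 01 (the function below is an example, not part of the series):

#include <rte_mbuf.h>

/* Count IPv4, IPv6 and tunnel packets in a burst using only the
 * unified packet_type field and the macros from patch 01. */
static void
count_ptypes(struct rte_mbuf **pkts, uint16_t nb,
	     uint64_t *ipv4, uint64_t *ipv6, uint64_t *tunnel)
{
	uint16_t i;

	for (i = 0; i < nb; i++) {
		uint16_t ptype = pkts[i]->packet_type;

		if (RTE_ETH_IS_IPV4_HDR(ptype))		/* bit 4 set */
			(*ipv4)++;
		else if (RTE_ETH_IS_IPV6_HDR(ptype))	/* bit 6 set */
			(*ipv6)++;
		if (RTE_ETH_IS_TUNNEL_PKT(ptype))	/* bits 3:0 non-zero */
			(*tunnel)++;
	}
}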

Helin Zhang (17):
mbuf: add definitions of unified packet types
e1000: support of unified packet type
ixgbe: support of unified packet type
ixgbe: support of unified packet type
i40e: support of unified packet type
bond: support of unified packet type
enic: support of unified packet type
vmxnet3: support of unified packet type
app/test-pipeline: support of unified packet type
app/test-pmd: support of unified packet type
app/test: support of unified packet type
examples/ip_fragmentation: support of unified packet type
examples/ip_reassembly: support of unified packet type
examples/l3fwd-acl: support of unified packet type
examples/l3fwd-power: support of unified packet type
examples/l3fwd: support of unified packet type
mbuf: remove old packet type bit masks for ol_flags

app/test-pipeline/pipeline_hash.c | 4 +-
app/test-pmd/csumonly.c | 6 +-
app/test-pmd/rxonly.c | 9 +-
app/test/packet_burst_generator.c | 10 +-
examples/ip_fragmentation/main.c | 7 +-
examples/ip_reassembly/main.c | 7 +-
examples/l3fwd-acl/main.c | 19 +-
examples/l3fwd-power/main.c | 5 +-
examples/l3fwd/main.c | 64 +--
lib/librte_mbuf/rte_mbuf.c | 6 -
lib/librte_mbuf/rte_mbuf.h | 84 +++-
lib/librte_pmd_bond/rte_eth_bond_pmd.c | 9 +-
lib/librte_pmd_e1000/igb_rxtx.c | 95 +++-
lib/librte_pmd_enic/enic_main.c | 14 +-
lib/librte_pmd_i40e/i40e_rxtx.c | 778 +++++++++++++++++++++------------
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 141 ++++--
lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c | 39 +-
lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 4 +-
18 files changed, 865 insertions(+), 436 deletions(-)
--
1.8.1.4
Helin Zhang
2015-01-29 03:15:49 UTC
There are only 6 bit flags in ol_flags for indicating packet types,
which is not enough to describe all the possible packet types hardware
can recognize. For example, i40e hardware can recognize more than 150
packet types. The unified packet type is composed of tunnel type, L3 type,
L4 type and inner L3 type fields, and can be stored in the 16-bit mbuf
field 'packet_type'.

Signed-off-by: Helin Zhang <***@intel.com>
Signed-off-by: Cunming Liang <***@intel.com>
Signed-off-by: Jijiang Liu <***@intel.com>
---
lib/librte_mbuf/rte_mbuf.h | 74 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 74 insertions(+)

diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 16059c6..94ae344 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -165,6 +165,80 @@ extern "C" {
/* Use final bit of flags to indicate a control mbuf */
#define CTRL_MBUF_FLAG (1ULL << 63) /**< Mbuf contains control data */

+/*
+ * Sixteen bits are divided into several fields to mark packet types. Note that
+ * each field is indexical.
+ * - Bit 3:0 is for tunnel types.
+ * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
+ * - Bit 10:8 is for L4 types. It can also be used for inner L4 types for
+ * tunneling packets.
+ * - Bit 13:11 is for inner L3 types.
+ * - Bit 15:14 is reserved.
+ *
+ * To be compitable with Vector PMD, RTE_PTYPE_L3_IPV4, RTE_PTYPE_L3_IPV4_EXT,
+ * RTE_PTYPE_L3_IPV6, RTE_PTYPE_L3_IPV6_EXT, RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP
+ * and RTE_PTYPE_L4_SCTP should be kept as below in a contiguous 7 bits.
+ *
+ * Note that L3 types values are selected for checking IPV4/IPV6 header from
+ * performance point of view. Reading annotations of RTE_ETH_IS_IPV4_HDR and
+ * RTE_ETH_IS_IPV6_HDR is needed for any future changes of L3 type values.
+ */
+#define RTE_PTYPE_UNKNOWN 0x0000 /* 0b0000000000000000 */
+/* bit 3:0 for tunnel types */
+#define RTE_PTYPE_TUNNEL_IP 0x0001 /* 0b0000000000000001 */
+#define RTE_PTYPE_TUNNEL_TCP 0x0002 /* 0b0000000000000010 */
+#define RTE_PTYPE_TUNNEL_UDP 0x0003 /* 0b0000000000000011 */
+#define RTE_PTYPE_TUNNEL_GRE 0x0004 /* 0b0000000000000100 */
+#define RTE_PTYPE_TUNNEL_VXLAN 0x0005 /* 0b0000000000000101 */
+#define RTE_PTYPE_TUNNEL_NVGRE 0x0006 /* 0b0000000000000110 */
+#define RTE_PTYPE_TUNNEL_GENEVE 0x0007 /* 0b0000000000000111 */
+#define RTE_PTYPE_TUNNEL_GRENAT 0x0008 /* 0b0000000000001000 */
+#define RTE_PTYPE_TUNNEL_GRENAT_MAC 0x0009 /* 0b0000000000001001 */
+#define RTE_PTYPE_TUNNEL_GRENAT_MACVLAN 0x000a /* 0b0000000000001010 */
+#define RTE_PTYPE_TUNNEL_MASK 0x000f /* 0b0000000000001111 */
+/* bit 7:4 for L3 types */
+#define RTE_PTYPE_L3_IPV4 0x0010 /* 0b0000000000010000 */
+#define RTE_PTYPE_L3_IPV4_EXT 0x0030 /* 0b0000000000110000 */
+#define RTE_PTYPE_L3_IPV6 0x0040 /* 0b0000000001000000 */
+#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x0090 /* 0b0000000010010000 */
+#define RTE_PTYPE_L3_IPV6_EXT 0x00c0 /* 0b0000000011000000 */
+#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x00e0 /* 0b0000000011100000 */
+#define RTE_PTYPE_L3_MASK 0x00f0 /* 0b0000000011110000 */
+/* bit 10:8 for L4 types */
+#define RTE_PTYPE_L4_TCP 0x0100 /* 0b0000000100000000 */
+#define RTE_PTYPE_L4_UDP 0x0200 /* 0b0000001000000000 */
+#define RTE_PTYPE_L4_FRAG 0x0300 /* 0b0000001100000000 */
+#define RTE_PTYPE_L4_SCTP 0x0400 /* 0b0000010000000000 */
+#define RTE_PTYPE_L4_ICMP 0x0500 /* 0b0000010100000000 */
+#define RTE_PTYPE_L4_NONFRAG 0x0600 /* 0b0000011000000000 */
+#define RTE_PTYPE_L4_MASK 0x0700 /* 0b0000011100000000 */
+/* bit 13:11 for inner L3 types */
+#define RTE_PTYPE_INNER_L3_IPV4 0x0800 /* 0b0000100000000000 */
+#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x1000 /* 0b0001000000000000 */
+#define RTE_PTYPE_INNER_L3_IPV6 0x1800 /* 0b0001100000000000 */
+#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x2000 /* 0b0010000000000000 */
+#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x2800 /* 0b0010100000000000 */
+#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x3000 /* 0b0011000000000000 */
+#define RTE_PTYPE_INNER_L3_MASK 0x3800 /* 0b0011100000000000 */
+/* bit 15:14 reserved */
+
+/**
+ * Check if the (outer) L3 header is IPv4. To avoid comparing IPv4 types one by
+ * one, bit 4 is selected to be used for IPv4 only. Then checking bit 4 can
+ * determine if it is an IPv4 packet.
+ */
+#define RTE_ETH_IS_IPV4_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV4)
+
+/**
+ * Check if the (outer) L3 header is IPv6. To avoid comparing IPv6 types one by
+ * one, bit 6 is selected to be used for IPv6 only. Then checking bit 6 can
+ * determine if it is an IPv6 packet.
+ */
+#define RTE_ETH_IS_IPV6_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV6)
+
+/* Check if it is a tunneling packet */
+#define RTE_ETH_IS_TUNNEL_PKT(ptype) ((ptype) & RTE_PTYPE_TUNNEL_MASK)
+
/**
* Get the name of a RX offload flag
*
--
1.8.1.4
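
For reference, a PMD translating a recognized packet would OR together one
value from each sub-field above. A minimal sketch, assuming the definitions
from this patch (the example value is illustrative, and mb stands for the
received struct rte_mbuf pointer):

/* e.g. an IPv4 outer header carrying a VXLAN tunnel with an inner
 * IPv6/UDP packet; one value per sub-field, OR'ed into packet_type */
uint16_t ptype = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_VXLAN |
		 RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_L4_UDP;
mb->packet_type = ptype;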
Olivier MATZ
2015-01-30 13:56:17 UTC
Hi Helin,
Post by Helin Zhang
As there are only 6 bit flags in ol_flags for indicating packet types,
which is not enough to describe all the possible packet types hardware
can recognize. For example, i40e hardware can recognize more than 150
packet types. Unified packet type is composed of tunnel type, L3 type,
L4 type and inner L3 type fields, and can be stored in 16 bits mbuf
field of 'packet_type'.
---
lib/librte_mbuf/rte_mbuf.h | 74 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 74 insertions(+)
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 16059c6..94ae344 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -165,6 +165,80 @@ extern "C" {
/* Use final bit of flags to indicate a control mbuf */
#define CTRL_MBUF_FLAG (1ULL << 63) /**< Mbuf contains control data */
+/*
+ * Sixteen bits are divided into several fields to mark packet types. Note that
+ * each field is indexical.
+ * - Bit 3:0 is for tunnel types.
+ * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
+ * - Bit 10:8 is for L4 types. It can also be used for inner L4 types for
+ * tunneling packets.
+ * - Bit 13:11 is for inner L3 types.
+ * - Bit 15:14 is reserved.
Is there a reason why using this specific order?

Also, there are 4 bits for outer L3 types and 3 bits for inner L3
types, but both of them have 6 different supported types. Is it
intentional?
Post by Helin Zhang
+ *
+ * To be compitable with Vector PMD, RTE_PTYPE_L3_IPV4, RTE_PTYPE_L3_IPV4_EXT,
compitable -> compatible
Post by Helin Zhang
+ * RTE_PTYPE_L3_IPV6, RTE_PTYPE_L3_IPV6_EXT, RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP
+ * and RTE_PTYPE_L4_SCTP should be kept as below in a contiguous 7 bits.
+ *
+ * Note that L3 types values are selected for checking IPV4/IPV6 header from
+ * performance point of view. Reading annotations of RTE_ETH_IS_IPV4_HDR and
+ * RTE_ETH_IS_IPV6_HDR is needed for any future changes of L3 type values.
+ */
+#define RTE_PTYPE_UNKNOWN 0x0000 /* 0b0000000000000000 */
+/* bit 3:0 for tunnel types */
+#define RTE_PTYPE_TUNNEL_IP 0x0001 /* 0b0000000000000001 */
+#define RTE_PTYPE_TUNNEL_TCP 0x0002 /* 0b0000000000000010 */
+#define RTE_PTYPE_TUNNEL_UDP 0x0003 /* 0b0000000000000011 */
+#define RTE_PTYPE_TUNNEL_GRE 0x0004 /* 0b0000000000000100 */
+#define RTE_PTYPE_TUNNEL_VXLAN 0x0005 /* 0b0000000000000101 */
+#define RTE_PTYPE_TUNNEL_NVGRE 0x0006 /* 0b0000000000000110 */
+#define RTE_PTYPE_TUNNEL_GENEVE 0x0007 /* 0b0000000000000111 */
+#define RTE_PTYPE_TUNNEL_GRENAT 0x0008 /* 0b0000000000001000 */
+#define RTE_PTYPE_TUNNEL_GRENAT_MAC 0x0009 /* 0b0000000000001001 */
+#define RTE_PTYPE_TUNNEL_GRENAT_MACVLAN 0x000a /* 0b0000000000001010 */
+#define RTE_PTYPE_TUNNEL_MASK 0x000f /* 0b0000000000001111 */
+/* bit 7:4 for L3 types */
+#define RTE_PTYPE_L3_IPV4 0x0010 /* 0b0000000000010000 */
+#define RTE_PTYPE_L3_IPV4_EXT 0x0030 /* 0b0000000000110000 */
+#define RTE_PTYPE_L3_IPV6 0x0040 /* 0b0000000001000000 */
+#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x0090 /* 0b0000000010010000 */
+#define RTE_PTYPE_L3_IPV6_EXT 0x00c0 /* 0b0000000011000000 */
+#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x00e0 /* 0b0000000011100000 */
+#define RTE_PTYPE_L3_MASK 0x00f0 /* 0b0000000011110000 */
can we expect that when RTE_PTYPE_L3_IPV4, RTE_PTYPE_L3_IPV4_EXT or
RTE_PTYPE_L3_IPV4_EXT_UNKNOWN is set, the hardware also verified the
L3 checksum?

My understanding is:

- if packet_type is IPv4* and PKT_RX_IP_CKSUM_BAD is 0
-> checksum was checked by hw and is good
- if packet_type is IPv4* and PKT_RX_IP_CKSUM_BAD is 1
-> checksum was checked by hw and is bad
- if packet_type is not IPv4*
-> checksum was not checked by hw

I think it would solve the problem asked by Stephen
http://dpdk.org/ml/archives/dev/2015-January/011550.html
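
A minimal sketch of the semantics I have in mind (assuming the RTE_PTYPE_*
macros from this patch; the helper name is illustrative only):

/* -1: not checked by hw, 0: checked and good, 1: checked and bad */
static inline int
l3_csum_state(const struct rte_mbuf *m)
{
	if (!RTE_ETH_IS_IPV4_HDR(m->packet_type))
		return -1;
	return (m->ol_flags & PKT_RX_IP_CKSUM_BAD) ? 1 : 0;
}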
Post by Helin Zhang
+/* bit 10:8 for L4 types */
+#define RTE_PTYPE_L4_TCP 0x0100 /* 0b0000000100000000 */
+#define RTE_PTYPE_L4_UDP 0x0200 /* 0b0000001000000000 */
+#define RTE_PTYPE_L4_FRAG 0x0300 /* 0b0000001100000000 */
+#define RTE_PTYPE_L4_SCTP 0x0400 /* 0b0000010000000000 */
+#define RTE_PTYPE_L4_ICMP 0x0500 /* 0b0000010100000000 */
+#define RTE_PTYPE_L4_NONFRAG 0x0600 /* 0b0000011000000000 */
+#define RTE_PTYPE_L4_MASK 0x0700 /* 0b0000011100000000 */
Same question for L4.

Note: it would mean that if hardware is able to recognize a TCP
packet but not to verify the checksum, it has to set RTE_PTYPE_L4 to
unknown.
Post by Helin Zhang
+/* bit 13:11 for inner L3 types */
+#define RTE_PTYPE_INNER_L3_IPV4 0x0800 /* 0b0000100000000000 */
+#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x1000 /* 0b0001000000000000 */
+#define RTE_PTYPE_INNER_L3_IPV6 0x1800 /* 0b0001100000000000 */
+#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x2000 /* 0b0010000000000000 */
+#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x2800 /* 0b0010100000000000 */
+#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x3000 /* 0b0011000000000000 */
+#define RTE_PTYPE_INNER_L3_MASK 0x3800 /* 0b0011100000000000 */
+/* bit 15:14 reserved */
+
+/**
+ * Check if the (outer) L3 header is IPv4. To avoid comparing IPv4 types one by
+ * one, bit 4 is selected to be used for IPv4 only. Then checking bit 4 can
+ * determine if it is an IPv4 packet.
+ */
+#define RTE_ETH_IS_IPV4_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV4)
+
+/**
+ * Check if the (outer) L3 header is IPv6. To avoid comparing IPv6 types one by
+ * one, bit 6 is selected to be used for IPv6 only. Then checking bit 6 can
+ * determine if it is an IPv6 packet.
+ */
+#define RTE_ETH_IS_IPV6_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV6)
+
+/* Check if it is a tunneling packet */
+#define RTE_ETH_IS_TUNNEL_PKT(ptype) ((ptype) & RTE_PTYPE_TUNNEL_MASK)
+
/**
* Get the name of a RX offload flag
*
Thanks,
Olivier
Zhang, Helin
2015-02-02 01:43:25 UTC
Hi Olivier
-----Original Message-----
Sent: Friday, January 30, 2015 9:56 PM
Cc: Stephen Hemminger
Subject: Re: [dpdk-dev] [PATCH 01/17] mbuf: add definitions of unified packet
types
Hi Helin,
Post by Helin Zhang
As there are only 6 bit flags in ol_flags for indicating packet types,
which is not enough to describe all the possible packet types hardware
can recognize. For example, i40e hardware can recognize more than 150
packet types. Unified packet type is composed of tunnel type, L3 type,
L4 type and inner L3 type fields, and can be stored in 16 bits mbuf
field of 'packet_type'.
---
lib/librte_mbuf/rte_mbuf.h | 74
++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 74 insertions(+)
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 16059c6..94ae344 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -165,6 +165,80 @@ extern "C" {
/* Use final bit of flags to indicate a control mbuf */
#define CTRL_MBUF_FLAG (1ULL << 63) /**< Mbuf contains control data */
Post by Helin Zhang
+/*
+ * Sixteen bits are divided into several fields to mark packet types. Note that
+ * each field is indexical.
+ * - Bit 3:0 is for tunnel types.
+ * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
+ * - Bit 10:8 is for L4 types. It can also be used for inner L4 types for
+ * tunneling packets.
+ * - Bit 13:11 is for inner L3 types.
+ * - Bit 15:14 is reserved.
Is there a reason why using this specific order?
Yes, to support ixgbe Vector PMD, outer L3 types and L4 types need to be contiguous
and in this order.
Also, there are 4 bits for outer L3 types and 3 bits for inner L3 types, but both of
them have 6 different supported types. Is it intentional?
Yes, it is to support ixgbe Vector PMD. Contiguous 7 bits are needed, though 1 bit wasted.
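
(To illustrate the point: with the outer L3 and L4 values placed contiguously
in bits 10:4, those seven bits can be extracted with a single shift-and-mask
and used directly, e.g. as a lookup index; the snippet is illustrative only:)

/* the seven bits the vector PMD relies on sit contiguously in 10:4 */
uint16_t l3_l4 = (ptype >> 4) & 0x7f;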
Post by Helin Zhang
+ *
+ * To be compitable with Vector PMD, RTE_PTYPE_L3_IPV4,
+ RTE_PTYPE_L3_IPV4_EXT,
compitable -> compatible
Good catch! It will be fixed in next version. Thanks!
Post by Helin Zhang
+ * RTE_PTYPE_L3_IPV6, RTE_PTYPE_L3_IPV6_EXT, RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP
+ * and RTE_PTYPE_L4_SCTP should be kept as below in a contiguous 7 bits.
+ *
+ * Note that L3 types values are selected for checking IPV4/IPV6 header from
+ * performance point of view. Reading annotations of RTE_ETH_IS_IPV4_HDR and
+ * RTE_ETH_IS_IPV6_HDR is needed for any future changes of L3 type values.
+ */
+#define RTE_PTYPE_UNKNOWN 0x0000 /* 0b0000000000000000 */
+/* bit 3:0 for tunnel types */
+#define RTE_PTYPE_TUNNEL_IP 0x0001 /* 0b0000000000000001 */
+#define RTE_PTYPE_TUNNEL_TCP 0x0002 /* 0b0000000000000010 */
+#define RTE_PTYPE_TUNNEL_UDP 0x0003 /* 0b0000000000000011 */
+#define RTE_PTYPE_TUNNEL_GRE 0x0004 /* 0b0000000000000100 */
+#define RTE_PTYPE_TUNNEL_VXLAN 0x0005 /* 0b0000000000000101 */
+#define RTE_PTYPE_TUNNEL_NVGRE 0x0006 /* 0b0000000000000110 */
+#define RTE_PTYPE_TUNNEL_GENEVE 0x0007 /* 0b0000000000000111 */
+#define RTE_PTYPE_TUNNEL_GRENAT 0x0008 /* 0b0000000000001000 */
+#define RTE_PTYPE_TUNNEL_GRENAT_MAC 0x0009 /* 0b0000000000001001 */
+#define RTE_PTYPE_TUNNEL_GRENAT_MACVLAN 0x000a /* 0b0000000000001010 */
+#define RTE_PTYPE_TUNNEL_MASK 0x000f /* 0b0000000000001111 */
+/* bit 7:4 for L3 types */
+#define RTE_PTYPE_L3_IPV4 0x0010 /* 0b0000000000010000 */
+#define RTE_PTYPE_L3_IPV4_EXT 0x0030 /* 0b0000000000110000 */
+#define RTE_PTYPE_L3_IPV6 0x0040 /* 0b0000000001000000 */
+#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x0090 /* 0b0000000010010000 */
+#define RTE_PTYPE_L3_IPV6_EXT 0x00c0 /* 0b0000000011000000 */
+#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x00e0 /* 0b0000000011100000 */
+#define RTE_PTYPE_L3_MASK 0x00f0 /* 0b0000000011110000 */
can we expect that when RTE_PTYPE_L3_IPV4, RTE_PTYPE_L3_IPV4_EXT or
RTE_PTYPE_L3_IPV4_EXT_UNKNOWN is set, the hardware also verified the
L3 checksum?
RTE_PTYPE_L3_IPV4 means there is NONE-EXT. Each time only one of above 3 can be set.
These bits don't indicate any checksum, checksum should be indicated by other flags.
They are just for packet types hardware can recognized.
- if packet_type is IPv4* and PKT_RX_IP_CKSUM_BAD is 0
-> checksum was checked by hw and is good
- if packet_type is IPv4* and PKT_RX_IP_CKSUM_BAD is 1
-> checksum was checked by hw and is bad
- if packet_type is not IPv4*
-> checksum was not checked by hw
I think it would solve the problem asked by Stephen
http://dpdk.org/ml/archives/dev/2015-January/011550.html
Post by Helin Zhang
+/* bit 10:8 for L4 types */
+#define RTE_PTYPE_L4_TCP 0x0100 /* 0b0000000100000000 */
+#define RTE_PTYPE_L4_UDP 0x0200 /* 0b0000001000000000 */
+#define RTE_PTYPE_L4_FRAG 0x0300 /* 0b0000001100000000 */
+#define RTE_PTYPE_L4_SCTP 0x0400 /* 0b0000010000000000 */
+#define RTE_PTYPE_L4_ICMP 0x0500 /* 0b0000010100000000 */
+#define RTE_PTYPE_L4_NONFRAG 0x0600 /* 0b0000011000000000 */
+#define RTE_PTYPE_L4_MASK 0x0700 /* 0b0000011100000000 */
Same question for L4.
Note: it would means that if a hardware is able to recognize a TCP packet but
not to verify the checksum, it has to set RTE_PTYPE_L4 to unknown.
Post by Helin Zhang
+/* bit 13:11 for inner L3 types */
+#define RTE_PTYPE_INNER_L3_IPV4 0x0800 /* 0b0000100000000000 */
+#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x1000 /* 0b0001000000000000 */
+#define RTE_PTYPE_INNER_L3_IPV6 0x1800 /* 0b0001100000000000 */
+#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x2000 /* 0b0010000000000000 */
+#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x2800 /* 0b0010100000000000 */
+#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x3000 /* 0b0011000000000000 */
We cannot define the hardware behaviors, it just reports the hardware recognized
packet information directly to the mbuf.
Based on my experiment on i40e hardware, if a IPV4 packet with wrong checksum,
by default, the PMD driver cannot see the packet at all. So we don't need to care
about it too much!
Thanks for the good comments!
Post by Helin Zhang
+#define RTE_PTYPE_INNER_L3_MASK 0x3800 /* 0b0011100000000000 */
+/* bit 15:14 reserved */
+
+/**
+ * Check if the (outer) L3 header is IPv4. To avoid comparing IPv4 types one by
+ * one, bit 4 is selected to be used for IPv4 only. Then checking bit 4 can
+ * determine if it is an IPv4 packet.
+ */
+#define RTE_ETH_IS_IPV4_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV4)
+
+/**
+ * Check if the (outer) L3 header is IPv6. To avoid comparing IPv6 types one by
+ * one, bit 6 is selected to be used for IPv6 only. Then checking bit 6 can
+ * determine if it is an IPv6 packet.
+ */
+#define RTE_ETH_IS_IPV6_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV6)
+
+/* Check if it is a tunneling packet */
+#define RTE_ETH_IS_TUNNEL_PKT(ptype) ((ptype) & RTE_PTYPE_TUNNEL_MASK)
+
/**
* Get the name of a RX offload flag
*
Thanks,
Olivier
Olivier MATZ
2015-02-02 11:18:16 UTC
Hi Helin,
Post by Zhang, Helin
Post by Olivier MATZ
Post by Helin Zhang
+/*
+ * Sixteen bits are divided into several fields to mark packet types. Note that
+ * each field is indexical.
+ * - Bit 3:0 is for tunnel types.
+ * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
+ * - Bit 10:8 is for L4 types. It can also be used for inner L4 types for
+ * tunneling packets.
+ * - Bit 13:11 is for inner L3 types.
+ * - Bit 15:14 is reserved.
Is there a reason why using this specific order?
Yes, to support ixgbe Vector PMD, outer L3 types and L4 types need to be contiguous
and in this order.
When you say "need to be", do you mean it's impossible to do in another
manner or just that it would be slower?
Post by Zhang, Helin
Post by Olivier MATZ
Also, there are 4 bits for outer L3 types and 3 bits for inner L3 types, but both of
them have 6 different supported types. Is it intentional?
Yes, it is to support ixgbe Vector PMD. Contiguous 7 bits are needed, though 1 bit wasted.
To be honest, I'm always a bit surprised that in dpdk we prefer having
a strange API just because it's faster or easier to do on one
specific driver (usually i40e or ixgbe). Unfortunately, trying to
optimize the API for one driver may result in making the rest of the
code (application and other drivers) slower and more complex.

In your proposition, there is no inner l4_type. I consider it's as
useful as the other fields. From what I see, there are only 2 bits
left. What do you think about changing the packet type to 64 bits now?

From an API point of view, I think it would be good to have the
same structure for inner and outer types. For instance (this is
just an example):

union layer_pkt_type {
	struct {
		uint16_t l2_type:4;
		uint16_t l3_type:4;
		uint16_t l4_type:4;
		uint16_t tun_type:4;
	};
	uint16_t u16;
};

struct pkt_type {
	union layer_pkt_type outer;
	union layer_pkt_type inner;
};

When your application decapsulates tunnels, you can just do
outer = inner and enter into the same code.
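
A sketch of that decapsulation step, using the types proposed above
(illustrative only; the helper name is not part of any API):

static inline void
pkt_type_decap(struct pkt_type *pt)
{
	pt->outer = pt->inner;	/* the inner layers become the outer ones */
	pt->inner.u16 = 0;	/* no inner layers left */
}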
Post by Zhang, Helin
Post by Olivier MATZ
Post by Helin Zhang
+ * RTE_PTYPE_L3_IPV6, RTE_PTYPE_L3_IPV6_EXT, RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP
+ * and RTE_PTYPE_L4_SCTP should be kept as below in a contiguous 7 bits.
+ *
+ * Note that L3 types values are selected for checking IPV4/IPV6 header from
+ * performance point of view. Reading annotations of RTE_ETH_IS_IPV4_HDR and
+ * RTE_ETH_IS_IPV6_HDR is needed for any future changes of L3 type values.
+ */
+#define RTE_PTYPE_UNKNOWN 0x0000 /* 0b0000000000000000 */
+/* bit 3:0 for tunnel types */
+#define RTE_PTYPE_TUNNEL_IP 0x0001 /* 0b0000000000000001 */
+#define RTE_PTYPE_TUNNEL_TCP 0x0002 /* 0b0000000000000010 */
+#define RTE_PTYPE_TUNNEL_UDP 0x0003 /* 0b0000000000000011 */
+#define RTE_PTYPE_TUNNEL_GRE 0x0004 /* 0b0000000000000100 */
+#define RTE_PTYPE_TUNNEL_VXLAN 0x0005 /* 0b0000000000000101 */
+#define RTE_PTYPE_TUNNEL_NVGRE 0x0006 /* 0b0000000000000110 */
+#define RTE_PTYPE_TUNNEL_GENEVE 0x0007 /* 0b0000000000000111 */
+#define RTE_PTYPE_TUNNEL_GRENAT 0x0008 /* 0b0000000000001000 */
+#define RTE_PTYPE_TUNNEL_GRENAT_MAC 0x0009 /* 0b0000000000001001 */
+#define RTE_PTYPE_TUNNEL_GRENAT_MACVLAN 0x000a /* 0b0000000000001010 */
+#define RTE_PTYPE_TUNNEL_MASK 0x000f /* 0b0000000000001111 */
+/* bit 7:4 for L3 types */
+#define RTE_PTYPE_L3_IPV4 0x0010 /* 0b0000000000010000 */
+#define RTE_PTYPE_L3_IPV4_EXT 0x0030 /* 0b0000000000110000 */
+#define RTE_PTYPE_L3_IPV6 0x0040 /* 0b0000000001000000 */
+#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x0090 /* 0b0000000010010000 */
+#define RTE_PTYPE_L3_IPV6_EXT 0x00c0 /* 0b0000000011000000 */
+#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x00e0 /* 0b0000000011100000 */
+#define RTE_PTYPE_L3_MASK 0x00f0 /* 0b0000000011110000 */
can we expect that when RTE_PTYPE_L3_IPV4, RTE_PTYPE_L3_IPV4_EXT or
RTE_PTYPE_L3_IPV4_EXT_UNKNOWN is set, the hardware also verified the
L3 checksum?
RTE_PTYPE_L3_IPV4 means there is NONE-EXT. Each time only one of above 3 can be set.
These bits don't indicate any checksum, checksum should be indicated by other flags.
They are just for packet types hardware can recognized.
I think these 2 pieces of information are linked:

- if the hardware cannot recognize packet, it cannot calculate the
checksum because it does not know the packet type
- if the hardware can recognize the packet, it can verify that the
checksum is good or wrong

Today, we have:

- PKT_RX_IPV4_HDR and PKT_RX_IPV4_HDR_EXT to tell if the packet is
seen as IPv4 by the hw.

- We can suppose that:

- PKT_RX_IPV4_HDR(_EXT)=0 -> no hw checksum information
- PKT_RX_IPV4_HDR(_EXT)=1 and PKT_RX_IP_CKSUM_BAD=0 -> checksum is correct
- PKT_RX_IPV4_HDR(_EXT)=1 and PKT_RX_IP_CKSUM_BAD=1 -> checksum is not correct

- We cannot do the same with L4 because we have no L4 type info,
but it would be good to be able to do the same.

With your patch, you are removing the PKT_RX_IPV4_HDR and
PKT_RX_IPV4_HDR_EXT flags, but I think the above assumption
about checksum should be kept. As you are adding a L4 type
info, the same method could be applied to L4 checksums.

I think this would definitely solve the problem described by Stephen.
Post by Zhang, Helin
Post by Olivier MATZ
- if packet_type is IPv4* and PKT_RX_IP_CKSUM_BAD is 0
-> checksum was checked by hw and is good
- if packet_type is IPv4* and PKT_RX_IP_CKSUM_BAD is 1
-> checksum was checked by hw and is bad
- if packet_type is not IPv4*
-> checksum was not checked by hw
I think it would solve the problem asked by Stephen
http://dpdk.org/ml/archives/dev/2015-January/011550.html
Post by Helin Zhang
+/* bit 10:8 for L4 types */
+#define RTE_PTYPE_L4_TCP 0x0100 /* 0b0000000100000000 */
+#define RTE_PTYPE_L4_UDP 0x0200 /* 0b0000001000000000 */
+#define RTE_PTYPE_L4_FRAG 0x0300 /* 0b0000001100000000 */
+#define RTE_PTYPE_L4_SCTP 0x0400 /* 0b0000010000000000 */
+#define RTE_PTYPE_L4_ICMP 0x0500 /* 0b0000010100000000 */
+#define RTE_PTYPE_L4_NONFRAG 0x0600 /* 0b0000011000000000 */
+#define RTE_PTYPE_L4_MASK 0x0700 /* 0b0000011100000000 */
Same question for L4.
Note: it would means that if a hardware is able to recognize a TCP packet but
not to verify the checksum, it has to set RTE_PTYPE_L4 to unknown.
Post by Helin Zhang
+/* bit 13:11 for inner L3 types */
+#define RTE_PTYPE_INNER_L3_IPV4 0x0800 /* 0b0000100000000000 */
+#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x1000 /* 0b0001000000000000 */
+#define RTE_PTYPE_INNER_L3_IPV6 0x1800 /* 0b0001100000000000 */
+#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x2000 /* 0b0010000000000000 */
+#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x2800 /* 0b0010100000000000 */
+#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x3000 /* 0b0011000000000000 */
We cannot define the hardware behaviors, it just reports the hardware recognized
packet information directly to the mbuf.
Based on my experiment on i40e hardware, if a IPV4 packet with wrong checksum,
by default, the PMD driver cannot see the packet at all. So we don't need to care
about it too much!
I agree that the hardware reports some info that can be different
depending on the hw. But the role of the driver is to convert this info
into a common API with a well-defined behavior.

Regards,
Olivier
Zhang, Helin
2015-02-03 03:18:59 UTC
-----Original Message-----
Sent: Monday, February 2, 2015 7:18 PM
Cc: Stephen Hemminger
Subject: Re: [dpdk-dev] [PATCH 01/17] mbuf: add definitions of unified packet
types
Hi Helin,
Post by Zhang, Helin
Post by Olivier MATZ
Post by Helin Zhang
+/*
+ * Sixteen bits are divided into several fields to mark packet types. Note that
+ * each field is indexical.
+ * - Bit 3:0 is for tunnel types.
+ * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
+ * - Bit 10:8 is for L4 types. It can also be used for inner L4 types for
+ * tunneling packets.
+ * - Bit 13:11 is for inner L3 types.
+ * - Bit 15:14 is reserved.
Is there a reason why using this specific order?
Yes, to support ixgbe Vector PMD, outer L3 types and L4 types need to
be contiguous and in this order.
When you say "need to be", do you mean it's impossible to do in another
manner or just that it would be slower?
It was designed to be like this; otherwise, a performance drop must be expected.
Post by Zhang, Helin
Post by Olivier MATZ
Also, there are 4 bits for outer L3 types and 3 bits for inner L3
types, but both of them have 6 different supported types. Is it intentional?
Yes, it is to support ixgbe Vector PMD. Contiguous 7 bits are needed, though
1 bit wasted.
To be honnest, I'm always a surprised that in dpdk we prefer having a strange
API just because it's faster or easier to do on one specific driver (usually i40e or
ixgbe). Unfortunately, trying to optimize the API for one driver may result in
making the rest of the code (application and other drivers) slower and more
complex.
Based on my understanding, 'faster' is what most DPDK customers want. Otherwise,
they don't need DPDK. Different hardware must have different capabilities; I am
trying to unify at least packet types to make things easier.
In your proposition, there is no inner l4_type. I consider it's as useful as the
other fields. From what I see, there are only 2 bits left. What do you think about
changing the packet type to 64 bits now?
For tunneling cases, L4_type is for inner L4 type, outer L4 type is not needed, as it
can be in tunnel type.
I can expect 64 bits are needed in the future. But for now, I don't see any strong
demand on that for currently supported hardware.
In addition, there is no free bit in the first cache line of mbuf header, mbuf changes
are needed to expand it. I'd prefer to do it later to make things easier.
From an API point of view, I think it would be good to have the same structure
union layer_pkt_type {
struct {
uint16_t l2_type:4;
uint16_t l3_type:4;
uint16_t l4_type:4;
uint16_t tun_type:4;
};
uint16_t u16;
};
struct pkt_type {
union layer_pkt_type outer;
union layer_pkt_type inner;
};
When your application decapsulates tunnels, you can just do outer = inner and
enter into the same code.
Expanding packet_type is not easy, as there are no free bits in the first cache line.
Is there any tunnel type in inner packet? Is it a waste?
Is L2 type really needed? I don't know.
Post by Zhang, Helin
Post by Olivier MATZ
Post by Helin Zhang
+ * RTE_PTYPE_L3_IPV6, RTE_PTYPE_L3_IPV6_EXT, RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP
+ * and RTE_PTYPE_L4_SCTP should be kept as below in a contiguous 7 bits.
+ *
+ * Note that L3 types values are selected for checking IPV4/IPV6 header from
+ * performance point of view. Reading annotations of RTE_ETH_IS_IPV4_HDR and
+ * RTE_ETH_IS_IPV6_HDR is needed for any future changes of L3 type values.
+ */
+#define RTE_PTYPE_UNKNOWN 0x0000 /* 0b0000000000000000 */
+/* bit 3:0 for tunnel types */
+#define RTE_PTYPE_TUNNEL_IP 0x0001 /* 0b0000000000000001 */
+#define RTE_PTYPE_TUNNEL_TCP 0x0002 /* 0b0000000000000010 */
+#define RTE_PTYPE_TUNNEL_UDP 0x0003 /* 0b0000000000000011 */
+#define RTE_PTYPE_TUNNEL_GRE 0x0004 /* 0b0000000000000100 */
+#define RTE_PTYPE_TUNNEL_VXLAN 0x0005 /* 0b0000000000000101 */
+#define RTE_PTYPE_TUNNEL_NVGRE 0x0006 /* 0b0000000000000110 */
+#define RTE_PTYPE_TUNNEL_GENEVE 0x0007 /* 0b0000000000000111 */
+#define RTE_PTYPE_TUNNEL_GRENAT 0x0008 /* 0b0000000000001000 */
+#define RTE_PTYPE_TUNNEL_GRENAT_MAC 0x0009 /* 0b0000000000001001 */
+#define RTE_PTYPE_TUNNEL_GRENAT_MACVLAN 0x000a /* 0b0000000000001010 */
+#define RTE_PTYPE_TUNNEL_MASK 0x000f /* 0b0000000000001111 */
+/* bit 7:4 for L3 types */
+#define RTE_PTYPE_L3_IPV4 0x0010 /* 0b0000000000010000 */
+#define RTE_PTYPE_L3_IPV4_EXT 0x0030 /* 0b0000000000110000 */
+#define RTE_PTYPE_L3_IPV6 0x0040 /* 0b0000000001000000 */
+#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x0090 /* 0b0000000010010000 */
+#define RTE_PTYPE_L3_IPV6_EXT 0x00c0 /* 0b0000000011000000 */
+#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x00e0 /* 0b0000000011100000 */
+#define RTE_PTYPE_L3_MASK 0x00f0 /* 0b0000000011110000 */
can we expect that when RTE_PTYPE_L3_IPV4, RTE_PTYPE_L3_IPV4_EXT or
RTE_PTYPE_L3_IPV4_EXT_UNKNOWN is set, the hardware also verified the
L3 checksum?
RTE_PTYPE_L3_IPV4 means there is NONE-EXT. Each time only one of above 3
can be set.
Post by Zhang, Helin
These bits don't indicate any checksum, checksum should be indicated by
other flags.
Post by Zhang, Helin
They are just for packet types hardware can recognized.
- if the hardware cannot recognize packet, it cannot calculate the
checksum because it does not know the packet type
- if the hardware can recognize the packet, it can verify that the
checksum is good or wrong
We cannot know how hardware works; we care about what hardware can report.
- PKT_RX_IPV4_HDR and PKT_RX_IPV4_HDR_EXT to tell if the packet is
seen as IPv4 by the hw.
- PKT_RX_IPV4_HDR(_EXT)=0 -> no hw checksum information
- PKT_RX_IPV4_HDR(_EXT)=1 and PKT_RX_IP_CKSUM_BAD=0 -> checksum is correct
- PKT_RX_IPV4_HDR(_EXT)=1 and PKT_RX_IP_CKSUM_BAD=1 -> checksum is not correct
- We cannot do the same with L4 because we have no L4 type info,
but it would be good to be able to do the same.
With your patch, you are removing the PKT_RX_IPV4_HDR and
PKT_RX_IPV4_HDR_EXT flags, but I think the above assumption about
checksum should be kept. As you are adding a L4 type info, the same method
could be applied to L4 checksums.
I think this would definitely solve the problem described by Stephen.
I think packet type and checksum are different things. They are reported by different fields.
PKT_RX_IPV4_HDR and PKT_RX_IPV4_HDR_EXT mean packet type only,
nothing about checksum. Checksum GOOD/BAD can be reported by other flags in ol_flags.
Post by Zhang, Helin
Post by Olivier MATZ
- if packet_type is IPv4* and PKT_RX_IP_CKSUM_BAD is 0
-> checksum was checked by hw and is good
- if packet_type is IPv4* and PKT_RX_IP_CKSUM_BAD is 1
-> checksum was checked by hw and is bad
- if packet_type is not IPv4*
-> checksum was not checked by hw
I think it would solve the problem asked by Stephen
http://dpdk.org/ml/archives/dev/2015-January/011550.html
Post by Helin Zhang
+/* bit 10:8 for L4 types */
+#define RTE_PTYPE_L4_TCP 0x0100 /* 0b0000000100000000 */
+#define RTE_PTYPE_L4_UDP 0x0200 /* 0b0000001000000000 */
+#define RTE_PTYPE_L4_FRAG 0x0300 /* 0b0000001100000000 */
+#define RTE_PTYPE_L4_SCTP 0x0400 /* 0b0000010000000000 */
+#define RTE_PTYPE_L4_ICMP 0x0500 /* 0b0000010100000000 */
+#define RTE_PTYPE_L4_NONFRAG 0x0600 /* 0b0000011000000000 */
+#define RTE_PTYPE_L4_MASK 0x0700 /* 0b0000011100000000 */
Same question for L4.
Note: it would means that if a hardware is able to recognize a TCP
packet but not to verify the checksum, it has to set RTE_PTYPE_L4 to
unknown.
Post by Zhang, Helin
Post by Olivier MATZ
Post by Helin Zhang
+/* bit 13:11 for inner L3 types */
+#define RTE_PTYPE_INNER_L3_IPV4 0x0800 /* 0b0000100000000000 */
+#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x1000 /* 0b0001000000000000 */
+#define RTE_PTYPE_INNER_L3_IPV6 0x1800 /* 0b0001100000000000 */
+#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x2000 /* 0b0010000000000000 */
+#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x2800 /* 0b0010100000000000 */
+#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x3000 /* 0b0011000000000000 */
Post by Zhang, Helin
We cannot define the hardware behaviors, it just reports the hardware
recognized packet information directly to the mbuf.
Based on my experiment on i40e hardware, if a IPV4 packet with wrong
checksum, by default, the PMD driver cannot see the packet at all. So
we don't need to care about it too much!
I agree that the hardware reports some info that can be different depending on
the hw. But the role of the driver is to convert these info into a common API
with a well-defined behavior.
Yes, the driver should report the received packet information with a well-defined
behavior, but not necessarily the same behavior, even for the same packet.
Capability can be queried for each port, and then the application can know the port
capability well, and know what the hardware can and cannot report.
The driver should enable the hardware's advanced capabilities as much as possible.
Regards,
Olivier
Zhang, Helin
2015-02-03 06:37:28 UTC
-----Original Message-----
From: Zhang, Helin
Sent: Tuesday, February 3, 2015 11:19 AM
Cc: Stephen Hemminger
Subject: RE: [dpdk-dev] [PATCH 01/17] mbuf: add definitions of unified packet
types
-----Original Message-----
Sent: Monday, February 2, 2015 7:18 PM
Cc: Stephen Hemminger
Subject: Re: [dpdk-dev] [PATCH 01/17] mbuf: add definitions of unified
packet types
Hi Helin,
Post by Zhang, Helin
Post by Olivier MATZ
Post by Helin Zhang
+/*
+ * Sixteen bits are divided into several fields to mark packet types. Note that
+ * each field is indexical.
+ * - Bit 3:0 is for tunnel types.
+ * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
+ * - Bit 10:8 is for L4 types. It can also be used for inner L4 types for
+ * tunneling packets.
+ * - Bit 13:11 is for inner L3 types.
+ * - Bit 15:14 is reserved.
Is there a reason why using this specific order?
Yes, to support ixgbe Vector PMD, outer L3 types and L4 types need
to be contiguous and in this order.
When you say "need to be", do you mean it's impossible to do in
another manner or just that it would be slower?
It was designed to be like this, otherwise, performance drop must be expected.
Post by Zhang, Helin
Post by Olivier MATZ
Also, there are 4 bits for outer L3 types and 3 bits for inner L3
types, but both of them have 6 different supported types. Is it intentional?
Yes, it is to support ixgbe Vector PMD. Contiguous 7 bits are needed, though
1 bit wasted.
To be honnest, I'm always a surprised that in dpdk we prefer having a
strange API just because it's faster or easier to do on one specific
driver (usually i40e or ixgbe). Unfortunately, trying to optimize the
API for one driver may result in making the rest of the code
(application and other drivers) slower and more complex.
Based on my understanding, 'faster' is most of DPDK customers wanted.
Otherwise, they don't need DPDK. Different hardware must have different
capabilities, I am trying to unify at least packet types to get things easier.
In your proposition, there is no inner l4_type. I consider it's as
useful as the other fields. From what I see, there are only 2 bits
left. What do you think about changing the packet type to 64 bits now?
For tunneling cases, L4_type is for inner L4 type, outer L4 type is not needed, as
it can be in tunnel type.
I can expect 64 bits are needed in the future. But for now, I don't see any
strong demand on that for currently supported hardware.
In addition, there is no free bit in the first cache line of mbuf header, mbuf
changes are needed to expand it. I'd prefer to do it later to make things easier.
Sorry, I misremembered the usage of the first cache line of mbuf. It still has some
free space. Based on this, enlarging the packet type (to 32 or 64 bits) might be good.
From an API point of view, I think it would be good to have the same
union layer_pkt_type {
struct {
uint16_t l2_type:4;
uint16_t l3_type:4;
uint16_t l4_type:4;
uint16_t tun_type:4;
};
uint16_t u16;
};
struct pkt_type {
union layer_pkt_type outer;
union layer_pkt_type inner;
};
When your application decapsulates tunnels, you can just do outer =
inner and enter into the same code.
Expanding packet_type is not easy, as there is no free bits in the first cache line.
Is there any tunnel type in inner packet? Is it a waste?
Is L2 type really needed? I don't know.
If mbuf is now not short of space, a definition like yours might be good.
But tun_type is not required for the inner packet; I'd prefer to define only
what is needed, taking the Vector PMD support into account. It seems 32 bits
might be enough, like below,
struct pkt_type {
	uint32_t l2_type:4;
	uint32_t l3_type:4;
	uint32_t l4_type:4;
	uint32_t tun_type:4;
	uint32_t inner_l2_type:4;
	uint32_t inner_l3_type:4;
	uint32_t inner_l4_type:4;
};

Regards,
Helin
Post by Zhang, Helin
Post by Olivier MATZ
Post by Helin Zhang
+ * RTE_PTYPE_L3_IPV6, RTE_PTYPE_L3_IPV6_EXT, RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP
+ * and RTE_PTYPE_L4_SCTP should be kept as below in a contiguous 7 bits.
+ *
+ * Note that L3 types values are selected for checking IPV4/IPV6 header from
+ * performance point of view. Reading annotations of RTE_ETH_IS_IPV4_HDR and
+ * RTE_ETH_IS_IPV6_HDR is needed for any future changes of L3 type values.
+ */
+#define RTE_PTYPE_UNKNOWN 0x0000 /* 0b0000000000000000 */
+/* bit 3:0 for tunnel types */
+#define RTE_PTYPE_TUNNEL_IP 0x0001 /* 0b0000000000000001 */
+#define RTE_PTYPE_TUNNEL_TCP 0x0002 /* 0b0000000000000010 */
+#define RTE_PTYPE_TUNNEL_UDP 0x0003 /* 0b0000000000000011 */
+#define RTE_PTYPE_TUNNEL_GRE 0x0004 /* 0b0000000000000100 */
+#define RTE_PTYPE_TUNNEL_VXLAN 0x0005 /* 0b0000000000000101 */
+#define RTE_PTYPE_TUNNEL_NVGRE 0x0006 /* 0b0000000000000110 */
+#define RTE_PTYPE_TUNNEL_GENEVE 0x0007 /* 0b0000000000000111 */
+#define RTE_PTYPE_TUNNEL_GRENAT 0x0008 /* 0b0000000000001000 */
+#define RTE_PTYPE_TUNNEL_GRENAT_MAC 0x0009 /* 0b0000000000001001 */
+#define RTE_PTYPE_TUNNEL_GRENAT_MACVLAN 0x000a /* 0b0000000000001010 */
+#define RTE_PTYPE_TUNNEL_MASK 0x000f /* 0b0000000000001111 */
+/* bit 7:4 for L3 types */
+#define RTE_PTYPE_L3_IPV4 0x0010 /* 0b0000000000010000 */
+#define RTE_PTYPE_L3_IPV4_EXT 0x0030 /* 0b0000000000110000 */
+#define RTE_PTYPE_L3_IPV6 0x0040 /* 0b0000000001000000 */
+#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x0090 /* 0b0000000010010000 */
+#define RTE_PTYPE_L3_IPV6_EXT 0x00c0 /* 0b0000000011000000 */
+#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x00e0 /* 0b0000000011100000 */
+#define RTE_PTYPE_L3_MASK 0x00f0 /* 0b0000000011110000 */
can we expect that when RTE_PTYPE_L3_IPV4, RTE_PTYPE_L3_IPV4_EXT or
RTE_PTYPE_L3_IPV4_EXT_UNKNOWN is set, the hardware also verified the
L3 checksum?
RTE_PTYPE_L3_IPV4 means there is NONE-EXT. Each time only one of above 3
can be set.
Post by Zhang, Helin
These bits don't indicate any checksum, checksum should be indicated by
other flags.
Post by Zhang, Helin
They are just for packet types hardware can recognized.
- if the hardware cannot recognize packet, it cannot calculate the
checksum because it does not know the packet type
- if the hardware can recognize the packet, it can verify that the
checksum is good or wrong
We cannot know how hardware works, we care about what hardware can report.
- PKT_RX_IPV4_HDR and PKT_RX_IPV4_HDR_EXT to tell if the packet is
seen as IPv4 by the hw.
- PKT_RX_IPV4_HDR(_EXT)=0 -> no hw checksum information
- PKT_RX_IPV4_HDR(_EXT)=1 and PKT_RX_IP_CKSUM_BAD=0 -> checksum is correct
- PKT_RX_IPV4_HDR(_EXT)=1 and PKT_RX_IP_CKSUM_BAD=1 -> checksum is not correct
- We cannot do the same with L4 because we have no L4 type info,
but it would be good to be able to do the same.
With your patch, you are removing the PKT_RX_IPV4_HDR and
PKT_RX_IPV4_HDR_EXT flags, but I think the above assumption about
checksum should be kept. As you are adding a L4 type info, the same
method could be applied to L4 checksums.
I think this would definitely solve the problem described by Stephen.
I think packet type and checksum are different things. They are reported by
different fields.
PKT_RX_IPV4_HDR and PKT_RX_IPV4_HDR_EXT mean packet type only,
nothing about checksum. Checksum GOOD/BAD can be reported by other flags in ol_flags.
Post by Zhang, Helin
Post by Olivier MATZ
- if packet_type is IPv4* and PKT_RX_IP_CKSUM_BAD is 0
-> checksum was checked by hw and is good
- if packet_type is IPv4* and PKT_RX_IP_CKSUM_BAD is 1
-> checksum was checked by hw and is bad
- if packet_type is not IPv4*
-> checksum was not checked by hw
I think it would solve the problem asked by Stephen
http://dpdk.org/ml/archives/dev/2015-January/011550.html
Post by Helin Zhang
+/* bit 10:8 for L4 types */
+#define RTE_PTYPE_L4_TCP 0x0100 /* 0b0000000100000000 */
+#define RTE_PTYPE_L4_UDP 0x0200 /* 0b0000001000000000 */
+#define RTE_PTYPE_L4_FRAG 0x0300 /* 0b0000001100000000 */
+#define RTE_PTYPE_L4_SCTP 0x0400 /* 0b0000010000000000 */
+#define RTE_PTYPE_L4_ICMP 0x0500 /* 0b0000010100000000 */
+#define RTE_PTYPE_L4_NONFRAG 0x0600 /* 0b0000011000000000 */
+#define RTE_PTYPE_L4_MASK 0x0700 /* 0b0000011100000000 */
Same question for L4.
Note: it would means that if a hardware is able to recognize a TCP
packet but not to verify the checksum, it has to set RTE_PTYPE_L4 to
unknown.
Post by Zhang, Helin
Post by Olivier MATZ
Post by Helin Zhang
+/* bit 13:11 for inner L3 types */
+#define RTE_PTYPE_INNER_L3_IPV4 0x0800 /* 0b0000100000000000 */
+#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x1000 /* 0b0001000000000000 */
+#define RTE_PTYPE_INNER_L3_IPV6 0x1800 /* 0b0001100000000000 */
+#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x2000 /* 0b0010000000000000 */
+#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x2800 /* 0b0010100000000000 */
+#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x3000 /* 0b0011000000000000 */
Post by Zhang, Helin
We cannot define the hardware behaviors, it just reports the
hardware recognized packet information directly to the mbuf.
Based on my experiment on i40e hardware, if a IPV4 packet with wrong
checksum, by default, the PMD driver cannot see the packet at all.
So we don't need to care about it too much!
I agree that the hardware reports some info that can be different
depending on the hw. But the role of the driver is to convert these
info into a common API with a well-defined behavior.
Yes, driver should report the received packet information to a well-defined
behavior, but not the same behavior, even for the same packet.
Capability can be queried for each port, and then the application can know the
port capability well, and know what the hardware can report, and what the
hardware cannot report.
Driver should enable the hardware with its advanced capabilities as most as possible.
Regards,
Olivier
Olivier MATZ
2015-02-03 09:12:10 UTC
Hi Helin,
Post by Zhang, Helin
Post by Zhang, Helin
When your application decapsulates tunnels, you can just do outer =
inner and enter into the same code.
Expanding packet_type is not easy, as there is no free bits in the first cache line.
Is there any tunnel type in inner packet? Is it a waste?
Is L2 type really needed? I don't know.
If it is now not short of space in mbuf, the definition as yours might be good.
But tun_type is not required for inner packet, I'd prefer to define it as needed
with taking into account the Vector PMD support. It seems 32 bits might be enough,
like below,
struct pkt_type {
uint32_t l2_type:4;
uint32_t l3_type:4;
uint32_t l4_type:4;
uint32_t tun_type:4;
uint32_t inner_l2_type:4;
uint32_t inner_l3_type:4;
uint32_t inner_l4_type:4;
}
Yes, I think a structure like this would be much better!
Maybe a union with a u32 could also help to assign the value
in one operation.
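For instance, something like this (illustrative only; the name and the C11
anonymous struct are assumptions, field layout as in your sketch):

union rte_pkt_type {
	struct {
		uint32_t l2_type:4;
		uint32_t l3_type:4;
		uint32_t l4_type:4;
		uint32_t tun_type:4;
		uint32_t inner_l2_type:4;
		uint32_t inner_l3_type:4;
		uint32_t inner_l4_type:4;
	};
	uint32_t u32;
};

union rte_pkt_type pt;
pt.u32 = 0;		/* assign/clear all fields in one operation */
pt.l3_type = 1;		/* then set individual fields as needed */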

Thanks,
Olivier
Helin Zhang
2015-01-29 03:15:50 UTC
To unify packet types among all PMDs, the packet type bit masks in
ol_flags are replaced by the unified packet type.

Signed-off-by: Helin Zhang <***@intel.com>
---
lib/librte_pmd_e1000/igb_rxtx.c | 95 ++++++++++++++++++++++++++++++++++-------
1 file changed, 80 insertions(+), 15 deletions(-)

diff --git a/lib/librte_pmd_e1000/igb_rxtx.c b/lib/librte_pmd_e1000/igb_rxtx.c
index 5c394a9..1ffb39e 100644
--- a/lib/librte_pmd_e1000/igb_rxtx.c
+++ b/lib/librte_pmd_e1000/igb_rxtx.c
@@ -602,17 +602,82 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* RX functions
*
**********************************************************************/
+#define IGB_PACKET_TYPE_IPV4 0X01
+#define IGB_PACKET_TYPE_IPV4_TCP 0X11
+#define IGB_PACKET_TYPE_IPV4_UDP 0X21
+#define IGB_PACKET_TYPE_IPV4_SCTP 0X41
+#define IGB_PACKET_TYPE_IPV4_EXT 0X03
+#define IGB_PACKET_TYPE_IPV4_EXT_SCTP 0X43
+#define IGB_PACKET_TYPE_IPV6 0X04
+#define IGB_PACKET_TYPE_IPV6_TCP 0X14
+#define IGB_PACKET_TYPE_IPV6_UDP 0X24
+#define IGB_PACKET_TYPE_IPV6_EXT 0X0C
+#define IGB_PACKET_TYPE_IPV6_EXT_TCP 0X1C
+#define IGB_PACKET_TYPE_IPV6_EXT_UDP 0X2C
+#define IGB_PACKET_TYPE_IPV4_IPV6 0X05
+#define IGB_PACKET_TYPE_IPV4_IPV6_TCP 0X15
+#define IGB_PACKET_TYPE_IPV4_IPV6_UDP 0X25
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT 0X0D
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT_TCP 0X1D
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT_UDP 0X2D
+#define IGB_PACKET_TYPE_MAX 0X80
+#define IGB_PACKET_TYPE_MASK 0X7F
+#define IGB_PACKET_TYPE_SHIFT 0X04
+static inline uint16_t
+igb_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
+{
+ static const uint16_t
+ ptype_table[IGB_PACKET_TYPE_MAX] __rte_cache_aligned = {
+ [IGB_PACKET_TYPE_IPV4] = RTE_PTYPE_L3_IPV4,
+ [IGB_PACKET_TYPE_IPV4_EXT] = RTE_PTYPE_L3_IPV4_EXT,
+ [IGB_PACKET_TYPE_IPV6] = RTE_PTYPE_L3_IPV6,
+ [IGB_PACKET_TYPE_IPV4_IPV6] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_TUNNEL_IP | RTE_PTYPE_INNER_L3_IPV6,
+ [IGB_PACKET_TYPE_IPV6_EXT] = RTE_PTYPE_L3_IPV6_EXT,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT,
+ [IGB_PACKET_TYPE_IPV4_TCP] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV6_TCP] = RTE_PTYPE_L3_IPV6 |
+ RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_TCP] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_TUNNEL_IP | RTE_PTYPE_INNER_L3_IPV6 |
+ RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV6_EXT_TCP] = RTE_PTYPE_L3_IPV6_EXT |
+ RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_UDP] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV6_UDP] = RTE_PTYPE_L3_IPV6 |
+ RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_UDP] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_TUNNEL_IP | RTE_PTYPE_INNER_L3_IPV6 |
+ RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV6_EXT_UDP] = RTE_PTYPE_L3_IPV6_EXT |
+ RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_SCTP] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_L4_SCTP,
+ [IGB_PACKET_TYPE_IPV4_EXT_SCTP] = RTE_PTYPE_L3_IPV4_EXT |
+ RTE_PTYPE_L4_SCTP,
+ };
+ if (unlikely(pkt_info & E1000_RXDADV_PKTTYPE_ETQF))
+ return RTE_PTYPE_UNKNOWN;
+
+ pkt_info = (pkt_info >> IGB_PACKET_TYPE_SHIFT) & IGB_PACKET_TYPE_MASK;
+
+ return ptype_table[pkt_info];
+}
+
static inline uint64_t
rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
{
- uint64_t pkt_flags;
-
- static uint64_t ip_pkt_types_map[16] = {
- 0, PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, PKT_RX_IPV4_HDR_EXT,
- PKT_RX_IPV6_HDR, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- };
+ uint64_t pkt_flags = ((hl_tp_rs & 0x0F) == 0) ? 0 : PKT_RX_RSS_HASH;

#if defined(RTE_LIBRTE_IEEE1588)
static uint32_t ip_pkt_etqf_map[8] = {
@@ -620,14 +685,10 @@ rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
0, 0, 0, 0,
};

- pkt_flags = (hl_tp_rs & E1000_RXDADV_PKTTYPE_ETQF) ?
- ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07] :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
-#else
- pkt_flags = (hl_tp_rs & E1000_RXDADV_PKTTYPE_ETQF) ? 0 :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
+ pkt_flags |= ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07];
#endif
- return pkt_flags | (((hl_tp_rs & 0x0F) == 0) ? 0 : PKT_RX_RSS_HASH);
+
+ return pkt_flags;
}

static inline uint64_t
@@ -802,6 +863,8 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
rxm->ol_flags = pkt_flags;
+ rxm->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.lower.
+ lo_dword.hs_rss.pkt_info);

/*
* Store the mbuf address into the next entry of the array
@@ -1036,6 +1099,8 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
first_seg->ol_flags = pkt_flags;
+ first_seg->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.
+ lower.lo_dword.hs_rss.pkt_info);

/* Prefetch data of first segment, if configured to do so. */
rte_packet_prefetch((char *)first_seg->buf_addr +
--
1.8.1.4
Helin Zhang
2015-01-29 03:15:51 UTC
To unify packet types among all PMDs, the packet type bit masks in
ol_flags are replaced by the unified packet type.

Signed-off-by: Helin Zhang <***@intel.com>
---
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 141 +++++++++++++++++++++++++++++---------
1 file changed, 107 insertions(+), 34 deletions(-)

diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index e6766b3..aefb4e9 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -866,40 +866,102 @@ end_of_tx:
* RX functions
*
**********************************************************************/
-static inline uint64_t
-rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
+#define IXGBE_PACKET_TYPE_IPV4 0X01
+#define IXGBE_PACKET_TYPE_IPV4_TCP 0X11
+#define IXGBE_PACKET_TYPE_IPV4_UDP 0X21
+#define IXGBE_PACKET_TYPE_IPV4_SCTP 0X41
+#define IXGBE_PACKET_TYPE_IPV4_EXT 0X03
+#define IXGBE_PACKET_TYPE_IPV4_EXT_SCTP 0X43
+#define IXGBE_PACKET_TYPE_IPV6 0X04
+#define IXGBE_PACKET_TYPE_IPV6_TCP 0X14
+#define IXGBE_PACKET_TYPE_IPV6_UDP 0X24
+#define IXGBE_PACKET_TYPE_IPV6_EXT 0X0C
+#define IXGBE_PACKET_TYPE_IPV6_EXT_TCP 0X1C
+#define IXGBE_PACKET_TYPE_IPV6_EXT_UDP 0X2C
+#define IXGBE_PACKET_TYPE_IPV4_IPV6 0X05
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_TCP 0X15
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_UDP 0X25
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT 0X0D
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_TCP 0X1D
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_UDP 0X2D
+#define IXGBE_PACKET_TYPE_MAX 0X80
+#define IXGBE_PACKET_TYPE_MASK 0X7F
+#define IXGBE_PACKET_TYPE_SHIFT 0X04
+static inline uint16_t
+ixgbe_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
{
- uint64_t pkt_flags;
-
- static uint64_t ip_pkt_types_map[16] = {
- 0, PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, PKT_RX_IPV4_HDR_EXT,
- PKT_RX_IPV6_HDR, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
+ static const uint16_t
+ ptype_table[IXGBE_PACKET_TYPE_MAX] __rte_cache_aligned = {
+ [IXGBE_PACKET_TYPE_IPV4] = RTE_PTYPE_L3_IPV4,
+ [IXGBE_PACKET_TYPE_IPV4_EXT] = RTE_PTYPE_L3_IPV4_EXT,
+ [IXGBE_PACKET_TYPE_IPV6] = RTE_PTYPE_L3_IPV6,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_TUNNEL_IP | RTE_PTYPE_INNER_L3_IPV6,
+ [IXGBE_PACKET_TYPE_IPV6_EXT] = RTE_PTYPE_L3_IPV6_EXT,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT,
+ [IXGBE_PACKET_TYPE_IPV4_TCP] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV6_TCP] = RTE_PTYPE_L3_IPV6 |
+ RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_TCP] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_TUNNEL_IP | RTE_PTYPE_INNER_L3_IPV6 |
+ RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV6_EXT_TCP] = RTE_PTYPE_L3_IPV6_EXT |
+ RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_UDP] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV6_UDP] = RTE_PTYPE_L3_IPV6 |
+ RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_UDP] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_TUNNEL_IP | RTE_PTYPE_INNER_L3_IPV6 |
+ RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV6_EXT_UDP] = RTE_PTYPE_L3_IPV6_EXT |
+ RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_SCTP] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_L4_SCTP,
+ [IXGBE_PACKET_TYPE_IPV4_EXT_SCTP] = RTE_PTYPE_L3_IPV4_EXT |
+ RTE_PTYPE_L4_SCTP,
};
+ if (unlikely(pkt_info & IXGBE_RXDADV_PKTTYPE_ETQF))
+ return RTE_PTYPE_UNKNOWN;

- static uint64_t ip_rss_types_map[16] = {
+ pkt_info = (pkt_info >> IXGBE_PACKET_TYPE_SHIFT) &
+ IXGBE_PACKET_TYPE_MASK;
+
+ return ptype_table[pkt_info];
+}
+
+static inline uint64_t
+ixgbe_rxd_pkt_info_to_pkt_flags(uint16_t pkt_info)
+{
+ static uint64_t ip_rss_types_map[16] __rte_cache_aligned = {
0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
PKT_RX_RSS_HASH, 0, 0, 0,
0, 0, 0, PKT_RX_FDIR,
};
-
#ifdef RTE_LIBRTE_IEEE1588
static uint64_t ip_pkt_etqf_map[8] = {
0, 0, 0, PKT_RX_IEEE1588_PTP,
0, 0, 0, 0,
};

- pkt_flags = (hl_tp_rs & IXGBE_RXDADV_PKTTYPE_ETQF) ?
- ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07] :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
+ if (likely(pkt_info & IXGBE_RXDADV_PKTTYPE_ETQF))
+ return ip_pkt_etqf_map[(pkt_info >> 4) & 0X07] |
+ ip_rss_types_map[pkt_info & 0xF];
+ else
+ return ip_rss_types_map[pkt_info & 0xF];
#else
- pkt_flags = (hl_tp_rs & IXGBE_RXDADV_PKTTYPE_ETQF) ? 0 :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
-
+ return ip_rss_types_map[pkt_info & 0xF];
#endif
- return pkt_flags | ip_rss_types_map[hl_tp_rs & 0xF];
}

static inline uint64_t
@@ -956,7 +1018,9 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
struct rte_mbuf *mb;
uint16_t pkt_len;
uint64_t pkt_flags;
- int s[LOOK_AHEAD], nb_dd;
+ int nb_dd;
+ uint32_t s[LOOK_AHEAD];
+ uint16_t pkt_info[LOOK_AHEAD];
int i, j, nb_rx = 0;


@@ -979,6 +1043,9 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
for (j = LOOK_AHEAD-1; j >= 0; --j)
s[j] = rxdp[j].wb.upper.status_error;

+ for (j = LOOK_AHEAD-1; j >= 0; --j)
+ pkt_info[j] = rxdp[j].wb.lower.lo_dword.hs_rss.pkt_info;
+
/* Compute how many status bits were set */
nb_dd = 0;
for (j = 0; j < LOOK_AHEAD; ++j)
@@ -996,12 +1063,13 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);

/* convert descriptor fields to rte mbuf flags */
- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(
- rxdp[j].wb.lower.lo_dword.data);
- /* reuse status field from scan list */
- pkt_flags |= rx_desc_status_to_pkt_flags(s[j]);
+ pkt_flags = rx_desc_status_to_pkt_flags(s[j]);
pkt_flags |= rx_desc_error_to_pkt_flags(s[j]);
+ pkt_flags |=
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info[j]);
mb->ol_flags = pkt_flags;
+ mb->packet_type =
+ ixgbe_rxd_pkt_info_to_pkt_type(pkt_info[j]);

if (likely(pkt_flags & PKT_RX_RSS_HASH))
mb->hash.rss = rxdp[j].wb.lower.hi_dword.rss;
@@ -1198,7 +1266,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
union ixgbe_adv_rx_desc rxd;
uint64_t dma_addr;
uint32_t staterr;
- uint32_t hlen_type_rss;
+ uint32_t pkt_info;
uint16_t pkt_len;
uint16_t rx_id;
uint16_t nb_rx;
@@ -1316,14 +1384,17 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->data_len = pkt_len;
rxm->port = rxq->port_id;

- hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
+ pkt_info = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.hs_rss.
+ pkt_info);
/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);

- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
- pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags = rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags |
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
rxm->ol_flags = pkt_flags;
+ rxm->packet_type = ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);

if (likely(pkt_flags & PKT_RX_RSS_HASH))
rxm->hash.rss = rxd.wb.lower.hi_dword.rss;
@@ -1382,7 +1453,7 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
union ixgbe_adv_rx_desc rxd;
uint64_t dma; /* Physical address of mbuf data buffer */
uint32_t staterr;
- uint32_t hlen_type_rss;
+ uint16_t pkt_info;
uint16_t rx_id;
uint16_t nb_rx;
uint16_t nb_hold;
@@ -1561,13 +1632,15 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
* set in the pkt_flags field.
*/
first_seg->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
- hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
- pkt_flags = (pkt_flags |
- rx_desc_status_to_pkt_flags(staterr));
- pkt_flags = (pkt_flags |
- rx_desc_error_to_pkt_flags(staterr));
+ pkt_info = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.hs_rss.
+ pkt_info);
+ pkt_flags = rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags |
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
first_seg->ol_flags = pkt_flags;
+ first_seg->packet_type =
+ ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);

if (likely(pkt_flags & PKT_RX_RSS_HASH))
first_seg->hash.rss = rxd.wb.lower.hi_dword.rss;
--
1.8.1.4
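
For readers following the decode path: the scalar RX functions above now derive the mbuf's packet_type from a single masked table lookup on the descriptor's pkt_info field. A minimal standalone sketch of that logic follows; the ETQF bit position and the RTE_PTYPE_* numeric values are illustrative stand-ins, not taken from the patch.

#include <stdint.h>
#include <stdio.h>

#define ETQF_BIT    0x8000  /* stand-in for IXGBE_RXDADV_PKTTYPE_ETQF */
#define PTYPE_SHIFT 4       /* IXGBE_PACKET_TYPE_SHIFT in the patch */
#define PTYPE_MASK  0x7F    /* IXGBE_PACKET_TYPE_MASK in the patch */

/* two-entry excerpt of the 128-entry ptype_table; 0x11 is IPV4_TCP */
static const uint16_t ptype_table[0x80] = {
	[0x01] = 0x0010,           /* assumed value of RTE_PTYPE_L3_IPV4 */
	[0x11] = 0x0010 | 0x0100,  /* assumed RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP */
};

static uint16_t decode_pkt_type(uint16_t pkt_info)
{
	if (pkt_info & ETQF_BIT)   /* EtherType filter hit: no ptype info */
		return 0;          /* i.e. RTE_PTYPE_UNKNOWN */
	return ptype_table[(pkt_info >> PTYPE_SHIFT) & PTYPE_MASK];
}

int main(void)
{
	/* bits 0-3 carry the RSS type and are dropped by the shift */
	uint16_t pkt_info = (0x11 << PTYPE_SHIFT) | 0x01;
	printf("packet_type = 0x%04x\n", decode_pkt_type(pkt_info));
	return 0;
}

One aligned table lookup replaces the chain of conditional flag assignments that the old rx_desc_hlen_type_rss_to_pkt_flags() performed, which is the main point of the table-driven design.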
Helin Zhang
2015-01-29 03:15:52 UTC
Permalink
To unify packet types among all PMDs, bit masks of packet type for
ol_flags are replaced by unified packet type for Vector PMD.

Signed-off-by: Cunming Liang <***@intel.com>
Signed-off-by: Helin Zhang <***@intel.com>
---
lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c | 39 +++++++++++++++++++----------------
1 file changed, 21 insertions(+), 18 deletions(-)

diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
index b54cb19..b3cf7dd 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
@@ -134,44 +134,35 @@ ixgbe_rxq_rearm(struct igb_rx_queue *rxq)
*/
#ifdef RTE_IXGBE_RX_OLFLAGS_ENABLE

-#define OLFLAGS_MASK ((uint16_t)(PKT_RX_VLAN_PKT | PKT_RX_IPV4_HDR |\
- PKT_RX_IPV4_HDR_EXT | PKT_RX_IPV6_HDR |\
- PKT_RX_IPV6_HDR_EXT))
-#define OLFLAGS_MASK_V (((uint64_t)OLFLAGS_MASK << 48) | \
- ((uint64_t)OLFLAGS_MASK << 32) | \
- ((uint64_t)OLFLAGS_MASK << 16) | \
- ((uint64_t)OLFLAGS_MASK))
-#define PTYPE_SHIFT (1)
+#define OLFLAGS_MASK_V (((uint64_t)PKT_RX_VLAN_PKT << 48) | \
+ ((uint64_t)PKT_RX_VLAN_PKT << 32) | \
+ ((uint64_t)PKT_RX_VLAN_PKT << 16) | \
+ ((uint64_t)PKT_RX_VLAN_PKT))
#define VTAG_SHIFT (3)

static inline void
desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
{
- __m128i ptype0, ptype1, vtag0, vtag1;
+ __m128i vtag0, vtag1;
union {
uint16_t e[4];
uint64_t dword;
} vol;

- ptype0 = _mm_unpacklo_epi16(descs[0], descs[1]);
- ptype1 = _mm_unpacklo_epi16(descs[2], descs[3]);
vtag0 = _mm_unpackhi_epi16(descs[0], descs[1]);
vtag1 = _mm_unpackhi_epi16(descs[2], descs[3]);

- ptype1 = _mm_unpacklo_epi32(ptype0, ptype1);
vtag1 = _mm_unpacklo_epi32(vtag0, vtag1);
-
- ptype1 = _mm_slli_epi16(ptype1, PTYPE_SHIFT);
vtag1 = _mm_srli_epi16(vtag1, VTAG_SHIFT);

- ptype1 = _mm_or_si128(ptype1, vtag1);
- vol.dword = _mm_cvtsi128_si64(ptype1) & OLFLAGS_MASK_V;
+ vol.dword = _mm_cvtsi128_si64(vtag1) & OLFLAGS_MASK_V;

rx_pkts[0]->ol_flags = vol.e[0];
rx_pkts[1]->ol_flags = vol.e[1];
rx_pkts[2]->ol_flags = vol.e[2];
rx_pkts[3]->ol_flags = vol.e[3];
}
+
#else
#define desc_to_olflags_v(desc, rx_pkts) do {} while (0)
#endif
@@ -204,6 +195,8 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
0 /* ignore pkt_type field */
);
__m128i dd_check, eop_check;
+ __m128i desc_mask = _mm_set_epi32(0xFFFFFFFF, 0xFFFFFFFF,
+ 0xFFFFFFFF, 0xFFFF07F0);

if (unlikely(nb_pkts < RTE_IXGBE_VPMD_RX_BURST))
return 0;
@@ -239,7 +232,8 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
0xFF, 0xFF, /* skip high 16 bits pkt_len, zero out */
13, 12, /* octet 12~13, low 16 bits pkt_len */
13, 12, /* octet 12~13, 16 bits data_len */
- 0xFF, 0xFF /* skip pkt_type field */
+ 1, /* octet 1, 8 bits pkt_type field */
+ 0 /* octet 0, 4 bits offset 4 pkt_type field */
);

/* Cache is empty -> need to scan the buffer rings, but first move
@@ -248,6 +242,7 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,

/*
* A. load 4 packet in one loop
+ * [A*. mask out 4 unused dirty fields in desc]
* B. copy 4 mbuf point from swring to rx_pkts
* C. calc the number of DD bits among the 4 packets
* [C*. extract the end-of-packet bit, if requested]
@@ -289,6 +284,14 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* B.2 copy 2 mbuf point into rx_pkts */
_mm_storeu_si128((__m128i *)&rx_pkts[pos+2], mbp2);

+ /* A* mask out 0~3 bits RSS type */
+ descs[3] = _mm_and_si128(descs[3], desc_mask);
+ descs[2] = _mm_and_si128(descs[2], desc_mask);
+
+ /* A* mask out 0~3 bits RSS type */
+ descs[1] = _mm_and_si128(descs[1], desc_mask);
+ descs[0] = _mm_and_si128(descs[0], desc_mask);
+
/* avoid compiler reorder optimization */
rte_compiler_barrier();

@@ -301,7 +304,7 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* C.1 4=>2 filter staterr info only */
sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);

- /* set ol_flags with packet type and vlan tag */
+ /* set ol_flags with vlan packet type */
desc_to_olflags_v(descs, &rx_pkts[pos]);

/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
--
1.8.1.4
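
The detail worth calling out above is desc_mask: before the shuffle copies the pkt_type bits into the mbuf, each 128-bit descriptor is ANDed with _mm_set_epi32(0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF, 0xFFFF07F0), which clears the RSS-type nibble and other stale bits in the low dword while keeping the packet type field. A scalar sketch of the same masking, as I read the patch:

#include <stdint.h>

/* scalar equivalent of descs[i] = _mm_and_si128(descs[i], desc_mask),
 * restricted to the low 32 bits, where hs_rss.pkt_info lives */
static inline uint32_t mask_rx_desc_lo_dword(uint32_t lo_dword)
{
	/* keep bits 4-10 (packet type) and the upper 16 bits;
	 * drop bits 0-3 (RSS type) and bits 11-15 (unused/dirty) */
	return lo_dword & 0xFFFF07F0;
}

Without this step the shuffled-in packet_type would carry whatever RSS-type bits the hardware wrote, which is the "unused dirty field" the A* comment refers to.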
Bruce Richardson
2015-01-29 23:30:28 UTC
Permalink
Post by Helin Zhang
To unify packet types among all PMDs, bit masks of packet type for
ol_flags are replaced by unified packet type for Vector PMD.
Two suggestions on the commit log:
1. Can you add scalar and vector into the titles to make it clear how this
patch and the previous ones differ?
2. Can you add a note calling out performance impacts for this patch? If there are
no performance impacts, then please note that for reviewers.

/Bruce
Liang, Cunming
2015-01-29 23:52:03 UTC
Permalink
-----Original Message-----
From: Richardson, Bruce
Sent: Thursday, January 29, 2015 4:30 PM
To: Zhang, Helin
Subject: Re: [PATCH 04/17] ixgbe: support of unified packet type
Post by Helin Zhang
To unify packet types among all PMDs, bit masks of packet type for
ol_flags are replaced by unified packet type for Vector PMD.
1. Can you add scalar and vector into the titles to make it clear how this
patch and the previous ones differ?
2. Can you add a note calling out performance impacts for this patch? If there are
no performance impacts, then please note that for reviewers.
[Liang, Cunming] Accepted; will update it in v2.
On performance: we lose 1 cycle per packet in the 4x10GE I/O forwarding loopback unit test.
Bruce Richardson
2015-01-30 03:39:34 UTC
Permalink
Post by Liang, Cunming
Post by Helin Zhang
To unify packet types among all PMDs, bit masks of packet type for
ol_flags are replaced by unified packet type for Vector PMD.
1. Can you add scalar and vector into the titles to make it clear how this
patch and the previous ones differ?
2. Can you add a note calling out performance impacts for this patch? If there are
no performance impacts, then please note that for reviewers.
[Liang, Cunming] Accepted; will update it in v2.
On performance: we lose 1 cycle per packet in the 4x10GE I/O forwarding loopback unit test.
Good to know, thanks.

/Bruce
Zhang, Helin
2015-01-30 06:09:42 UTC
Permalink
Hi Bruce
-----Original Message-----
From: Richardson, Bruce
Sent: Friday, January 30, 2015 7:30 AM
To: Zhang, Helin
Subject: Re: [PATCH 04/17] ixgbe: support of unified packet type
Post by Helin Zhang
To unify packet types among all PMDs, bit masks of packet type for
ol_flags are replaced by unified packet type for Vector PMD.
1. Can you add scalar and vector into the titles to make it clear how this
patch and the previous ones differ?
2. Can you add a note calling out performance impacts for this patch? If there are
no performance impacts, then please note that for reviewers.
OK. That will be in the v2 patches. Thanks for the good comments!

Regards,
Helin
Helin Zhang
2015-01-29 03:15:53 UTC
Permalink
To unify packet types among all PMDs, bit masks of packet type for
ol_flags are replaced by unified packet type.

Signed-off-by: Helin Zhang <***@intel.com>
Signed-off-by: Jijiang Liu <***@intel.com>
---
lib/librte_pmd_i40e/i40e_rxtx.c | 778 ++++++++++++++++++++++++++--------------
1 file changed, 504 insertions(+), 274 deletions(-)

diff --git a/lib/librte_pmd_i40e/i40e_rxtx.c b/lib/librte_pmd_i40e/i40e_rxtx.c
index 2beae3c..68029c3 100644
--- a/lib/librte_pmd_i40e/i40e_rxtx.c
+++ b/lib/librte_pmd_i40e/i40e_rxtx.c
@@ -146,272 +146,503 @@ i40e_rxd_error_to_pkt_flags(uint64_t qword)
return flags;
}

-/* Translate pkt types to pkt flags */
-static inline uint64_t
-i40e_rxd_ptype_to_pkt_flags(uint64_t qword)
+/* See the hardware datasheet for the detailed meaning of each value */
+static inline uint16_t
+i40e_rxd_pkt_type_mapping(uint8_t ptype)
{
- uint8_t ptype = (uint8_t)((qword & I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
- static const uint64_t ip_ptype_map[I40E_MAX_PKT_TYPE] = {
- 0, /* PTYPE 0 */
- 0, /* PTYPE 1 */
- 0, /* PTYPE 2 */
- 0, /* PTYPE 3 */
- 0, /* PTYPE 4 */
- 0, /* PTYPE 5 */
- 0, /* PTYPE 6 */
- 0, /* PTYPE 7 */
- 0, /* PTYPE 8 */
- 0, /* PTYPE 9 */
- 0, /* PTYPE 10 */
- 0, /* PTYPE 11 */
- 0, /* PTYPE 12 */
- 0, /* PTYPE 13 */
- 0, /* PTYPE 14 */
- 0, /* PTYPE 15 */
- 0, /* PTYPE 16 */
- 0, /* PTYPE 17 */
- 0, /* PTYPE 18 */
- 0, /* PTYPE 19 */
- 0, /* PTYPE 20 */
- 0, /* PTYPE 21 */
- PKT_RX_IPV4_HDR, /* PTYPE 22 */
- PKT_RX_IPV4_HDR, /* PTYPE 23 */
- PKT_RX_IPV4_HDR, /* PTYPE 24 */
- 0, /* PTYPE 25 */
- PKT_RX_IPV4_HDR, /* PTYPE 26 */
- PKT_RX_IPV4_HDR, /* PTYPE 27 */
- PKT_RX_IPV4_HDR, /* PTYPE 28 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 29 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 30 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 31 */
- 0, /* PTYPE 32 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 33 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 34 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 35 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 36 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 37 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 38 */
- 0, /* PTYPE 39 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 40 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 41 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 42 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 43 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 44 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 45 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 46 */
- 0, /* PTYPE 47 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 48 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 49 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 50 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 51 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 52 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 53 */
- 0, /* PTYPE 54 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 55 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 56 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 57 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 58 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 59 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 60 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 61 */
- 0, /* PTYPE 62 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 63 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 64 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 65 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 66 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 67 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 68 */
- 0, /* PTYPE 69 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 70 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 71 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 72 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 73 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 74 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 75 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 76 */
- 0, /* PTYPE 77 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 78 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 79 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 80 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 81 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 82 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 83 */
- 0, /* PTYPE 84 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 85 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 86 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 87 */
- PKT_RX_IPV6_HDR, /* PTYPE 88 */
- PKT_RX_IPV6_HDR, /* PTYPE 89 */
- PKT_RX_IPV6_HDR, /* PTYPE 90 */
- 0, /* PTYPE 91 */
- PKT_RX_IPV6_HDR, /* PTYPE 92 */
- PKT_RX_IPV6_HDR, /* PTYPE 93 */
- PKT_RX_IPV6_HDR, /* PTYPE 94 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 95 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 96 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 97 */
- 0, /* PTYPE 98 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 99 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 100 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 101 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 102 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 103 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 104 */
- 0, /* PTYPE 105 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 106 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 107 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 108 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 109 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 110 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 111 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 112 */
- 0, /* PTYPE 113 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 114 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 115 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 116 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 117 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 118 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 119 */
- 0, /* PTYPE 120 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 121 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 122 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 123 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 124 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 125 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 126 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 127 */
- 0, /* PTYPE 128 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 129 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 130 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 131 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 132 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 133 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 134 */
- 0, /* PTYPE 135 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 136 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 137 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 138 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 139 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 140 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 141 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 142 */
- 0, /* PTYPE 143 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 144 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 145 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 146 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 147 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 148 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 149 */
- 0, /* PTYPE 150 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 151 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 152 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 153 */
- 0, /* PTYPE 154 */
- 0, /* PTYPE 155 */
- 0, /* PTYPE 156 */
- 0, /* PTYPE 157 */
- 0, /* PTYPE 158 */
- 0, /* PTYPE 159 */
- 0, /* PTYPE 160 */
- 0, /* PTYPE 161 */
- 0, /* PTYPE 162 */
- 0, /* PTYPE 163 */
- 0, /* PTYPE 164 */
- 0, /* PTYPE 165 */
- 0, /* PTYPE 166 */
- 0, /* PTYPE 167 */
- 0, /* PTYPE 168 */
- 0, /* PTYPE 169 */
- 0, /* PTYPE 170 */
- 0, /* PTYPE 171 */
- 0, /* PTYPE 172 */
- 0, /* PTYPE 173 */
- 0, /* PTYPE 174 */
- 0, /* PTYPE 175 */
- 0, /* PTYPE 176 */
- 0, /* PTYPE 177 */
- 0, /* PTYPE 178 */
- 0, /* PTYPE 179 */
- 0, /* PTYPE 180 */
- 0, /* PTYPE 181 */
- 0, /* PTYPE 182 */
- 0, /* PTYPE 183 */
- 0, /* PTYPE 184 */
- 0, /* PTYPE 185 */
- 0, /* PTYPE 186 */
- 0, /* PTYPE 187 */
- 0, /* PTYPE 188 */
- 0, /* PTYPE 189 */
- 0, /* PTYPE 190 */
- 0, /* PTYPE 191 */
- 0, /* PTYPE 192 */
- 0, /* PTYPE 193 */
- 0, /* PTYPE 194 */
- 0, /* PTYPE 195 */
- 0, /* PTYPE 196 */
- 0, /* PTYPE 197 */
- 0, /* PTYPE 198 */
- 0, /* PTYPE 199 */
- 0, /* PTYPE 200 */
- 0, /* PTYPE 201 */
- 0, /* PTYPE 202 */
- 0, /* PTYPE 203 */
- 0, /* PTYPE 204 */
- 0, /* PTYPE 205 */
- 0, /* PTYPE 206 */
- 0, /* PTYPE 207 */
- 0, /* PTYPE 208 */
- 0, /* PTYPE 209 */
- 0, /* PTYPE 210 */
- 0, /* PTYPE 211 */
- 0, /* PTYPE 212 */
- 0, /* PTYPE 213 */
- 0, /* PTYPE 214 */
- 0, /* PTYPE 215 */
- 0, /* PTYPE 216 */
- 0, /* PTYPE 217 */
- 0, /* PTYPE 218 */
- 0, /* PTYPE 219 */
- 0, /* PTYPE 220 */
- 0, /* PTYPE 221 */
- 0, /* PTYPE 222 */
- 0, /* PTYPE 223 */
- 0, /* PTYPE 224 */
- 0, /* PTYPE 225 */
- 0, /* PTYPE 226 */
- 0, /* PTYPE 227 */
- 0, /* PTYPE 228 */
- 0, /* PTYPE 229 */
- 0, /* PTYPE 230 */
- 0, /* PTYPE 231 */
- 0, /* PTYPE 232 */
- 0, /* PTYPE 233 */
- 0, /* PTYPE 234 */
- 0, /* PTYPE 235 */
- 0, /* PTYPE 236 */
- 0, /* PTYPE 237 */
- 0, /* PTYPE 238 */
- 0, /* PTYPE 239 */
- 0, /* PTYPE 240 */
- 0, /* PTYPE 241 */
- 0, /* PTYPE 242 */
- 0, /* PTYPE 243 */
- 0, /* PTYPE 244 */
- 0, /* PTYPE 245 */
- 0, /* PTYPE 246 */
- 0, /* PTYPE 247 */
- 0, /* PTYPE 248 */
- 0, /* PTYPE 249 */
- 0, /* PTYPE 250 */
- 0, /* PTYPE 251 */
- 0, /* PTYPE 252 */
- 0, /* PTYPE 253 */
- 0, /* PTYPE 254 */
- 0, /* PTYPE 255 */
+ static const uint16_t ptype_table[UINT8_MAX + 1] __rte_cache_aligned = {
+ /* [0] - [21] reserved */
+
+ /* Non tunneled IPv4 */
+ [22] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [23] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [24] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [25] reserved */
+ [26] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [27] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [28] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv4 --> IPv4 */
+ [29] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [30] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [31] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [32] reserved */
+ [33] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [34] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [35] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv4 --> IPv6 */
+ [36] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [37] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [38] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [39] reserved */
+ [40] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [41] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [42] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN */
+ [43] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> IPv4 */
+ [44] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [45] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [46] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [47] reserved */
+ [48] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [49] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [50] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> IPv6 */
+ [51] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [52] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [53] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [54] reserved */
+ [55] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [56] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [57] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC */
+ [58] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+ [59] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [60] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [61] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [62] reserved */
+ [63] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [64] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [65] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+ [66] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [67] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [68] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [69] reserved */
+ [70] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [71] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [72] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+ [73] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+ [74] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [75] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [76] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [77] reserved */
+ [78] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [79] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [80] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+ [81] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [82] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [83] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [84] reserved */
+ [85] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [86] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [87] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* Non tunneled IPv6 */
+ [88] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [89] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [90] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [91] reserved */
+ [92] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [93] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [94] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv6 --> IPv4 */
+ [95] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [96] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [97] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [98] reserved */
+ [99] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [100] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [101] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv6 --> IPv6 */
+ [102] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [103] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [104] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [105] reserved */
+ [106] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [107] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [108] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN */
+ [109] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> IPv4 */
+ [110] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [111] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [112] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [113] reserved */
+ [114] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [115] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [116] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> IPv6 */
+ [117] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [118] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [119] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [120] reserved */
+ [121] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [122] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [123] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC */
+ [124] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+ [125] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [126] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [127] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [128] reserved */
+ [129] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [130] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [131] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+ [132] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [133] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [134] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [135] reserved */
+ [136] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [137] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [138] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+ [139] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+ [140] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [141] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [142] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [143] reserved */
+ [144] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [145] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [146] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+ [147] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [148] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [149] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [150] reserved */
+ [151] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [152] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [153] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* All others reserved */
};

- return ip_ptype_map[ptype];
+ return ptype_table[ptype];
}

#define I40E_RX_DESC_EXT_STATUS_FLEXBH_MASK 0x03
@@ -708,11 +939,11 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
rxdp[j].wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);

- mb->packet_type = (uint16_t)((qword1 &
- I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
+ mb->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >>
+ I40E_RXD_QW1_PTYPE_SHIFT));
if (pkt_flags & PKT_RX_RSS_HASH)
mb->hash.rss = rte_le_to_cpu_32(\
rxdp[j].wb.qword0.hi_dword.rss);
@@ -951,9 +1182,9 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
- rxm->packet_type = (uint16_t)((qword1 & I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
+ rxm->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
@@ -1110,10 +1341,9 @@ i40e_recv_scattered_pkts(void *rx_queue,
rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
- first_seg->packet_type = (uint16_t)((qword1 &
- I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
+ first_seg->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
--
1.8.1.4
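
All three RX paths above now share the same two-step decode: extract the 8-bit PTYPE index from qword1 of the descriptor, then map it through the 256-entry table. A minimal sketch follows; the concrete shift and mask values are assumptions mirroring I40E_RXD_QW1_PTYPE_SHIFT and I40E_RXD_QW1_PTYPE_MASK rather than values quoted from the source.

#include <stdint.h>

#define RXD_QW1_PTYPE_SHIFT 30  /* assumed I40E_RXD_QW1_PTYPE_SHIFT */
#define RXD_QW1_PTYPE_MASK  (0xFFULL << RXD_QW1_PTYPE_SHIFT)

/* the 256-entry table built in i40e_rxd_pkt_type_mapping() */
extern const uint16_t ptype_table[UINT8_MAX + 1];

static inline uint16_t rxd_qword1_to_packet_type(uint64_t qword1)
{
	uint8_t ptype = (uint8_t)((qword1 & RXD_QW1_PTYPE_MASK) >>
				  RXD_QW1_PTYPE_SHIFT);
	return ptype_table[ptype];  /* e.g. ptype 26 -> IPv4 | TCP */
}

Note the table must hold UINT8_MAX + 1 entries, since a uint8_t index can reach 255; sizing it at UINT8_MAX would let index 255 read one element past the end.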
Helin Zhang
2015-01-29 03:15:54 UTC
Permalink
To unify packet types among all PMDs, bit masks of packet type for
ol_flags are replaced by unified packet type.

Signed-off-by: Helin Zhang <***@intel.com>
---
lib/librte_pmd_bond/rte_eth_bond_pmd.c | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/lib/librte_pmd_bond/rte_eth_bond_pmd.c b/lib/librte_pmd_bond/rte_eth_bond_pmd.c
index 8b80297..acd8e77 100644
--- a/lib/librte_pmd_bond/rte_eth_bond_pmd.c
+++ b/lib/librte_pmd_bond/rte_eth_bond_pmd.c
@@ -319,12 +319,11 @@ xmit_l23_hash(const struct rte_mbuf *buf, uint8_t slave_count)

hash = ether_hash(eth_hdr);

- if (buf->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(buf->packet_type)) {
struct ipv4_hdr *ipv4_hdr = (struct ipv4_hdr *)
((char *)(eth_hdr + 1) + vlan_offset);
l3hash = ipv4_hash(ipv4_hdr);
-
- } else if (buf->ol_flags & PKT_RX_IPV6_HDR) {
+ } else if (RTE_ETH_IS_IPV6_HDR(buf->packet_type)) {
struct ipv6_hdr *ipv6_hdr = (struct ipv6_hdr *)
((char *)(eth_hdr + 1) + vlan_offset);
l3hash = ipv6_hash(ipv6_hdr);
@@ -346,7 +345,7 @@ xmit_l34_hash(const struct rte_mbuf *buf, uint8_t slave_count)
struct tcp_hdr *tcp_hdr = NULL;
uint32_t hash, l3hash = 0, l4hash = 0;

- if (buf->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(buf->packet_type)) {
struct ipv4_hdr *ipv4_hdr = (struct ipv4_hdr *)
((char *)(eth_hdr + 1) + vlan_offset);
size_t ip_hdr_offset;
@@ -365,7 +364,7 @@ xmit_l34_hash(const struct rte_mbuf *buf, uint8_t slave_count)
ip_hdr_offset);
l4hash = HASH_L4_PORTS(udp_hdr);
}
- } else if (buf->ol_flags & PKT_RX_IPV6_HDR) {
+ } else if (RTE_ETH_IS_IPV6_HDR(buf->packet_type)) {
struct ipv6_hdr *ipv6_hdr = (struct ipv6_hdr *)
((char *)(eth_hdr + 1) + vlan_offset);
l3hash = ipv6_hash(ipv6_hdr);
--
1.8.1.4
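
The RTE_ETH_IS_IPV4_HDR()/RTE_ETH_IS_IPV6_HDR() macros test the L3 portion of the mbuf's packet_type, so they presumably also match the EXT variants that the single PKT_RX_IPV4_HDR flag did not cover. A small usage sketch of the new dispatch style; the handle_* callbacks are hypothetical application code.

#include <rte_mbuf.h>

static void handle_ipv4(struct rte_mbuf *m);
static void handle_ipv6(struct rte_mbuf *m);
static void handle_other(struct rte_mbuf *m);

static void classify_rx_mbuf(struct rte_mbuf *m)
{
	if (RTE_ETH_IS_IPV4_HDR(m->packet_type))
		handle_ipv4(m);   /* plain IPv4 and IPv4_EXT alike */
	else if (RTE_ETH_IS_IPV6_HDR(m->packet_type))
		handle_ipv6(m);
	else
		handle_other(m);  /* non-IP or unknown packet types */
}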
Declan Doherty
2015-02-11 15:01:11 UTC
Permalink
Post by Helin Zhang
To unify packet types among all PMDs, bit masks of packet type for
ol_flags are replaced by unified packet type.
Hey Helin,
this patch should no longer be necessary, as commit
bffc9b35e3acd70895b73616c850d8d37fe5732e removed all references to
ol_flags in the link bonding code.

Declan
Zhang, Helin
2015-02-13 00:36:22 UTC
Permalink
Hi Declan

Yes, I got it. I already have a v2 patch, which no longer touches the bond code. Thanks!

Regards,
Helin
-----Original Message-----
From: Doherty, Declan
Sent: Wednesday, February 11, 2015 11:01 PM
Subject: Re: [dpdk-dev] [PATCH 06/17] bond: support of unified packet type
Post by Helin Zhang
To unify packet types among all PMDs, bit masks of packet type for
ol_flags are replaced by unified packet type.
Hey Helin,
this patch should no longer be necessary, as commit
bffc9b35e3acd70895b73616c850d8d37fe5732e removed all references to
ol_flags in the link bonding code.
Declan
Helin Zhang
2015-01-29 03:15:55 UTC
Permalink
To unify packet types among all PMDs, bit masks of packet type for
ol_flags are replaced by unified packet type.

Signed-off-by: Helin Zhang <***@intel.com>
---
lib/librte_pmd_enic/enic_main.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/lib/librte_pmd_enic/enic_main.c b/lib/librte_pmd_enic/enic_main.c
index 48fdca2..9acba9a 100644
--- a/lib/librte_pmd_enic/enic_main.c
+++ b/lib/librte_pmd_enic/enic_main.c
@@ -423,7 +423,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->pkt_len = bytes_written;

if (ipv4) {
- rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |= PKT_RX_IP_CKSUM_BAD;
@@ -432,7 +432,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->ol_flags |= PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
- rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
} else {
/* Header split */
if (sop && !eop) {
@@ -445,7 +445,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
*rx_pkt_bucket = rx_pkt;
rx_pkt->pkt_len = bytes_written;
if (ipv4) {
- rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |=
@@ -457,13 +457,14 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
- rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
} else {
/* Payload */
hdr_rx_pkt = *rx_pkt_bucket;
hdr_rx_pkt->pkt_len += bytes_written;
if (ipv4) {
- hdr_rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+ hdr_rx_pkt->packet_type =
+ RTE_PTYPE_L3_IPV4;
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
hdr_rx_pkt->ol_flags |=
@@ -475,7 +476,8 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
- hdr_rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+ hdr_rx_pkt->packet_type =
+ RTE_PTYPE_L3_IPV6;

}
}
--
1.8.1.4
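A side note on the hunks above (illustration only, not part of the patch):
ol_flags is a bit field that the driver ORs status bits into, whereas
packet_type is a plain value assigned once per packet, which is why the
'|=' becomes '='. A minimal sketch, assuming the RTE_PTYPE_* values from
patch 01 of this series, with ipv4/ipv6/csum_ok standing in for the bits
the real driver decodes from the RX descriptor:

	/* sketch: packet_type is assigned as a value; error status is
	 * still OR-ed into ol_flags as before */
	static void
	set_rx_ptype(struct rte_mbuf *rx_pkt, int ipv4, int ipv6, int csum_ok)
	{
		if (ipv4) {
			rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
			if (!csum_ok)
				rx_pkt->ol_flags |= PKT_RX_IP_CKSUM_BAD;
		} else if (ipv6) {
			rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
		}
	}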
Helin Zhang
2015-01-29 03:15:56 UTC
Permalink
To unify packet types among all PMDs, bit masks of packet type for
ol_flags are replaced by unified packet type.

Signed-off-by: Helin Zhang <***@intel.com>
---
lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
index 8425f32..c85ebd8 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
@@ -650,9 +650,9 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
struct ipv4_hdr *ip = (struct ipv4_hdr *)(eth + 1);

if (((ip->version_ihl & 0xf) << 2) > (int)sizeof(struct ipv4_hdr))
- rxm->ol_flags |= PKT_RX_IPV4_HDR_EXT;
+ rxm->packet_type = RTE_PTYPE_L3_IPV4_EXT;
else
- rxm->ol_flags |= PKT_RX_IPV4_HDR;
+ rxm->packet_type = RTE_PTYPE_L3_IPV4;

if (!rcd->cnc) {
if (!rcd->ipc)
--
1.8.1.4
Helin Zhang
2015-01-29 03:15:57 UTC
Permalink
To unify packet types among all PMDs, bit masks and relevant macros
of packet type for ol_flags are replaced by unified packet type and
relevant macros.

Signed-off-by: Helin Zhang <***@intel.com>
---
app/test-pipeline/pipeline_hash.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/app/test-pipeline/pipeline_hash.c b/app/test-pipeline/pipeline_hash.c
index 4598ad4..db650c8 100644
--- a/app/test-pipeline/pipeline_hash.c
+++ b/app/test-pipeline/pipeline_hash.c
@@ -459,14 +459,14 @@ app_main_loop_rx_metadata(void) {
signature = RTE_MBUF_METADATA_UINT32_PTR(m, 0);
key = RTE_MBUF_METADATA_UINT8_PTR(m, 32);

- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
ip_hdr = (struct ipv4_hdr *)
&m_data[sizeof(struct ether_hdr)];
ip_dst = ip_hdr->dst_addr;

k32 = (uint32_t *) key;
k32[0] = ip_dst & 0xFFFFFF00;
- } else {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
ipv6_hdr = (struct ipv6_hdr *)
&m_data[sizeof(struct ether_hdr)];
ipv6_dst = ipv6_hdr->dst_addr;
--
1.8.1.4
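Note the behavioural fix folded into this hunk: the old bare 'else' treated
every non-IPv4 packet (ARP, LLDP, ...) as IPv6, while the new 'else if' only
takes that branch when the PMD actually classified the packet as IPv6. As an
illustration (not part of the patch), the resulting dispatch shape, assuming
the macros from patch 01; the handlers are hypothetical:

	static void
	classify(struct rte_mbuf *m)
	{
		if (RTE_ETH_IS_IPV4_HDR(m->packet_type))
			handle_ipv4(m);		/* hypothetical handler */
		else if (RTE_ETH_IS_IPV6_HDR(m->packet_type))
			handle_ipv6(m);		/* hypothetical handler */
		/* else: neither IPv4 nor IPv6, so the key is left
		 * untouched - something the old bare else could not do */
	}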
Helin Zhang
2015-01-29 03:15:58 UTC
Permalink
To unify packet types among all PMDs, bit masks and relevant macros
of packet type for ol_flags are replaced by unified packet type and
relevant macros.

Signed-off-by: Helin Zhang <***@intel.com>
---
app/test-pmd/csumonly.c | 6 +++---
app/test-pmd/rxonly.c | 9 +++------
2 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 41711fd..5e08272 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -319,7 +319,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
uint16_t nb_tx;
uint16_t i;
uint64_t ol_flags;
- uint16_t testpmd_ol_flags;
+ uint16_t testpmd_ol_flags, packet_type;
uint8_t l4_proto, l4_tun_len = 0;
uint16_t ethertype = 0, outer_ethertype = 0;
uint16_t l2_len = 0, l3_len = 0, l4_len = 0;
@@ -362,6 +362,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
tunnel = 0;
l4_tun_len = 0;
m = pkts_burst[i];
+ packet_type = m->packet_type;

/* Update the L3/L4 checksum error packet statistics */
rx_bad_ip_csum += ((m->ol_flags & PKT_RX_IP_CKSUM_BAD) != 0);
@@ -387,8 +388,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)

/* currently, this flag is set by i40e only if the
* packet is vxlan */
- } else if (m->ol_flags & (PKT_RX_TUNNEL_IPV4_HDR |
- PKT_RX_TUNNEL_IPV6_HDR))
+ } else if (RTE_ETH_IS_TUNNEL_PKT(packet_type))
tunnel = 1;

if (tunnel == 1) {
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index fdfe990..8eb68c4 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -92,7 +92,7 @@ pkt_burst_receive(struct fwd_stream *fs)
uint64_t ol_flags;
uint16_t nb_rx;
uint16_t i, packet_type;
- uint64_t is_encapsulation;
+ uint16_t is_encapsulation;

#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
uint64_t start_tsc;
@@ -135,10 +135,7 @@ pkt_burst_receive(struct fwd_stream *fs)
eth_type = RTE_BE_TO_CPU_16(eth_hdr->ether_type);
ol_flags = mb->ol_flags;
packet_type = mb->packet_type;
-
- is_encapsulation = ol_flags & (PKT_RX_TUNNEL_IPV4_HDR |
- PKT_RX_TUNNEL_IPV6_HDR);
-
+ is_encapsulation = RTE_ETH_IS_TUNNEL_PKT(packet_type);
print_ether_addr(" src=", &eth_hdr->s_addr);
print_ether_addr(" - dst=", &eth_hdr->d_addr);
printf(" - type=0x%04x - length=%u - nb_segs=%d",
@@ -174,7 +171,7 @@ pkt_burst_receive(struct fwd_stream *fs)
l2_len = sizeof(struct ether_hdr);

/* Do not support ipv4 option field */
- if (ol_flags & PKT_RX_TUNNEL_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(packet_type)) {
l3_len = sizeof(struct ipv4_hdr);
ipv4_hdr = (struct ipv4_hdr *) (rte_pktmbuf_mtod(mb,
unsigned char *) + l2_len);
--
1.8.1.4
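For reference (illustration only): RTE_ETH_IS_TUNNEL_PKT, as defined in the
v2 of patch 01 later in this thread, is just a mask test on the tunnel
sub-field, so any recognized tunnel type - IP-in-IP, GRE, VXLAN, ... - makes
it nonzero, which is why one test replaces the old pair of
PKT_RX_TUNNEL_IPV4_HDR/PKT_RX_TUNNEL_IPV6_HDR checks:

	/* sketch: the macro matches any tunnel type, not one in particular */
	uint32_t pt = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_VXLAN |
		      RTE_PTYPE_INNER_L3_IPV6;
	/* (pt & RTE_PTYPE_TUNNEL_MASK) == RTE_PTYPE_TUNNEL_VXLAN, so: */
	int is_encap = RTE_ETH_IS_TUNNEL_PKT(pt) != 0;	/* 1 here */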
Helin Zhang
2015-01-29 03:15:59 UTC
Permalink
To unify packet types among all PMDs, bit masks and relevant macros
of packet type for ol_flags are replaced by unified packet type and
relevant macros.

Signed-off-by: Helin Zhang <***@intel.com>
---
app/test/packet_burst_generator.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/app/test/packet_burst_generator.c b/app/test/packet_burst_generator.c
index 4a89663..0a936ea 100644
--- a/app/test/packet_burst_generator.c
+++ b/app/test/packet_burst_generator.c
@@ -258,18 +258,16 @@ nomore_mbuf:
pkt->vlan_tci = ETHER_TYPE_IPv4;
pkt->l3_len = sizeof(struct ipv4_hdr);

+ pkt->packet_type = RTE_PTYPE_L3_IPV4;
if (vlan_enabled)
- pkt->ol_flags = PKT_RX_IPV4_HDR | PKT_RX_VLAN_PKT;
- else
- pkt->ol_flags = PKT_RX_IPV4_HDR;
+ pkt->ol_flags = PKT_RX_VLAN_PKT;
} else {
pkt->vlan_tci = ETHER_TYPE_IPv6;
pkt->l3_len = sizeof(struct ipv6_hdr);

+ pkt->packet_type = RTE_PTYPE_L3_IPV6;
if (vlan_enabled)
- pkt->ol_flags = PKT_RX_IPV6_HDR | PKT_RX_VLAN_PKT;
- else
- pkt->ol_flags = PKT_RX_IPV6_HDR;
+ pkt->ol_flags = PKT_RX_VLAN_PKT;
}

pkts_burst[nb_pkt] = pkt;
--
1.8.1.4
Helin Zhang
2015-01-29 03:16:00 UTC
Permalink
To unify packet types among all PMDs, bit masks and relevant macros
of packet type for ol_flags are replaced by unified packet type and
relevant macros.

Signed-off-by: Helin Zhang <***@intel.com>
---
examples/ip_fragmentation/main.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index eac5427..152844e 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -286,7 +286,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
len = qconf->tx_mbufs[port_out].len;

/* if this is an IPv4 packet */
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
struct ipv4_hdr *ip_hdr;
uint32_t ip_dst;
/* Read the lookup key (i.e. ip_dst) from the input packet */
@@ -320,9 +320,8 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
if (unlikely (len2 < 0))
return;
}
- }
- /* if this is an IPv6 packet */
- else if (m->ol_flags & PKT_RX_IPV6_HDR) {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+ /* if this is an IPv6 packet */
struct ipv6_hdr *ip_hdr;

ipv6 = 1;
--
1.8.1.4
Helin Zhang
2015-01-29 03:16:02 UTC
Permalink
To unify packet types among all PMDs, bit masks and relevant macros
of packet type for ol_flags are replaced by unified packet type and
relevant macros.

Signed-off-by: Helin Zhang <***@intel.com>
---
examples/l3fwd-acl/main.c | 19 ++++++-------------
1 file changed, 6 insertions(+), 13 deletions(-)

diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index f1f7601..af70ccd 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -651,9 +651,7 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
struct ipv4_hdr *ipv4_hdr;
struct rte_mbuf *pkt = pkts_in[index];

- int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
-
- if (type == PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {

ipv4_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(pkt,
unsigned char *) + sizeof(struct ether_hdr));
@@ -674,8 +672,7 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
rte_pktmbuf_free(pkt);
}

- } else if (type == PKT_RX_IPV6_HDR) {
-
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -693,17 +690,13 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
{
struct rte_mbuf *pkt = pkts_in[index];

- int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
-
- if (type == PKT_RX_IPV4_HDR) {
-
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
/* Fill acl structure */
acl->data_ipv4[acl->num_ipv4] = MBUF_IPV4_2PROTO(pkt);
acl->m_ipv4[(acl->num_ipv4)++] = pkt;


- } else if (type == PKT_RX_IPV6_HDR) {
-
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -751,9 +744,9 @@ send_one_packet(struct rte_mbuf *m, uint32_t res)
/* in the ACL list, drop it */
#ifdef L3FWDACL_DEBUG
if ((res & ACL_DENY_SIGNATURE) != 0) {
- if (m->ol_flags & PKT_RX_IPV4_HDR)
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type))
dump_acl4_rule(m, res);
- else
+ else if (RTE_ETH_IS_IPV6_HDR(m->packet_type))
dump_acl6_rule(m, res);
}
#endif
--
1.8.1.4
Helin Zhang
2015-01-29 03:16:01 UTC
Permalink
To unify packet types among all PMDs, bit masks and relevant macros
of packet type for ol_flags are replaced by unified packet type and
relevant macros.

Signed-off-by: Helin Zhang <***@intel.com>
---
examples/ip_reassembly/main.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 8492153..5ef2135 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -357,7 +357,7 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
dst_port = portid;

/* if packet is IPv4 */
- if (m->ol_flags & (PKT_RX_IPV4_HDR)) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
struct ipv4_hdr *ip_hdr;
uint32_t ip_dst;

@@ -397,9 +397,8 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
}

eth_hdr->ether_type = rte_be_to_cpu_16(ETHER_TYPE_IPv4);
- }
- /* if packet is IPv6 */
- else if (m->ol_flags & (PKT_RX_IPV6_HDR | PKT_RX_IPV6_HDR_EXT)) {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+ /* if packet is IPv6 */
struct ipv6_extension_fragment *frag_hdr;
struct ipv6_hdr *ip_hdr;
--
1.8.1.4
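One detail worth spelling out (illustration only): the old code had to OR two
flags, PKT_RX_IPV6_HDR | PKT_RX_IPV6_HDR_EXT, while a single
RTE_ETH_IS_IPV6_HDR() test now covers both cases. That works because the L3
type values (see the v2 of patch 01 later in the thread) are chosen so that
every IPv6 variant keeps bit 6 set:

	/* sketch: one AND test covers plain and extended IPv6 headers */
	/* RTE_PTYPE_L3_IPV6             == 0x40 (bit 6 set) */
	/* RTE_PTYPE_L3_IPV6_EXT         == 0xc0 (bit 6 set) */
	/* RTE_PTYPE_L3_IPV6_EXT_UNKNOWN == 0xe0 (bit 6 set) */
	#define RTE_ETH_IS_IPV6_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV6)

No IPv4 value (0x10, 0x30, 0x90) has bit 6 set, so the test cannot misfire
on IPv4 packets.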
Helin Zhang
2015-01-29 03:16:03 UTC
Permalink
To unify packet types among all PMDs, bit masks and relevant macros
of packet type for ol_flags are replaced by unified packet type and
relevant macros.

Signed-off-by: Helin Zhang <***@intel.com>
---
examples/l3fwd-power/main.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index f6b55b9..964e5b9 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -638,7 +638,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,

eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);

- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
/* Handle IPv4 headers.*/
ipv4_hdr =
(struct ipv4_hdr *)(rte_pktmbuf_mtod(m, unsigned char*)
@@ -673,8 +673,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,
ether_addr_copy(&ports_eth_addr[dst_port], &eth_hdr->s_addr);

send_single_packet(m, dst_port);
- }
- else {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
/* Handle IPv6 headers.*/
#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
struct ipv6_hdr *ipv6_hdr;
--
1.8.1.4
Helin Zhang
2015-01-29 03:16:05 UTC
Permalink
To unify packet types among all PMDs, bit masks and relevant macros
of packet type for ol_flags are replaced by unified packet type and
relevant macros.

Signed-off-by: Helin Zhang <***@intel.com>
---
lib/librte_mbuf/rte_mbuf.c | 6 ------
lib/librte_mbuf/rte_mbuf.h | 10 ++--------
2 files changed, 2 insertions(+), 14 deletions(-)

diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 1b14e02..8050ccf 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -215,14 +215,8 @@ const char *rte_get_rx_ol_flag_name(uint64_t mask)
/* case PKT_RX_HBUF_OVERFLOW: return "PKT_RX_HBUF_OVERFLOW"; */
/* case PKT_RX_RECIP_ERR: return "PKT_RX_RECIP_ERR"; */
/* case PKT_RX_MAC_ERR: return "PKT_RX_MAC_ERR"; */
- case PKT_RX_IPV4_HDR: return "PKT_RX_IPV4_HDR";
- case PKT_RX_IPV4_HDR_EXT: return "PKT_RX_IPV4_HDR_EXT";
- case PKT_RX_IPV6_HDR: return "PKT_RX_IPV6_HDR";
- case PKT_RX_IPV6_HDR_EXT: return "PKT_RX_IPV6_HDR_EXT";
case PKT_RX_IEEE1588_PTP: return "PKT_RX_IEEE1588_PTP";
case PKT_RX_IEEE1588_TMST: return "PKT_RX_IEEE1588_TMST";
- case PKT_RX_TUNNEL_IPV4_HDR: return "PKT_RX_TUNNEL_IPV4_HDR";
- case PKT_RX_TUNNEL_IPV6_HDR: return "PKT_RX_TUNNEL_IPV6_HDR";
default: return NULL;
}
}
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 94ae344..5df0d61 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -90,16 +90,10 @@ extern "C" {
#define PKT_RX_HBUF_OVERFLOW (0ULL << 0) /**< Header buffer overflow. */
#define PKT_RX_RECIP_ERR (0ULL << 0) /**< Hardware processing error. */
#define PKT_RX_MAC_ERR (0ULL << 0) /**< MAC error. */
-#define PKT_RX_IPV4_HDR (1ULL << 5) /**< RX packet with IPv4 header. */
-#define PKT_RX_IPV4_HDR_EXT (1ULL << 6) /**< RX packet with extended IPv4 header. */
-#define PKT_RX_IPV6_HDR (1ULL << 7) /**< RX packet with IPv6 header. */
-#define PKT_RX_IPV6_HDR_EXT (1ULL << 8) /**< RX packet with extended IPv6 header. */
#define PKT_RX_IEEE1588_PTP (1ULL << 9) /**< RX IEEE1588 L2 Ethernet PT Packet. */
#define PKT_RX_IEEE1588_TMST (1ULL << 10) /**< RX IEEE1588 L2/L4 timestamped packet.*/
-#define PKT_RX_TUNNEL_IPV4_HDR (1ULL << 11) /**< RX tunnel packet with IPv4 header.*/
-#define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet with IPv6 header. */
-#define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR match. */
-#define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
+#define PKT_RX_FDIR_ID (1ULL << 11) /**< FD id reported if FDIR match. */
+#define PKT_RX_FDIR_FLX (1ULL << 12) /**< Flexible bytes reported if FDIR match. */
/* add new RX flags here */

/* add new TX flags here */
--
1.8.1.4
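A quick way to see which RX flags survive the removal (illustration only,
using the rte_get_rx_ol_flag_name() function touched by this patch):

	/* sketch: walk all 64 bit positions and print the named ones */
	int i;
	for (i = 0; i < 64; i++) {
		const char *name = rte_get_rx_ol_flag_name(1ULL << i);
		if (name != NULL)
			printf("bit %2d: %s\n", i, name);
	}

With the patch applied, the IPV4/IPV6/TUNNEL names disappear and
FDIR_ID/FDIR_FLX move down to bits 11/12, leaving bits 5-8 unused - which is
exactly what the review below picks up on.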
Olivier MATZ
2015-01-30 13:37:15 UTC
Permalink
Hi Helin,
Post by Helin Zhang
To unify packet types among all PMDs, bit masks and relevant macros
of packet type for ol_flags are replaced by unified packet type and
relevant macros.
---
lib/librte_mbuf/rte_mbuf.c | 6 ------
lib/librte_mbuf/rte_mbuf.h | 10 ++--------
2 files changed, 2 insertions(+), 14 deletions(-)
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 1b14e02..8050ccf 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -215,14 +215,8 @@ const char *rte_get_rx_ol_flag_name(uint64_t mask)
/* case PKT_RX_HBUF_OVERFLOW: return "PKT_RX_HBUF_OVERFLOW"; */
/* case PKT_RX_RECIP_ERR: return "PKT_RX_RECIP_ERR"; */
/* case PKT_RX_MAC_ERR: return "PKT_RX_MAC_ERR"; */
- case PKT_RX_IPV4_HDR: return "PKT_RX_IPV4_HDR";
- case PKT_RX_IPV4_HDR_EXT: return "PKT_RX_IPV4_HDR_EXT";
- case PKT_RX_IPV6_HDR: return "PKT_RX_IPV6_HDR";
- case PKT_RX_IPV6_HDR_EXT: return "PKT_RX_IPV6_HDR_EXT";
case PKT_RX_IEEE1588_PTP: return "PKT_RX_IEEE1588_PTP";
case PKT_RX_IEEE1588_TMST: return "PKT_RX_IEEE1588_TMST";
- case PKT_RX_TUNNEL_IPV4_HDR: return "PKT_RX_TUNNEL_IPV4_HDR";
- case PKT_RX_TUNNEL_IPV6_HDR: return "PKT_RX_TUNNEL_IPV6_HDR";
I see you are not removing IEEE1588. Is there a reason why it is not
handled as a packet_type?
Post by Helin Zhang
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 94ae344..5df0d61 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -90,16 +90,10 @@ extern "C" {
#define PKT_RX_HBUF_OVERFLOW (0ULL << 0) /**< Header buffer overflow. */
#define PKT_RX_RECIP_ERR (0ULL << 0) /**< Hardware processing error. */
#define PKT_RX_MAC_ERR (0ULL << 0) /**< MAC error. */
-#define PKT_RX_IPV4_HDR (1ULL << 5) /**< RX packet with IPv4 header. */
-#define PKT_RX_IPV4_HDR_EXT (1ULL << 6) /**< RX packet with extended IPv4 header. */
-#define PKT_RX_IPV6_HDR (1ULL << 7) /**< RX packet with IPv6 header. */
-#define PKT_RX_IPV6_HDR_EXT (1ULL << 8) /**< RX packet with extended IPv6 header. */
#define PKT_RX_IEEE1588_PTP (1ULL << 9) /**< RX IEEE1588 L2 Ethernet PT Packet. */
#define PKT_RX_IEEE1588_TMST (1ULL << 10) /**< RX IEEE1588 L2/L4 timestamped packet.*/
-#define PKT_RX_TUNNEL_IPV4_HDR (1ULL << 11) /**< RX tunnel packet with IPv4 header.*/
-#define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet with IPv6 header. */
-#define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR match. */
-#define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
+#define PKT_RX_FDIR_ID (1ULL << 11) /**< FD id reported if FDIR match. */
+#define PKT_RX_FDIR_FLX (1ULL << 12) /**< Flexible bytes reported if FDIR match. */
It looks like the bit numbers are not contiguous anymore (there is a hole
between 5 and 8).

Regards,
Olivier
Zhang, Helin
2015-02-02 01:53:20 UTC
Permalink
Hi Olivier
-----Original Message-----
Sent: Friday, January 30, 2015 9:37 PM
Subject: Re: [dpdk-dev] [PATCH 17/17] mbuf: remove old packet type bit masks
for ol_flags
Hi Helin,
To unify packet types among all PMDs, bit masks and relevant macros of
packet type for ol_flags are replaced by unified packet type and
relevant macros.
---
lib/librte_mbuf/rte_mbuf.c | 6 ------ lib/librte_mbuf/rte_mbuf.h |
10 ++--------
2 files changed, 2 insertions(+), 14 deletions(-)
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 1b14e02..8050ccf 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -215,14 +215,8 @@ const char *rte_get_rx_ol_flag_name(uint64_t
mask)
/* case PKT_RX_HBUF_OVERFLOW: return "PKT_RX_HBUF_OVERFLOW";
*/
/* case PKT_RX_RECIP_ERR: return "PKT_RX_RECIP_ERR"; */
/* case PKT_RX_MAC_ERR: return "PKT_RX_MAC_ERR"; */
- case PKT_RX_IPV4_HDR: return "PKT_RX_IPV4_HDR";
- case PKT_RX_IPV4_HDR_EXT: return "PKT_RX_IPV4_HDR_EXT";
- case PKT_RX_IPV6_HDR: return "PKT_RX_IPV6_HDR";
- case PKT_RX_IPV6_HDR_EXT: return "PKT_RX_IPV6_HDR_EXT";
case PKT_RX_IEEE1588_PTP: return "PKT_RX_IEEE1588_PTP";
case PKT_RX_IEEE1588_TMST: return "PKT_RX_IEEE1588_TMST";
- case PKT_RX_TUNNEL_IPV4_HDR: return "PKT_RX_TUNNEL_IPV4_HDR";
- case PKT_RX_TUNNEL_IPV6_HDR: return "PKT_RX_TUNNEL_IPV6_HDR";
I see you are not removing IEEE1588. Is there a reason why it is not handled as
a packet_type?
IEEE 1588 is not part of the information reported by hardware in the packet type.
Yes, your idea on this is worth taking into account.
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 94ae344..5df0d61 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -90,16 +90,10 @@ extern "C" {
#define PKT_RX_HBUF_OVERFLOW (0ULL << 0) /**< Header buffer
overflow. */
#define PKT_RX_RECIP_ERR (0ULL << 0) /**< Hardware processing
error. */
#define PKT_RX_MAC_ERR (0ULL << 0) /**< MAC error. */
-#define PKT_RX_IPV4_HDR (1ULL << 5) /**< RX packet with IPv4
header. */
-#define PKT_RX_IPV4_HDR_EXT (1ULL << 6) /**< RX packet with
extended IPv4 header. */
-#define PKT_RX_IPV6_HDR (1ULL << 7) /**< RX packet with IPv6
header. */
-#define PKT_RX_IPV6_HDR_EXT (1ULL << 8) /**< RX packet with
extended IPv6 header. */ #define PKT_RX_IEEE1588_PTP (1ULL << 9)
/**< RX IEEE1588 L2 Ethernet PT Packet. */ #define
PKT_RX_IEEE1588_TMST (1ULL << 10) /**< RX IEEE1588 L2/L4 timestamped
packet.*/ -#define PKT_RX_TUNNEL_IPV4_HDR (1ULL << 11) /**< RX tunnel
packet with IPv4 header.*/ -#define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12)
/**< RX tunnel packet with IPv6 header. */
-#define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR
match. */
-#define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if
FDIR match. */
+#define PKT_RX_FDIR_ID (1ULL << 11) /**< FD id reported if FDIR
match. */
+#define PKT_RX_FDIR_FLX (1ULL << 12) /**< Flexible bytes reported if
FDIR match. */
It looks like the bit numbers are not contiguous anymore (there is a hole between
5 and 8).
Initially I didn't want to move the following values up, as I am not sure whether
it may affect other features.
I'd prefer to keep that hole as reserved. What are your opinions on this?
Thanks for the good comments!
Regards,
Olivier
Olivier MATZ
2015-02-02 11:22:56 UTC
Permalink
Hi Helin,
Post by Zhang, Helin
*/
Post by Helin Zhang
/* case PKT_RX_RECIP_ERR: return "PKT_RX_RECIP_ERR"; */
/* case PKT_RX_MAC_ERR: return "PKT_RX_MAC_ERR"; */
- case PKT_RX_IPV4_HDR: return "PKT_RX_IPV4_HDR";
- case PKT_RX_IPV4_HDR_EXT: return "PKT_RX_IPV4_HDR_EXT";
- case PKT_RX_IPV6_HDR: return "PKT_RX_IPV6_HDR";
- case PKT_RX_IPV6_HDR_EXT: return "PKT_RX_IPV6_HDR_EXT";
case PKT_RX_IEEE1588_PTP: return "PKT_RX_IEEE1588_PTP";
case PKT_RX_IEEE1588_TMST: return "PKT_RX_IEEE1588_TMST";
- case PKT_RX_TUNNEL_IPV4_HDR: return "PKT_RX_TUNNEL_IPV4_HDR";
- case PKT_RX_TUNNEL_IPV6_HDR: return "PKT_RX_TUNNEL_IPV6_HDR";
I see you are not removing IEEE1588. Is there a reason why it is not handled as
a packet_type?
IEEE 1588 is not part of the information reported by hardware in the packet type.
Yes, your idea on this is worth taking into account.
This would indeed be more coherent. In this case, I think it would
require adding an l2_type to the packet_type info.
Post by Zhang, Helin
Post by Helin Zhang
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 94ae344..5df0d61 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -90,16 +90,10 @@ extern "C" {
#define PKT_RX_HBUF_OVERFLOW (0ULL << 0) /**< Header buffer
overflow. */
Post by Helin Zhang
#define PKT_RX_RECIP_ERR (0ULL << 0) /**< Hardware processing
error. */
Post by Helin Zhang
#define PKT_RX_MAC_ERR (0ULL << 0) /**< MAC error. */
-#define PKT_RX_IPV4_HDR (1ULL << 5) /**< RX packet with IPv4
header. */
Post by Helin Zhang
-#define PKT_RX_IPV4_HDR_EXT (1ULL << 6) /**< RX packet with
extended IPv4 header. */
Post by Helin Zhang
-#define PKT_RX_IPV6_HDR (1ULL << 7) /**< RX packet with IPv6
header. */
Post by Helin Zhang
-#define PKT_RX_IPV6_HDR_EXT (1ULL << 8) /**< RX packet with
extended IPv6 header. */ #define PKT_RX_IEEE1588_PTP (1ULL << 9)
/**< RX IEEE1588 L2 Ethernet PT Packet. */ #define
PKT_RX_IEEE1588_TMST (1ULL << 10) /**< RX IEEE1588 L2/L4 timestamped
packet.*/ -#define PKT_RX_TUNNEL_IPV4_HDR (1ULL << 11) /**< RX tunnel
packet with IPv4 header.*/ -#define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12)
/**< RX tunnel packet with IPv6 header. */
Post by Helin Zhang
-#define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR
match. */
Post by Helin Zhang
-#define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if
FDIR match. */
Post by Helin Zhang
+#define PKT_RX_FDIR_ID (1ULL << 11) /**< FD id reported if FDIR
match. */
Post by Helin Zhang
+#define PKT_RX_FDIR_FLX (1ULL << 12) /**< Flexible bytes reported if
FDIR match. */
It looks like the bit numbers are not contiguous anymore (there is a hole between
5 and 8).
Initially I didn't want to move the following values up, as I am not sure whether
it may affect other features.
I'd prefer to keep that hole as reserved. What are your opinions on this?
Thanks for the good comments!
I think changing the flag bit values to keep the ordering contiguous is not a problem.

Regards,
Olivier
Helin Zhang
2015-01-29 03:16:04 UTC
Permalink
To unify packet types among all PMDs, bit masks and relevant macros
of packet type for ol_flags are replaced by unified packet type and
relevant macros.

Signed-off-by: Helin Zhang <***@intel.com>
---
examples/l3fwd/main.c | 64 +++++++++++++++++++++++++++++----------------------
1 file changed, 37 insertions(+), 27 deletions(-)

diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 6f7d7d4..d02a19c 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -958,7 +958,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon

eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);

- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
/* Handle IPv4 headers.*/
ipv4_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(m, unsigned char *) +
sizeof(struct ether_hdr));
@@ -993,7 +993,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon

send_single_packet(m, dst_port);

- } else {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
/* Handle IPv6 headers.*/
struct ipv6_hdr *ipv6_hdr;

@@ -1039,11 +1039,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
* to BAD_PORT value.
*/
static inline __attribute__((always_inline)) void
-rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t flags)
+rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint16_t ptype)
{
uint8_t ihl;

- if ((flags & PKT_RX_IPV4_HDR) != 0) {
+ if (RTE_ETH_IS_IPV4_HDR(ptype)) {

ihl = ipv4_hdr->version_ihl - IPV4_MIN_VER_IHL;

@@ -1074,11 +1074,11 @@ get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
struct ipv6_hdr *ipv6_hdr;
struct ether_hdr *eth_hdr;

- if (pkt->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
if (rte_lpm_lookup(qconf->ipv4_lookup_struct, dst_ipv4,
&next_hop) != 0)
next_hop = portid;
- } else if (pkt->ol_flags & PKT_RX_IPV6_HDR) {
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
eth_hdr = rte_pktmbuf_mtod(pkt, struct ether_hdr *);
ipv6_hdr = (struct ipv6_hdr *)(eth_hdr + 1);
if (rte_lpm6_lookup(qconf->ipv6_lookup_struct,
@@ -1112,7 +1112,7 @@ process_packet(struct lcore_conf *qconf, struct rte_mbuf *pkt,
ve = val_eth[dp];

dst_port[0] = dp;
- rfc1812_process(ipv4_hdr, dst_port, pkt->ol_flags);
+ rfc1812_process(ipv4_hdr, dst_port, pkt->packet_type);

te = _mm_blend_epi16(te, ve, MASK_ETH);
_mm_store_si128((__m128i *)eth_hdr, te);
@@ -1122,7 +1122,7 @@ process_packet(struct lcore_conf *qconf, struct rte_mbuf *pkt,
* Read ol_flags and destination IPV4 addresses from 4 mbufs.
*/
static inline void
-processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
+processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, int *ipv4_flag)
{
struct ipv4_hdr *ipv4_hdr;
struct ether_hdr *eth_hdr;
@@ -1131,22 +1131,22 @@ processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
eth_hdr = rte_pktmbuf_mtod(pkt[0], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x0 = ipv4_hdr->dst_addr;
- flag[0] = pkt[0]->ol_flags & PKT_RX_IPV4_HDR;

eth_hdr = rte_pktmbuf_mtod(pkt[1], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x1 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[1]->ol_flags;

eth_hdr = rte_pktmbuf_mtod(pkt[2], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x2 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[2]->ol_flags;

eth_hdr = rte_pktmbuf_mtod(pkt[3], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x3 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[3]->ol_flags;
+ *ipv4_flag = RTE_ETH_IS_IPV4_HDR(pkt[0]->packet_type) &&
+ RTE_ETH_IS_IPV4_HDR(pkt[1]->packet_type) &&
+ RTE_ETH_IS_IPV4_HDR(pkt[2]->packet_type) &&
+ RTE_ETH_IS_IPV4_HDR(pkt[3]->packet_type);

dip[0] = _mm_set_epi32(x3, x2, x1, x0);
}
@@ -1156,7 +1156,7 @@ processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
* If lookup fails, use incoming port (portid) as destination port.
*/
static inline void
-processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
+processx4_step2(const struct lcore_conf *qconf, __m128i dip, int ipv4_flag,
uint8_t portid, struct rte_mbuf *pkt[FWDSTEP], uint16_t dprt[FWDSTEP])
{
rte_xmm_t dst;
@@ -1167,7 +1167,7 @@ processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
dip = _mm_shuffle_epi8(dip, bswap_mask);

/* if all 4 packets are IPV4. */
- if (likely(flag != 0)) {
+ if (likely(ipv4_flag)) {
rte_lpm_lookupx4(qconf->ipv4_lookup_struct, dip, dprt, portid);
} else {
dst.x = dip;
@@ -1218,13 +1218,13 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
_mm_store_si128(p[3], te[3]);

rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[0] + 1),
- &dst_port[0], pkt[0]->ol_flags);
+ &dst_port[0], pkt[0]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[1] + 1),
- &dst_port[1], pkt[1]->ol_flags);
+ &dst_port[1], pkt[1]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[2] + 1),
- &dst_port[2], pkt[2]->ol_flags);
+ &dst_port[2], pkt[2]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[3] + 1),
- &dst_port[3], pkt[3]->ol_flags);
+ &dst_port[3], pkt[3]->packet_type);
}

/*
@@ -1411,7 +1411,7 @@ main_loop(__attribute__((unused)) void *dummy)
uint16_t *lp;
uint16_t dst_port[MAX_PKT_BURST];
__m128i dip[MAX_PKT_BURST / FWDSTEP];
- uint32_t flag[MAX_PKT_BURST / FWDSTEP];
+ int ipv4_flag[MAX_PKT_BURST / FWDSTEP];
uint16_t pnum[MAX_PKT_BURST + 1];
#endif

@@ -1481,14 +1481,24 @@ main_loop(__attribute__((unused)) void *dummy)
*/
int32_t n = RTE_ALIGN_FLOOR(nb_rx, 4);
for (j = 0; j < n ; j+=4) {
- uint32_t ol_flag = pkts_burst[j]->ol_flags
- & pkts_burst[j+1]->ol_flags
- & pkts_burst[j+2]->ol_flags
- & pkts_burst[j+3]->ol_flags;
- if (ol_flag & PKT_RX_IPV4_HDR ) {
+ if (RTE_ETH_IS_IPV4_HDR(
+ pkts_burst[j]->packet_type) &&
+ RTE_ETH_IS_IPV4_HDR(
+ pkts_burst[j+1]->packet_type) &&
+ RTE_ETH_IS_IPV4_HDR(
+ pkts_burst[j+2]->packet_type) &&
+ RTE_ETH_IS_IPV4_HDR(
+ pkts_burst[j+3]->packet_type)) {
simple_ipv4_fwd_4pkts(&pkts_burst[j],
portid, qconf);
- } else if (ol_flag & PKT_RX_IPV6_HDR) {
+ } else if (RTE_ETH_IS_IPV6_HDR(
+ pkts_burst[j]->packet_type) &&
+ RTE_ETH_IS_IPV6_HDR(
+ pkts_burst[j+1]->packet_type) &&
+ RTE_ETH_IS_IPV6_HDR(
+ pkts_burst[j+2]->packet_type) &&
+ RTE_ETH_IS_IPV6_HDR(
+ pkts_burst[j+3]->packet_type)) {
simple_ipv6_fwd_4pkts(&pkts_burst[j],
portid, qconf);
} else {
@@ -1513,13 +1523,13 @@ main_loop(__attribute__((unused)) void *dummy)
for (j = 0; j != k; j += FWDSTEP) {
processx4_step1(&pkts_burst[j],
&dip[j / FWDSTEP],
- &flag[j / FWDSTEP]);
+ &ipv4_flag[j / FWDSTEP]);
}

k = RTE_ALIGN_FLOOR(nb_rx, FWDSTEP);
for (j = 0; j != k; j += FWDSTEP) {
processx4_step2(qconf, dip[j / FWDSTEP],
- flag[j / FWDSTEP], portid,
+ ipv4_flag[j / FWDSTEP], portid,
&pkts_burst[j], &dst_port[j]);
}
--
1.8.1.4
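The FWDSTEP-sized fast path above can be summarized as (illustration, not
part of the patch): a group of 4 packets is bulk-processed only when all four
mbufs carry the same L3 class, now tested per mbuf on packet_type instead of
AND-ing the four ol_flags words:

	/* sketch: predicate over a group of FWDSTEP (4) packets */
	static inline int
	burst_is_ipv4(struct rte_mbuf *pkt[4])
	{
		return RTE_ETH_IS_IPV4_HDR(pkt[0]->packet_type) &&
		       RTE_ETH_IS_IPV4_HDR(pkt[1]->packet_type) &&
		       RTE_ETH_IS_IPV4_HDR(pkt[2]->packet_type) &&
		       RTE_ETH_IS_IPV4_HDR(pkt[3]->packet_type);
	}

with an analogous burst_is_ipv6(); mixed bursts fall back to the
packet-at-a-time l3fwd_simple_forward() path.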
Olivier MATZ
2015-01-30 13:31:09 UTC
Permalink
Hi Helin,
Post by Helin Zhang
Currently only 6 bits which are stored in ol_flags are used to indicate
the packet types. This is not enough, as some NIC hardware can recognize
quite a lot of packet types, e.g. i40e hardware can recognize more than 150
packet types. Hiding those packet types hides hardware offload capabilities
which could be quite useful for improving performance and for end users.
So a unified packet type is needed to support all possible PMDs. Recently
a 16-bit packet_type field has been added in the mbuf header which can be
used for this purpose. In addition, all packet types stored in the ol_flags
field should be deleted entirely, and 6 bits of ol_flags can be saved as a
benefit.
Initially, the 16 bits of packet_type can be divided into several sub-fields
to indicate different packet type information of a packet. The initial design
is to divide those bits into 4 fields for L3 types, tunnel types, inner L3
types and L4 types. All PMDs should translate the offloaded packet types
into these 4 fields of information, for user applications.
You haven't answered the question I asked in your RFC patch [1].
Post by Helin Zhang
Another question I've asked several times[1][2] : what does having
RTE_PTYPE_TUNNEL_IP mean? What fields are checked by the hardware (or
the driver) and what fields should be checked by the application?
Are you sure that all the drivers (ixgbe, i40e, vmxnet3, enic) check the same
fields? (ethertype, ip version, ip len correct, ip checksum correct, flags, ...)
RTE_PTYPE_TUNNEL_IP means the hardware recognizes the received packet as an
IP-in-IP packet.
All the fields are filled in by the PMD according to what the hardware
recognized. The application can just use them, which can save some CPU cycles
otherwise spent recognizing the packet type in software.
Each driver is responsible for filling in correct values according to the
packet types recognized by its hardware. Different PMDs may fill in different
values based on their different capabilities.
Sorry, that does not answer my question.
Let's take a simple example. Imagine a hardware-1 that is able to
recognize an IP packet by checking the ethertype and that the IP
version is set to 4.
Another hardware-2 recognizes an IP packet by checking the ethertype,
the IP version and that the IP length is correct compared to m_len(m).
For the same packet, both pieces of hardware will return RTE_PTYPE_L3_IPV4,
but they don't do the same checks on the packet. As I want my application
to behave exactly the same whatever the hardware, I need to know what
checks are done in hardware, so I can decide what checks must be
done in my application.
Example of definition: RTE_PTYPE_L3_IPV4 means that ethertype is
0x0800 and IP.version is 4.
It means that I can skip these 2 tests in my application if I have
this packet_type, but all other checks must be done in software
(ip length, flags, checksum, ...)
For each packet type, we need a definition like above, and we must
check that all drivers setting a packet type behave like described.
I'm not opposed to have a packet_type field in rx mbuf, but I really
think the question above is an important question to make this feature
useful to the applications.


Regards,
Olivier

[1] http://dpdk.org/ml/archives/dev/2015-January/011273.html
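To make the stake concrete, here is what application code could look like
under the definition proposed above (a sketch only; the exact guarantee is
what is being debated):

	/* if RTE_PTYPE_L3_IPV4 guarantees 'ethertype is 0x0800 and
	 * IP.version is 4', those two tests become skippable in SW */
	static int
	l3_is_ipv4(struct rte_mbuf *m)
	{
		struct ether_hdr *eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
		struct ipv4_hdr *ip = (struct ipv4_hdr *)(eth + 1);

		if (RTE_ETH_IS_IPV4_HDR(m->packet_type))
			return 1;	/* both tests guaranteed by HW */
		return eth->ether_type == rte_cpu_to_be_16(ETHER_TYPE_IPv4) &&
		       (ip->version_ihl >> 4) == 4;
	}

	/* everything outside the definition (header/total length, flags,
	 * checksum unless PKT_RX_IP_CKSUM_BAD, ...) stays in SW either way */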
Zhang, Helin
2015-02-02 02:44:25 UTC
Permalink
Hi Olivier
-----Original Message-----
Sent: Friday, January 30, 2015 9:31 PM
Subject: Re: [dpdk-dev] [PATCH 00/17] unified packet type
Hi Helin,
Post by Helin Zhang
Currently only 6 bits which are stored in ol_flags are used to
indicate the packet types. This is not enough, as some NIC hardware
can recognize quite a lot of packet types, e.g i40e hardware can
recognize more than 150 packet types. Hiding those packet types hides
hardware offload capabilities which could be quite useful for improving
performance and for end users.
Post by Helin Zhang
So an unified packet types are needed to support all possible PMDs.
Recently a 16 bits packet_type field has been added in mbuf header
which can be used for this purpose. In addition, all packet types
stored in ol_flag field should be deleted at all, and 6 bits of ol_flags can be
save as the benifit.
Post by Helin Zhang
Initially, 16 bits of packet_type can be divided into several sub
fields to indicate different packet type information of a packet. The
initial design is to divide those bits into 4 fields for L3 types,
tunnel types, inner L3 types and L4 types. All PMDs should translate
the offloaded packet types into this 4 fields of information, for user
applications.
You haven't answered to my question I asked in your RFC patch [1].
Post by Helin Zhang
Another question I've asked several times[1][2] : what does having
RTE_PTYPE_TUNNEL_IP mean? What fields are checked by the hardware
(or the driver) and what fields should be checked by the application?
Are you sure that all the drivers (ixgbe, i40e, vmxnet3, enic)
check the same fields? (ethertype, ip version, ip len correct, ip
checksum correct, flags, ...)
RTE_PTYPE_TUNNEL_IP means hardware recognizes the received packet
as
Post by Helin Zhang
an IP-in-IP packet.
All the fields are filled by PMD which is recognized by hardware.
The application can just use it which can save some cpu cycles to
recognize the packet type by software.
Drivers is responsible for filling with correct values according to
the packet types recognized by its hardware. Different PMDs may fill
with different values based on different capabilities.
Sorry, that does not answer my question.
Let's take a simple example. Imagine a hardware-1 that is able to
recognize an IP packet by checking the ethertype and that the IP
version is set to 4.
Another hardware-2 recognizes an IP packet by checking the
ethertype, the IP version and that the IP length is correct compared to
m_len(m).
For the same packet, both pieces of hardware will return
RTE_PTYPE_L3_IPV4, but they don't do the same checks on the
packet. As I want my application to behave exactly the same whatever
the hardware, I need to know what checks are done in hardware, so
I can decide what checks must be done in my application.
Example of definition: RTE_PTYPE_L3_IPV4 means that ethertype is
0x0800 and IP.version is 4.
It means that I can skip these 2 tests in my application if I have
this packet_type, but all other checks must be done in software (ip
length, flags, checksum, ...)
For each packet type, we need a definition like above, and we must
check that all drivers setting a packet type behave like described.
Hmm, I think packet_type may need to be renamed to something else, like offload_packet_type.
It is just for hardware-reported packet type information; it is not meant to
carry all the information about a packet.
As different hardware may have different capabilities, different hardware cannot
be expected to report the same information in the mbuf for the same packet.
Given your question, I think hardware capability flags may be needed. Applications
could query the packet type capabilities on each port, so they know what packet
type information can be reported by the corresponding hardware.
What do you think? And are there any better ideas from you?

Thanks you very much!

Regards,
Helin
I'm not opposed to have a packet_type field in rx mbuf, but I really think the
question above is an important question to make this feature useful to the
applications.
Regards,
Olivier
[1] http://dpdk.org/ml/archives/dev/2015-January/011273.html
Olivier MATZ
2015-02-02 11:37:31 UTC
Permalink
Hi Helin,
Post by Zhang, Helin
Let's take a simple example. Imagine a hardware-1 that is able to
recognize an IP packet by checking the ethertype and that the IP
version is set to 4.
Another hardware-2 recognize an IP packet by checking the ethertype,
the IP version and that the IP length is correct compared to m_len(m).
For the same packet, both hardwares will return RTE_PTYPE_L3_IPV4,
but they don't do the same checks on the packet. As I want my
application behave exactly the same whatever the hardware, I need to
know what checks are done in hardware, so I can decide what checks
must be done in my application.
Example of definition: RTE_PTYPE_L3_IPV4 means that ethertype is
0x0800 and IP.version is 4.
It means that I can skip these 2 tests in my application if I have
this packet_type, but all other checks must be done in software (ip
length, flags, checksum, ...)
For each packet type, we need a definition like above, and we must
check that all drivers setting a packet type behave like described.
Hmm, I think packet_type may need to be renamed to something else, like offload_packet_type.
It is just for hardware-reported packet type information; it is not meant to
carry all the information about a packet.
As different hardware may have different capabilities, different hardware cannot
be expected to report the same information in the mbuf for the same packet.
Given your question, I think hardware capability flags may be needed. Applications
could query the packet type capabilities on each port, so they know what packet
type information can be reported by the corresponding hardware.
What do you think? And are there any better ideas from you?
I'm not sure renaming the field would change anything here.

The high-level question is: how can software take advantage of this
information given by the hardware? If the same packet_type does not
have the same meaning depending on the hardware, it's not worth having
this info.

I think the API should describe for each packet type what can be
expected by the application. Here is an example. When a driver sets the
RTE_PTYPE_L3_IPV4 type, it means that:

- the layer 3 is identified as IP by underlying layer (ex: ethertype=IP
if layer 2 is ethernet)
- the IP version field is 4
- there are no IP options (i.e. the header size is 20 bytes)
- the checksum field has been verified by hw, and if wrong, the
flag PKT_RX_IP_CKSUM_BAD is set

If the hardware is not able to give all this information, there are
2 solutions:
- do the remaining tests in the driver
- or set l3 pkt_type to unknown

All other conditions that are not described in the API should be checked
by the application if it needs the information (ex: check that IP dest
address is legal, that ip->len is >= 20, ...).


If we are able to describe this for all packet types, it would really
help application to take advantage of these packet types.

Regards,
Olivier
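Written out as the kind of API documentation being asked for (a sketch of
the proposed contract, not an agreed definition):

	/**
	 * RTE_PTYPE_L3_IPV4
	 * When a PMD sets this type it guarantees that:
	 *  - the L3 protocol is identified as IP by the layer below
	 *    (e.g. ethertype is 0x0800 when layer 2 is Ethernet);
	 *  - the IP version field is 4;
	 *  - there are no IP options (the header size is 20 bytes);
	 *  - the IP checksum has been verified by HW, and if wrong the
	 *    PKT_RX_IP_CKSUM_BAD flag is set.
	 * Everything else (total length vs. mbuf length, flags, address
	 * sanity, ...) remains the application's responsibility.
	 */

A PMD whose hardware cannot guarantee all of the above would either complete
the missing checks in software or report the L3 type as unknown, as
suggested above.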
Ananyev, Konstantin
2015-02-02 17:20:12 UTC
Permalink
Hi Olivier,
-----Original Message-----
Sent: Monday, February 02, 2015 11:38 AM
Subject: Re: [dpdk-dev] [PATCH 00/17] unified packet type
Hi Helin,
Post by Zhang, Helin
Let's take a simple example. Imagine a hardware-1 that is able to
recognize an IP packet by checking the ethertype and that the IP
version is set to 4.
Another hardware-2 recognize an IP packet by checking the ethertype,
the IP version and that the IP length is correct compared to m_len(m).
For the same packet, both hardwares will return RTE_PTYPE_L3_IPV4,
but they don't do the same checks on the packet. As I want my
application behave exactly the same whatever the hardware, I need to
know what checks are done in hardware, so I can decide what checks
must be done in my application.
Example of definition: RTE_PTYPE_L3_IPV4 means that ethertype is
0x0800 and IP.version is 4.
It means that I can skip these 2 tests in my application if I have
this packet_type, but all other checks must be done in software (ip
length, flags, checksum, ...)
For each packet type, we need a definition like above, and we must
check that all drivers setting a packet type behave like described.
Hmm, I think the packet_type may need to be renamed to else, like offload_packet_type.
It is just for hardware reported packet type information. It is not for all
information of a packet.
As different hardware may have different capability, it cannot report the same
in mbuf among different hardware for the same packet.
With your question, I think the hardware capability flags may be needed. Applications
can query the packet type capabilities on each port, then it knows what type of packet
type information can be reported by the corresponding hardware.
What do you think? And are they any better ideas from you?
I'm not sure renaming the field would change anything here.
The high-level question is: how can software take advantage of this
information given by the hardware? If the same packet_type does not
have the same meaning depending on the hardware, it's not worth having
this info.
I think the API should describe for each packet type what can be
expected by the application. Here is an example. When a driver sets the
- the layer 3 is identified as IP by underlying layer (ex: ethertype=IP
if layer 2 is ethernet)
- the IP version field is 4
- there are no IP options (i.e. the header size is 20 bytes)
Yes, I suppose that's what supported HW can guarantee when RTE_PTYPE_L3_IPV4 is set.
- the checksum field has been verified by hw, and if wrong, the
flag PKT_RX_IP_CKSUM_BAD is set
Hmm, why is that?
As I remember, on many devices it is configurable by SW whether HW should do RX checksum offload or not.
From the DPDK point of view there is the hw_ip_checksum field in rte_eth_rxmode.
So it is a possible situation that at RX the HW does packet type determination but doesn't do the L3/L4
checksum calculation.

I suppose for checksum(s) there should be separate flags (in ol_flags) with 3 possible values:
CKSUM_UNKNOWN, CKSUM_BAD, CKSUM_OK.

Konstantin
If the hardware is not able to give all this information, there are
- do the remaining tests in the driver
- or set l3 pkt_type to unknown
All other conditions that are not described in the API should be checked
by the applition if it needs the information (ex: check that IP dest
address is legal, that ip->len is >= 20, ...).
If we are able to describe this for all packet types, it would really
help application to take advantage of these packet types.
Regards,
Olivier
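Konstantin's tri-state idea, sketched with hypothetical names and bit
positions (at this point DPDK only has PKT_RX_IP_CKSUM_BAD, so 'flag not
set' conflates 'good' and 'not checked'):

	/* sketch: two ol_flags bits encode three checksum states
	 * (bit positions hypothetical, e.g. reusing bits freed above) */
	#define PKT_RX_IP_CKSUM_BAD_ (1ULL << 5)
	#define PKT_RX_IP_CKSUM_OK_  (1ULL << 6)
	/* neither bit set => HW gave no verdict (CKSUM_UNKNOWN) */

	static void
	handle_ip_cksum(struct rte_mbuf *mb)
	{
		if (mb->ol_flags & PKT_RX_IP_CKSUM_OK_)
			return;			/* trust the HW result */
		if (mb->ol_flags & PKT_RX_IP_CKSUM_BAD_)
			rte_pktmbuf_free(mb);	/* checked and wrong */
		else
			sw_verify_ip_cksum(mb);	/* hypothetical SW fallback */
	}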
Zhang, Helin
2015-02-03 03:25:15 UTC
Permalink
-----Original Message-----
From: Ananyev, Konstantin
Sent: Tuesday, February 3, 2015 1:20 AM
Subject: RE: [dpdk-dev] [PATCH 00/17] unified packet type
Hi Olivier,
-----Original Message-----
Sent: Monday, February 02, 2015 11:38 AM
Subject: Re: [dpdk-dev] [PATCH 00/17] unified packet type
Hi Helin,
Post by Zhang, Helin
Let's take a simple example. Imagine a hardware-1 that is able to
recognize an IP packet by checking the ethertype and that the IP
version is set to 4.
Another hardware-2 recognize an IP packet by checking the
ethertype, the IP version and that the IP length is correct compared to
m_len(m).
Post by Zhang, Helin
For the same packet, both hardwares will return
RTE_PTYPE_L3_IPV4, but they don't do the same checks on the
packet. As I want my application behave exactly the same whatever
the hardware, I need to know what checks are done in hardware, so
I can decide what checks must be done in my application.
Example of definition: RTE_PTYPE_L3_IPV4 means that ethertype is
0x0800 and IP.version is 4.
It means that I can skip these 2 tests in my application if I
have this packet_type, but all other checks must be done in
software (ip length, flags, checksum, ...)
For each packet type, we need a definition like above, and we
must check that all drivers setting a packet type behave like described.
Hmm, I think the packet_type may need to be renamed to else, like
offload_packet_type.
Post by Zhang, Helin
It is just for hardware reported packet type information. It is not
for all information of a packet.
As different hardware may have different capability, it cannot
report the same in mbuf among different hardware for the same packet.
With your question, I think the hardware capability flags may be
needed. Applications can query the packet type capabilities on each
port, then it knows what type of packet type information can be reported by
the corresponding hardware.
Post by Zhang, Helin
What do you think? And are they any better ideas from you?
I'm not sure renaming the field would change something here.
The high-level question is: how a software can take advantage of this
information given by the hardware? If the same packet_type does not
have the same meaning depending on the hardware, it's not worth having
this info.
I think the API should describe for each packet type what can be
expected by the application. Here is an example. When a driver sets the
- the layer 3 is identified as IP by underlying layer (ex: ethertype=IP
if layer 2 is ethernet)
- the IP version field is 4
- there is no IP options (i.e the size of header is 20)
Yes, I suppose that's what supported HW can guarantee when
RTE_PTYPE_L3_IPV4 is set.
- the checksum field has been verified by hw, and if wrong, the
flag PKT_RX_IP_CKSUM_BAD is set
Hmm, why is that?
As I remember on many devices it is configurable by SW should HW do RX
checksum offload or not.
From DPDK point of view there is hw_ip_checksum field in rte_eth_rxmode.
So it is a possible situation, when at RX HW does packet type determination,
but doesn't make L3/L4 checksum calculation.
I suppose for checksum(s) it should be a separate flags (in ol_flags) with 3
CKSUM_UNKNOWN, CKSUM_BAD, CKSUM_OK.
Konstantin
I think packet type and checksum are totally different things in DPDK, though
they might have dependencies in hardware.
Checksum good/bad is still indicated in ol_flags. Packet type says nothing
about checksums.

Regards,
Helin
If the hardware is not able to give all this information, there are
- do the remaining tests in the driver
- or set l3 pkt_type to unknown
All other conditions that are not described in the API should be
checked by the applition if it needs the information (ex: check that
IP dest address is legal, that ip->len is >= 20, ...).
If we are able to describe this for all packet types, it would really
help application to take advantage of these packet types.
Regards,
Olivier
Olivier MATZ
2015-02-03 08:55:04 UTC
Permalink
Hi Konstantin,
Post by Ananyev, Konstantin
Post by Olivier MATZ
I think the API should describe for each packet type what can be
expected by the application. Here is an example. When a driver sets the
- the layer 3 is identified as IP by underlying layer (ex: ethertype=IP
if layer 2 is ethernet)
- the IP version field is 4
- there is no IP options (i.e the size of header is 20)
Yes, I suppose that's what supported HW can guarantee when RTE_PTYPE_L3_IPV4 is set.
Post by Olivier MATZ
- the checksum field has been verified by hw, and if wrong, the
flag PKT_RX_IP_CKSUM_BAD is set
Hmm, why is that?
As I remember on many devices it is configurable by SW should HW do RX checksum offload or not.
From DPDK point of view there is hw_ip_checksum field in rte_eth_rxmode.
So it is a possible situation, when at RX HW does packet type determination, but doesn't make L3/L4
checksum calculation.
CKSUM_UNKNOWN, CKSUM_BAD, CKSUM_OK.
Indeed you are right, it's probably better to have specific flags
for checksum.

Regards,
Olivier
Helin Zhang
2015-02-09 06:40:34 UTC
Permalink
Currently only 6 bits which are stored in ol_flags are used to indicate the
packet types. This is not enough, as some NIC hardware can recognize quite
a lot of packet types, e.g. i40e hardware can recognize more than 150 packet
types. Hiding those packet types hides hardware offload capabilities which
could be quite useful for improving performance and for end users. So a
unified packet type is needed to support all possible PMDs. The 16-bit
packet_type in the mbuf structure can be changed to 32 bits and used for this
purpose. In addition, all packet types stored in the ol_flags field should be
deleted entirely, and 6 bits of ol_flags can be saved as a benefit.

Initially, the 32 bits of packet_type can be divided into several sub-fields to
indicate different packet type information of a packet. The initial design
is to divide those bits into fields for L2 types, L3 types, L4 types, tunnel
types, inner L2 types, inner L3 types and inner L4 types. All PMDs should
translate the offloaded packet types into these 7 fields of information,
for user applications.

v2 changes:
* Enlarged the packet_type field from 16 bits to 32 bits.
* Redefined the packet type sub-fields.
* Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
* Used redefined packet types and enlarged packet_type field for all PMDs
and corresponding applications.
* Removed changes in bond and its relevant application, as there is no need
at all according to the recent bond changes.

Helin Zhang (15):
mbuf: add definitions of unified packet types
e1000: support of unified packet type
ixgbe: support of unified packet type
ixgbe: support of unified packet type for vector
i40e: support of unified packet type
enic: support of unified packet type
vmxnet3: support of unified packet type
app/test-pipeline: support of unified packet type
app/test: support of unified packet type
examples/ip_fragmentation: support of unified packet type
examples/ip_reassembly: support of unified packet type
examples/l3fwd-acl: support of unified packet type
examples/l3fwd-power: support of unified packet type
examples/l3fwd: support of unified packet type
mbuf: remove old packet type bit masks

app/test-pipeline/pipeline_hash.c | 7 +-
app/test-pmd/csumonly.c | 6 +-
app/test-pmd/rxonly.c | 9 +-
examples/ip_fragmentation/main.c | 7 +-
examples/ip_reassembly/main.c | 7 +-
examples/l3fwd-acl/main.c | 19 +-
examples/l3fwd-power/main.c | 5 +-
examples/l3fwd/main.c | 64 +-
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 4 +-
lib/librte_mbuf/rte_mbuf.c | 6 -
lib/librte_mbuf/rte_mbuf.h | 127 +++-
lib/librte_pmd_e1000/igb_rxtx.c | 98 ++-
lib/librte_pmd_enic/enic_main.c | 14 +-
lib/librte_pmd_i40e/i40e_rxtx.c | 786 ++++++++++++++-------
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 146 +++-
lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c | 49 +-
lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 4 +-
17 files changed, 914 insertions(+), 444 deletions(-)
--
1.9.3
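As a feel for what the enlarged field buys (illustration; the RTE_PTYPE_*
values come from patch 01 just below): a VXLAN-encapsulated flow can now be
described end to end in a single 32-bit word, something the six
PKT_RX_*_HDR bits could never express:

	/* sketch: outer IPv4/UDP VXLAN tunnel, inner Ethernet/IPv6/TCP */
	uint32_t ptype = RTE_PTYPE_L2_MAC |		/* outer L2 */
			 RTE_PTYPE_L3_IPV4 |		/* outer L3 */
			 RTE_PTYPE_L4_UDP |		/* outer L4 */
			 RTE_PTYPE_TUNNEL_VXLAN |	/* tunnel   */
			 RTE_PTYPE_INNER_L2_MAC |	/* inner L2 */
			 RTE_PTYPE_INNER_L3_IPV6 |	/* inner L3 */
			 RTE_PTYPE_INNER_L4_TCP;	/* inner L4 */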
Helin Zhang
2015-02-09 06:40:35 UTC
Permalink
There are only 6 bit flags in ol_flags for indicating packet types, which
is not enough to describe all the possible packet types hardware can
recognize. For example, i40e hardware can recognize more than 150 packet
types. The unified packet type is composed of L2, L3, L4, tunnel, inner L2,
inner L3 and inner L4 type fields, and can be stored in the mbuf field
'packet_type', which is enlarged from 16 bits to 32 bits in the mbuf
structure. Accordingly, the structure 'rte_kni_mbuf' needs to be modified
as well.

Signed-off-by: Helin Zhang <***@intel.com>
Signed-off-by: Cunming Liang <***@intel.com>
Signed-off-by: Jijiang Liu <***@intel.com>
---
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 4 +-
lib/librte_mbuf/rte_mbuf.h | 113 +++++++++++++++++++--
2 files changed, 108 insertions(+), 9 deletions(-)

v2 changes:
* Enlarged the packet_type field from 16 bits to 32 bits.
* Redefined the packet type sub-fields.
* Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.

diff --git a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
index 1e55c2d..bd1cc09 100644
--- a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
+++ b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
@@ -117,9 +117,9 @@ struct rte_kni_mbuf {
uint16_t data_off; /**< Start address of data in segment buffer. */
char pad1[4];
uint64_t ol_flags; /**< Offload features. */
- char pad2[2];
- uint16_t data_len; /**< Amount of data in segment buffer. */
+ char pad2[4];
uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
+ uint16_t data_len; /**< Amount of data in segment buffer. */

/* fields on second cache line */
char pad3[8] __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)));
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 16059c6..ee912d6 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -165,6 +165,96 @@ extern "C" {
/* Use final bit of flags to indicate a control mbuf */
#define CTRL_MBUF_FLAG (1ULL << 63) /**< Mbuf contains control data */

+/*
+ * 32 bits are divided into several fields to mark packet types. Note that
+ * each field is indexical.
+ * - Bit 3:0 is for L2 types.
+ * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
+ * - Bit 11:8 is for L4 or outer L4 (for tunneling case) types.
+ * - Bit 15:12 is for tunnel types.
+ * - Bit 19:16 is for inner L2 types.
+ * - Bit 23:20 is for inner L3 types.
+ * - Bit 27:24 is for inner L4 types.
+ * - Bit 31:28 is reserved.
+ *
+ * To be compatible with Vector PMD, RTE_PTYPE_L3_IPV4, RTE_PTYPE_L3_IPV4_EXT,
+ * RTE_PTYPE_L3_IPV6, RTE_PTYPE_L3_IPV6_EXT, RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP
+ * and RTE_PTYPE_L4_SCTP should be kept as below in a contiguous 7 bits.
+ *
+ * Note that L3 type values are selected for checking IPV4/IPV6 headers from a
+ * performance point of view. Reading annotations of RTE_ETH_IS_IPV4_HDR and
+ * RTE_ETH_IS_IPV6_HDR is needed for any future changes of L3 type values.
+ */
+#define RTE_PTYPE_UNKNOWN 0x00000000
+/* bit 3:0 for L2 types */
+#define RTE_PTYPE_L2_MAC 0x00000001
+#define RTE_PTYPE_L2_MAC_TIMESYNC 0x00000002
+#define RTE_PTYPE_L2_ARP 0x00000003
+#define RTE_PTYPE_L2_LLDP 0x00000004
+#define RTE_PTYPE_L2_MASK 0x0000000f
+/* bit 7:4 for L3 types */
+#define RTE_PTYPE_L3_IPV4 0x00000010
+#define RTE_PTYPE_L3_IPV4_EXT 0x00000030
+#define RTE_PTYPE_L3_IPV6 0x00000040
+#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x00000090
+#define RTE_PTYPE_L3_IPV6_EXT 0x000000c0
+#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x000000e0
+#define RTE_PTYPE_L3_MASK 0x000000f0
+/* bit 11:8 for L4 types */
+#define RTE_PTYPE_L4_TCP 0x00000100
+#define RTE_PTYPE_L4_UDP 0x00000200
+#define RTE_PTYPE_L4_FRAG 0x00000300
+#define RTE_PTYPE_L4_SCTP 0x00000400
+#define RTE_PTYPE_L4_ICMP 0x00000500
+#define RTE_PTYPE_L4_NONFRAG 0x00000600
+#define RTE_PTYPE_L4_MASK 0x00000f00
+/* bit 15:12 for tunnel types */
+#define RTE_PTYPE_TUNNEL_IP 0x00001000
+#define RTE_PTYPE_TUNNEL_GRE 0x00002000
+#define RTE_PTYPE_TUNNEL_VXLAN 0x00003000
+#define RTE_PTYPE_TUNNEL_NVGRE 0x00004000
+#define RTE_PTYPE_TUNNEL_GENEVE 0x00005000
+#define RTE_PTYPE_TUNNEL_GRENAT 0x00006000
+#define RTE_PTYPE_TUNNEL_MASK 0x0000f000
+/* bit 19:16 for inner L2 types */
+#define RTE_PTYPE_INNER_L2_MAC 0x00010000
+#define RTE_PTYPE_INNER_L2_MAC_VLAN 0x00020000
+#define RTE_PTYPE_INNER_L2_MASK 0x000f0000
+/* bit 23:20 for inner L3 types */
+#define RTE_PTYPE_INNER_L3_IPV4 0x00100000
+#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x00200000
+#define RTE_PTYPE_INNER_L3_IPV6 0x00300000
+#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x00400000
+#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x00500000
+#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x00600000
+#define RTE_PTYPE_INNER_INNER_L3_MASK 0x00f00000
+/* bit 27:24 for inner L4 types */
+#define RTE_PTYPE_INNER_L4_TCP 0x01000000
+#define RTE_PTYPE_INNER_L4_UDP 0x02000000
+#define RTE_PTYPE_INNER_L4_FRAG 0x03000000
+#define RTE_PTYPE_INNER_L4_SCTP 0x04000000
+#define RTE_PTYPE_INNER_L4_ICMP 0x05000000
+#define RTE_PTYPE_INNER_L4_NONFRAG 0x06000000
+#define RTE_PTYPE_INNER_L4_MASK 0x0f000000
+/* bit 31:28 reserved */
+
+/**
+ * Check if the (outer) L3 header is IPv4. To avoid comparing IPv4 types one by
+ * one, bit 4 is selected to be used for IPv4 only. Then checking bit 4 can
+ * determine if it is an IPv4 packet.
+ */
+#define RTE_ETH_IS_IPV4_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV4)
+
+/**
+ * Check if the (outer) L3 header is IPv6. To avoid comparing IPv6 types one by
+ * one, bit 6 is selected to be used for IPv6 only. Then checking bit 6 can
+ * determine if it is an IPv6 packet.
+ */
+#define RTE_ETH_IS_IPV6_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV6)
+
+/* Check if it is a tunneling packet */
+#define RTE_ETH_IS_TUNNEL_PKT(ptype) ((ptype) & RTE_PTYPE_TUNNEL_MASK)
+
/**
* Get the name of a RX offload flag
*
@@ -232,17 +322,26 @@ struct rte_mbuf {
/* remaining bytes are set on RX when pulling packet from descriptor */
MARKER rx_descriptor_fields1;

- /**
- * The packet type, which is used to indicate ordinary packet and also
- * tunneled packet format, i.e. each number is represented a type of
- * packet.
+ /*
+ * The packet type, which is the combination of outer/inner L2, L3, L4
+ * and tunnel types.
*/
- uint16_t packet_type;
+ union {
+ uint32_t packet_type; /**< L2/L3/L4 and tunnel information. */
+ struct {
+ uint32_t l2_type:4; /**< (Outer) L2 type. */
+ uint32_t l3_type:4; /**< (Outer) L3 type. */
+ uint32_t l4_type:4; /**< (Outer) L4 type. */
+ uint32_t tun_type:4; /**< Tunnel type. */
+ uint32_t inner_l2_type:4; /**< Inner L2 type. */
+ uint32_t inner_l3_type:4; /**< Inner L3 type. */
+ uint32_t inner_l4_type:4; /**< Inner L4 type. */
+ };
+ };

- uint16_t data_len; /**< Amount of data in segment buffer. */
uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
+ uint16_t data_len; /**< Amount of data in segment buffer. */
uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
- uint16_t reserved;
union {
uint32_t rss; /**< RSS hash result if RSS enabled */
struct {
--
1.9.3
Bruce Richardson
2015-02-09 10:27:24 UTC
Post by Helin Zhang
As there are only 6 bit flags in ol_flags for indicating packet types,
which is not enough to describe all the possible packet types hardware
can recognize. For example, i40e hardware can recognize more than 150
packet types. Unified packet type is composed of tunnel type, L3 type,
L4 type and inner L3 type fields, and can be stored in mbuf field of
'packet_type' which is modified from 16 bits to 32 bits in mbuf structure.
Accordingly, the structure of 'rte_kni_mbuf' needs to be modifed as well.
---
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 4 +-
lib/librte_mbuf/rte_mbuf.h | 113 +++++++++++++++++++--
2 files changed, 108 insertions(+), 9 deletions(-)
* Enlarged the packet_type field from 16 bits to 32 bits.
* Redefined the packet type sub-fields.
* Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
Since these changes to the mbuf will break the operation of the vector driver,
that vector driver needs to be taken into account here.

Some suggestions/options:
1. Temporarily disable the VPMD at compile time or at run time as part of this
patch, and put the vector changes as the next patch (re-enabling the driver too)
2. Put the minimum changes for the new mbuf layout into this patch. It will
make this patch a little longer, but may still be doable as it's only a couple
of fields changing, not the whole structure.

/Bruce
Zhang, Helin
2015-02-10 00:53:52 UTC
Hi Bruce

Fortunately Steve is the author of a sub-patch for the vector PMD in this
patch set, which means the VPMD has already been taken into account.
Everything works with the vector PMD, and the performance results are
mentioned in that patch. All of the mbuf changes are covered.

Regards,
Helin
Bruce Richardson
2015-02-10 10:12:29 UTC
Post by Zhang, Helin
Hi Bruce
Fortunately Steve is the author of a sub-patch for the vector PMD in this
patch set, which means the VPMD has already been taken into account.
Everything works with the vector PMD, and the performance results are
mentioned in that patch. All of the mbuf changes are covered.
Regards,
Helin
I see that, Helin, but between applying this patch and applying the subsequent
patch for the vector PMD, the DPDK vector PMD code is broken, which would cause
problems for anyone doing a git bisect. Hence my suggestion that changes to take
account of the vpmd need to go in this patch (not just in the patch set) to
avoid having broken code following this commit.

/Bruce
Helin Zhang
2015-02-09 06:40:36 UTC
To unify packet types among all PMDs, the packet type bit masks in
ol_flags are replaced by the unified packet type.

Signed-off-by: Helin Zhang <***@intel.com>
---
lib/librte_pmd_e1000/igb_rxtx.c | 98 ++++++++++++++++++++++++++++++++++-------
1 file changed, 83 insertions(+), 15 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
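
The translation added below is a plain table lookup keyed on the RX
descriptor's pkt_info word. A condensed sketch of the pattern (table
contents elided; E1000_RXDADV_PKTTYPE_ETQF and unlikely() are as already
used in igb_rxtx.c):

/* Packets matched by an EtherType filter (ETQF) carry no usable packet
 * type information, so they map to RTE_PTYPE_UNKNOWN; otherwise bits
 * 10:4 of pkt_info index a 128-entry table of RTE_PTYPE_* combinations. */
static inline uint32_t
rxd_pkt_info_to_ptype_sketch(uint16_t pkt_info, const uint32_t table[128])
{
	if (unlikely(pkt_info & E1000_RXDADV_PKTTYPE_ETQF))
		return RTE_PTYPE_UNKNOWN;
	return table[(pkt_info >> 4) & 0x7F];
}

The index itself appears structured, judging from the constants below: bit 0
marks IPv4, bit 1 IPv4 with extensions, bits 2 and 3 the IPv6 equivalents,
and bits 4 to 6 select TCP, UDP or SCTP.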

diff --git a/lib/librte_pmd_e1000/igb_rxtx.c b/lib/librte_pmd_e1000/igb_rxtx.c
index 5c394a9..12a68f4 100644
--- a/lib/librte_pmd_e1000/igb_rxtx.c
+++ b/lib/librte_pmd_e1000/igb_rxtx.c
@@ -602,17 +602,85 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* RX functions
*
**********************************************************************/
+#define IGB_PACKET_TYPE_IPV4 0X01
+#define IGB_PACKET_TYPE_IPV4_TCP 0X11
+#define IGB_PACKET_TYPE_IPV4_UDP 0X21
+#define IGB_PACKET_TYPE_IPV4_SCTP 0X41
+#define IGB_PACKET_TYPE_IPV4_EXT 0X03
+#define IGB_PACKET_TYPE_IPV4_EXT_SCTP 0X43
+#define IGB_PACKET_TYPE_IPV6 0X04
+#define IGB_PACKET_TYPE_IPV6_TCP 0X14
+#define IGB_PACKET_TYPE_IPV6_UDP 0X24
+#define IGB_PACKET_TYPE_IPV6_EXT 0X0C
+#define IGB_PACKET_TYPE_IPV6_EXT_TCP 0X1C
+#define IGB_PACKET_TYPE_IPV6_EXT_UDP 0X2C
+#define IGB_PACKET_TYPE_IPV4_IPV6 0X05
+#define IGB_PACKET_TYPE_IPV4_IPV6_TCP 0X15
+#define IGB_PACKET_TYPE_IPV4_IPV6_UDP 0X25
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT 0X0D
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT_TCP 0X1D
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT_UDP 0X2D
+#define IGB_PACKET_TYPE_MAX 0X80
+#define IGB_PACKET_TYPE_MASK 0X7F
+#define IGB_PACKET_TYPE_SHIFT 0X04
+static inline uint32_t
+igb_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
+{
+ static const uint32_t
+ ptype_table[IGB_PACKET_TYPE_MAX] __rte_cache_aligned = {
+ [IGB_PACKET_TYPE_IPV4] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4,
+ [IGB_PACKET_TYPE_IPV4_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [IGB_PACKET_TYPE_IPV6] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6,
+ [IGB_PACKET_TYPE_IPV4_IPV6] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6,
+ [IGB_PACKET_TYPE_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT,
+ [IGB_PACKET_TYPE_IPV4_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
+ [IGB_PACKET_TYPE_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
+ [IGB_PACKET_TYPE_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
+ [IGB_PACKET_TYPE_IPV4_EXT_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_SCTP,
+ };
+ if (unlikely(pkt_info & E1000_RXDADV_PKTTYPE_ETQF))
+ return RTE_PTYPE_UNKNOWN;
+
+ pkt_info = (pkt_info >> IGB_PACKET_TYPE_SHIFT) & IGB_PACKET_TYPE_MASK;
+
+ return ptype_table[pkt_info];
+}
+
static inline uint64_t
rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
{
- uint64_t pkt_flags;
-
- static uint64_t ip_pkt_types_map[16] = {
- 0, PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, PKT_RX_IPV4_HDR_EXT,
- PKT_RX_IPV6_HDR, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- };
+ uint64_t pkt_flags = ((hl_tp_rs & 0x0F) == 0) ? 0 : PKT_RX_RSS_HASH;

#if defined(RTE_LIBRTE_IEEE1588)
static uint32_t ip_pkt_etqf_map[8] = {
@@ -620,14 +688,10 @@ rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
0, 0, 0, 0,
};

- pkt_flags = (hl_tp_rs & E1000_RXDADV_PKTTYPE_ETQF) ?
- ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07] :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
-#else
- pkt_flags = (hl_tp_rs & E1000_RXDADV_PKTTYPE_ETQF) ? 0 :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
+ pkt_flags |= ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07];
#endif
- return pkt_flags | (((hl_tp_rs & 0x0F) == 0) ? 0 : PKT_RX_RSS_HASH);
+
+ return pkt_flags;
}

static inline uint64_t
@@ -802,6 +866,8 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
rxm->ol_flags = pkt_flags;
+ rxm->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.lower.
+ lo_dword.hs_rss.pkt_info);

/*
* Store the mbuf address into the next entry of the array
@@ -1036,6 +1102,8 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
first_seg->ol_flags = pkt_flags;
+ first_seg->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.
+ lower.lo_dword.hs_rss.pkt_info);

/* Prefetch data of first segment, if configured to do so. */
rte_packet_prefetch((char *)first_seg->buf_addr +
--
1.9.3
Helin Zhang
2015-02-09 06:40:37 UTC
To unify packet types among all PMDs, the packet type bit masks in
ol_flags are replaced by the unified packet type.
Note that around a 2.5% performance drop (64B packets) was observed when
doing 4-port (1 port per 82599 card) IO forwarding on the same SNB core.

Signed-off-by: Helin Zhang <***@intel.com>
---
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 146 +++++++++++++++++++++++++++++---------
1 file changed, 112 insertions(+), 34 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
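
Since packet type bits no longer live in ol_flags after this change, an
application dispatching on the L4 header would look roughly like the sketch
below (not part of the patch; the handle_*() functions are hypothetical):

#include <rte_mbuf.h>

static inline void
dispatch_l4(struct rte_mbuf *m)
{
	/* RTE_PTYPE_L4_MASK selects bits 11:8 of the unified type. */
	switch (m->packet_type & RTE_PTYPE_L4_MASK) {
	case RTE_PTYPE_L4_TCP:
		handle_tcp(m);
		break;
	case RTE_PTYPE_L4_UDP:
		handle_udp(m);
		break;
	default:
		handle_other(m);
		break;
	}
}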

diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index e6766b3..a2e4234 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -866,40 +866,107 @@ end_of_tx:
* RX functions
*
**********************************************************************/
-static inline uint64_t
-rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
+#define IXGBE_PACKET_TYPE_IPV4 0X01
+#define IXGBE_PACKET_TYPE_IPV4_TCP 0X11
+#define IXGBE_PACKET_TYPE_IPV4_UDP 0X21
+#define IXGBE_PACKET_TYPE_IPV4_SCTP 0X41
+#define IXGBE_PACKET_TYPE_IPV4_EXT 0X03
+#define IXGBE_PACKET_TYPE_IPV4_EXT_SCTP 0X43
+#define IXGBE_PACKET_TYPE_IPV6 0X04
+#define IXGBE_PACKET_TYPE_IPV6_TCP 0X14
+#define IXGBE_PACKET_TYPE_IPV6_UDP 0X24
+#define IXGBE_PACKET_TYPE_IPV6_EXT 0X0C
+#define IXGBE_PACKET_TYPE_IPV6_EXT_TCP 0X1C
+#define IXGBE_PACKET_TYPE_IPV6_EXT_UDP 0X2C
+#define IXGBE_PACKET_TYPE_IPV4_IPV6 0X05
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_TCP 0X15
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_UDP 0X25
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT 0X0D
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_TCP 0X1D
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_UDP 0X2D
+#define IXGBE_PACKET_TYPE_MAX 0X80
+#define IXGBE_PACKET_TYPE_MASK 0X7F
+#define IXGBE_PACKET_TYPE_SHIFT 0X04
+static inline uint32_t
+ixgbe_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
{
- uint64_t pkt_flags;
-
- static uint64_t ip_pkt_types_map[16] = {
- 0, PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, PKT_RX_IPV4_HDR_EXT,
- PKT_RX_IPV6_HDR, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
+ static const uint32_t
+ ptype_table[IXGBE_PACKET_TYPE_MAX] __rte_cache_aligned = {
+ [IXGBE_PACKET_TYPE_IPV4] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4,
+ [IXGBE_PACKET_TYPE_IPV4_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [IXGBE_PACKET_TYPE_IPV6] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6,
+ [IXGBE_PACKET_TYPE_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT,
+ [IXGBE_PACKET_TYPE_IPV4_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
+ [IXGBE_PACKET_TYPE_IPV4_EXT_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_SCTP,
};
+ if (unlikely(pkt_info & IXGBE_RXDADV_PKTTYPE_ETQF))
+ return RTE_PTYPE_UNKNOWN;

- static uint64_t ip_rss_types_map[16] = {
+ pkt_info = (pkt_info >> IXGBE_PACKET_TYPE_SHIFT) &
+ IXGBE_PACKET_TYPE_MASK;
+
+ return ptype_table[pkt_info];
+}
+
+static inline uint64_t
+ixgbe_rxd_pkt_info_to_pkt_flags(uint16_t pkt_info)
+{
+ static uint64_t ip_rss_types_map[16] __rte_cache_aligned = {
0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
PKT_RX_RSS_HASH, 0, 0, 0,
0, 0, 0, PKT_RX_FDIR,
};
-
#ifdef RTE_LIBRTE_IEEE1588
static uint64_t ip_pkt_etqf_map[8] = {
0, 0, 0, PKT_RX_IEEE1588_PTP,
0, 0, 0, 0,
};

- pkt_flags = (hl_tp_rs & IXGBE_RXDADV_PKTTYPE_ETQF) ?
- ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07] :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
+ if (likely(pkt_info & IXGBE_RXDADV_PKTTYPE_ETQF))
+ return ip_pkt_etqf_map[(pkt_info >> 4) & 0X07] |
+ ip_rss_types_map[pkt_info & 0xF];
+ else
+ return ip_rss_types_map[pkt_info & 0xF];
#else
- pkt_flags = (hl_tp_rs & IXGBE_RXDADV_PKTTYPE_ETQF) ? 0 :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
-
+ return ip_rss_types_map[pkt_info & 0xF];
#endif
- return pkt_flags | ip_rss_types_map[hl_tp_rs & 0xF];
}

static inline uint64_t
@@ -956,7 +1023,9 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
struct rte_mbuf *mb;
uint16_t pkt_len;
uint64_t pkt_flags;
- int s[LOOK_AHEAD], nb_dd;
+ int nb_dd;
+ uint32_t s[LOOK_AHEAD];
+ uint16_t pkt_info[LOOK_AHEAD];
int i, j, nb_rx = 0;


@@ -979,6 +1048,9 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
for (j = LOOK_AHEAD-1; j >= 0; --j)
s[j] = rxdp[j].wb.upper.status_error;

+ for (j = LOOK_AHEAD-1; j >= 0; --j)
+ pkt_info[j] = rxdp[j].wb.lower.lo_dword.hs_rss.pkt_info;
+
/* Compute how many status bits were set */
nb_dd = 0;
for (j = 0; j < LOOK_AHEAD; ++j)
@@ -996,12 +1068,13 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);

/* convert descriptor fields to rte mbuf flags */
- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(
- rxdp[j].wb.lower.lo_dword.data);
- /* reuse status field from scan list */
- pkt_flags |= rx_desc_status_to_pkt_flags(s[j]);
+ pkt_flags = rx_desc_status_to_pkt_flags(s[j]);
pkt_flags |= rx_desc_error_to_pkt_flags(s[j]);
+ pkt_flags |=
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info[j]);
mb->ol_flags = pkt_flags;
+ mb->packet_type =
+ ixgbe_rxd_pkt_info_to_pkt_type(pkt_info[j]);

if (likely(pkt_flags & PKT_RX_RSS_HASH))
mb->hash.rss = rxdp[j].wb.lower.hi_dword.rss;
@@ -1198,7 +1271,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
union ixgbe_adv_rx_desc rxd;
uint64_t dma_addr;
uint32_t staterr;
- uint32_t hlen_type_rss;
+ uint32_t pkt_info;
uint16_t pkt_len;
uint16_t rx_id;
uint16_t nb_rx;
@@ -1316,14 +1389,17 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->data_len = pkt_len;
rxm->port = rxq->port_id;

- hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
+ pkt_info = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.hs_rss.
+ pkt_info);
/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);

- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
- pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags = rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags |
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
rxm->ol_flags = pkt_flags;
+ rxm->packet_type = ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);

if (likely(pkt_flags & PKT_RX_RSS_HASH))
rxm->hash.rss = rxd.wb.lower.hi_dword.rss;
@@ -1382,7 +1458,7 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
union ixgbe_adv_rx_desc rxd;
uint64_t dma; /* Physical address of mbuf data buffer */
uint32_t staterr;
- uint32_t hlen_type_rss;
+ uint16_t pkt_info;
uint16_t rx_id;
uint16_t nb_rx;
uint16_t nb_hold;
@@ -1561,13 +1637,15 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
* set in the pkt_flags field.
*/
first_seg->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
- hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
- pkt_flags = (pkt_flags |
- rx_desc_status_to_pkt_flags(staterr));
- pkt_flags = (pkt_flags |
- rx_desc_error_to_pkt_flags(staterr));
+ pkt_info = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.hs_rss.
+ pkt_info);
+ pkt_flags = rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags |
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
first_seg->ol_flags = pkt_flags;
+ first_seg->packet_type =
+ ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);

if (likely(pkt_flags & PKT_RX_RSS_HASH))
first_seg->hash.rss = rxd.wb.lower.hi_dword.rss;
--
1.9.3
Helin Zhang
2015-02-09 06:40:38 UTC
To unify packet types among all PMDs, the packet type bit masks in
ol_flags are replaced by the unified packet type.
Note that around a 2% performance drop (64B packets) was observed when
doing 4-port (1 port per 82599 card) IO forwarding on the same SNB core.

Signed-off-by: Cunming Liang <***@intel.com>
Signed-off-by: Helin Zhang <***@intel.com>
---
lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c | 49 +++++++++++++++++++----------------
1 file changed, 26 insertions(+), 23 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
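
The vector path fills rx_descriptor_fields1 with a single byte shuffle per
descriptor. For readers unfamiliar with the intrinsic, here is a scalar
model (for reference only, not part of the patch) of what
_mm_shuffle_epi8() computes with the shuf_msk below:

#include <stdint.h>

/* out[i] is zero when the mask byte has its high bit set (the 0xFF
 * entries); otherwise it is the input byte selected by the low 4 bits
 * of the mask byte. */
static void
pshufb_model(uint8_t out[16], const uint8_t in[16], const uint8_t msk[16])
{
	int i;

	for (i = 0; i < 16; i++)
		out[i] = (msk[i] & 0x80) ? 0 : in[msk[i] & 0x0f];
}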

diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
index b54cb19..357eb1d 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
@@ -134,44 +134,35 @@ ixgbe_rxq_rearm(struct igb_rx_queue *rxq)
*/
#ifdef RTE_IXGBE_RX_OLFLAGS_ENABLE

-#define OLFLAGS_MASK ((uint16_t)(PKT_RX_VLAN_PKT | PKT_RX_IPV4_HDR |\
- PKT_RX_IPV4_HDR_EXT | PKT_RX_IPV6_HDR |\
- PKT_RX_IPV6_HDR_EXT))
-#define OLFLAGS_MASK_V (((uint64_t)OLFLAGS_MASK << 48) | \
- ((uint64_t)OLFLAGS_MASK << 32) | \
- ((uint64_t)OLFLAGS_MASK << 16) | \
- ((uint64_t)OLFLAGS_MASK))
-#define PTYPE_SHIFT (1)
+#define OLFLAGS_MASK_V (((uint64_t)PKT_RX_VLAN_PKT << 48) | \
+ ((uint64_t)PKT_RX_VLAN_PKT << 32) | \
+ ((uint64_t)PKT_RX_VLAN_PKT << 16) | \
+ ((uint64_t)PKT_RX_VLAN_PKT))
#define VTAG_SHIFT (3)

static inline void
desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
{
- __m128i ptype0, ptype1, vtag0, vtag1;
+ __m128i vtag0, vtag1;
union {
uint16_t e[4];
uint64_t dword;
} vol;

- ptype0 = _mm_unpacklo_epi16(descs[0], descs[1]);
- ptype1 = _mm_unpacklo_epi16(descs[2], descs[3]);
vtag0 = _mm_unpackhi_epi16(descs[0], descs[1]);
vtag1 = _mm_unpackhi_epi16(descs[2], descs[3]);

- ptype1 = _mm_unpacklo_epi32(ptype0, ptype1);
vtag1 = _mm_unpacklo_epi32(vtag0, vtag1);
-
- ptype1 = _mm_slli_epi16(ptype1, PTYPE_SHIFT);
vtag1 = _mm_srli_epi16(vtag1, VTAG_SHIFT);

- ptype1 = _mm_or_si128(ptype1, vtag1);
- vol.dword = _mm_cvtsi128_si64(ptype1) & OLFLAGS_MASK_V;
+ vol.dword = _mm_cvtsi128_si64(vtag1) & OLFLAGS_MASK_V;

rx_pkts[0]->ol_flags = vol.e[0];
rx_pkts[1]->ol_flags = vol.e[1];
rx_pkts[2]->ol_flags = vol.e[2];
rx_pkts[3]->ol_flags = vol.e[3];
}
+
#else
#define desc_to_olflags_v(desc, rx_pkts) do {} while (0)
#endif
@@ -197,13 +188,15 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
uint64_t var;
__m128i shuf_msk;
__m128i crc_adjust = _mm_set_epi16(
- 0, 0, 0, 0, /* ignore non-length fields */
+ 0, 0, 0, /* ignore non-length fields */
+ -rxq->crc_len, /* sub crc on data_len */
0, /* ignore high-16bits of pkt_len */
-rxq->crc_len, /* sub crc on pkt_len */
- -rxq->crc_len, /* sub crc on data_len */
- 0 /* ignore pkt_type field */
+ 0, 0 /* ignore pkt_type field */
);
__m128i dd_check, eop_check;
+ __m128i desc_mask = _mm_set_epi32(0xFFFFFFFF, 0xFFFFFFFF,
+ 0xFFFFFFFF, 0xFFFF07F0);

if (unlikely(nb_pkts < RTE_IXGBE_VPMD_RX_BURST))
return 0;
@@ -234,12 +227,13 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* mask to shuffle from desc. to mbuf */
shuf_msk = _mm_set_epi8(
7, 6, 5, 4, /* octet 4~7, 32bits rss */
- 0xFF, 0xFF, /* skip high 16 bits vlan_macip, zero out */
15, 14, /* octet 14~15, low 16 bits vlan_macip */
+ 13, 12, /* octet 12~13, 16 bits data_len */
0xFF, 0xFF, /* skip high 16 bits pkt_len, zero out */
13, 12, /* octet 12~13, low 16 bits pkt_len */
- 13, 12, /* octet 12~13, 16 bits data_len */
- 0xFF, 0xFF /* skip pkt_type field */
+ 0xFF, 0xFF, /* skip high 16 bits pkt_type */
+ 1, /* octet 1, 8 bits pkt_type field */
+ 0 /* octet 0, 4 bits offset 4 pkt_type field */
);

/* Cache is empty -> need to scan the buffer rings, but first move
@@ -248,6 +242,7 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,

/*
* A. load 4 packet in one loop
+ * [A*. mask out 4 unused dirty field in desc]
* B. copy 4 mbuf point from swring to rx_pkts
* C. calc the number of DD bits among the 4 packets
* [C*. extract the end-of-packet bit, if requested]
@@ -289,6 +284,14 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* B.2 copy 2 mbuf point into rx_pkts */
_mm_storeu_si128((__m128i *)&rx_pkts[pos+2], mbp2);

+ /* A* mask out 0~3 bits RSS type */
+ descs[3] = _mm_and_si128(descs[3], desc_mask);
+ descs[2] = _mm_and_si128(descs[2], desc_mask);
+
+ /* A* mask out 0~3 bits RSS type */
+ descs[1] = _mm_and_si128(descs[1], desc_mask);
+ descs[0] = _mm_and_si128(descs[0], desc_mask);
+
/* avoid compiler reorder optimization */
rte_compiler_barrier();

@@ -301,7 +304,7 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* C.1 4=>2 filter staterr info only */
sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);

- /* set ol_flags with packet type and vlan tag */
+ /* set ol_flags with vlan packet type */
desc_to_olflags_v(descs, &rx_pkts[pos]);

/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
--
1.9.3
Helin Zhang
2015-02-09 06:40:39 UTC
To unify packet types among all PMDs, the packet type bit masks in
ol_flags are replaced by the unified packet type.

Signed-off-by: Helin Zhang <***@intel.com>
---
lib/librte_pmd_i40e/i40e_rxtx.c | 786 ++++++++++++++++++++++++++--------------
1 file changed, 512 insertions(+), 274 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
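
With the full mapping below, tunneled packets expose both their outer and
inner headers through a single 32-bit value. A minimal sketch (not part of
the patch) of filtering on the inner L4 type, using only the RTE_PTYPE_*
masks from patch 01:

#include <rte_mbuf.h>

/* Hypothetical check: a GRE/Teredo/VXLAN (GRENAT) packet carrying inner
 * TCP, e.g. hardware ptypes 48, 63 and 78 below and their IPv6-outer
 * counterparts. */
static inline int
is_grenat_inner_tcp(const struct rte_mbuf *m)
{
	uint32_t pt = m->packet_type;

	return (pt & RTE_PTYPE_TUNNEL_MASK) == RTE_PTYPE_TUNNEL_GRENAT &&
	       (pt & RTE_PTYPE_INNER_L4_MASK) == RTE_PTYPE_INNER_L4_TCP;
}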

diff --git a/lib/librte_pmd_i40e/i40e_rxtx.c b/lib/librte_pmd_i40e/i40e_rxtx.c
index 2beae3c..bcb49f0 100644
--- a/lib/librte_pmd_i40e/i40e_rxtx.c
+++ b/lib/librte_pmd_i40e/i40e_rxtx.c
@@ -146,272 +146,511 @@ i40e_rxd_error_to_pkt_flags(uint64_t qword)
return flags;
}

-/* Translate pkt types to pkt flags */
-static inline uint64_t
-i40e_rxd_ptype_to_pkt_flags(uint64_t qword)
+/* For the meaning of each value, refer to the hardware datasheet. */
+static inline uint32_t
+i40e_rxd_pkt_type_mapping(uint8_t ptype)
{
- uint8_t ptype = (uint8_t)((qword & I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
- static const uint64_t ip_ptype_map[I40E_MAX_PKT_TYPE] = {
- 0, /* PTYPE 0 */
- 0, /* PTYPE 1 */
- 0, /* PTYPE 2 */
- 0, /* PTYPE 3 */
- 0, /* PTYPE 4 */
- 0, /* PTYPE 5 */
- 0, /* PTYPE 6 */
- 0, /* PTYPE 7 */
- 0, /* PTYPE 8 */
- 0, /* PTYPE 9 */
- 0, /* PTYPE 10 */
- 0, /* PTYPE 11 */
- 0, /* PTYPE 12 */
- 0, /* PTYPE 13 */
- 0, /* PTYPE 14 */
- 0, /* PTYPE 15 */
- 0, /* PTYPE 16 */
- 0, /* PTYPE 17 */
- 0, /* PTYPE 18 */
- 0, /* PTYPE 19 */
- 0, /* PTYPE 20 */
- 0, /* PTYPE 21 */
- PKT_RX_IPV4_HDR, /* PTYPE 22 */
- PKT_RX_IPV4_HDR, /* PTYPE 23 */
- PKT_RX_IPV4_HDR, /* PTYPE 24 */
- 0, /* PTYPE 25 */
- PKT_RX_IPV4_HDR, /* PTYPE 26 */
- PKT_RX_IPV4_HDR, /* PTYPE 27 */
- PKT_RX_IPV4_HDR, /* PTYPE 28 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 29 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 30 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 31 */
- 0, /* PTYPE 32 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 33 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 34 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 35 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 36 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 37 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 38 */
- 0, /* PTYPE 39 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 40 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 41 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 42 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 43 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 44 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 45 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 46 */
- 0, /* PTYPE 47 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 48 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 49 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 50 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 51 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 52 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 53 */
- 0, /* PTYPE 54 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 55 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 56 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 57 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 58 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 59 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 60 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 61 */
- 0, /* PTYPE 62 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 63 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 64 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 65 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 66 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 67 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 68 */
- 0, /* PTYPE 69 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 70 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 71 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 72 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 73 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 74 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 75 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 76 */
- 0, /* PTYPE 77 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 78 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 79 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 80 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 81 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 82 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 83 */
- 0, /* PTYPE 84 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 85 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 86 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 87 */
- PKT_RX_IPV6_HDR, /* PTYPE 88 */
- PKT_RX_IPV6_HDR, /* PTYPE 89 */
- PKT_RX_IPV6_HDR, /* PTYPE 90 */
- 0, /* PTYPE 91 */
- PKT_RX_IPV6_HDR, /* PTYPE 92 */
- PKT_RX_IPV6_HDR, /* PTYPE 93 */
- PKT_RX_IPV6_HDR, /* PTYPE 94 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 95 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 96 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 97 */
- 0, /* PTYPE 98 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 99 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 100 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 101 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 102 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 103 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 104 */
- 0, /* PTYPE 105 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 106 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 107 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 108 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 109 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 110 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 111 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 112 */
- 0, /* PTYPE 113 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 114 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 115 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 116 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 117 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 118 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 119 */
- 0, /* PTYPE 120 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 121 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 122 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 123 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 124 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 125 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 126 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 127 */
- 0, /* PTYPE 128 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 129 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 130 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 131 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 132 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 133 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 134 */
- 0, /* PTYPE 135 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 136 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 137 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 138 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 139 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 140 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 141 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 142 */
- 0, /* PTYPE 143 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 144 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 145 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 146 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 147 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 148 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 149 */
- 0, /* PTYPE 150 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 151 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 152 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 153 */
- 0, /* PTYPE 154 */
- 0, /* PTYPE 155 */
- 0, /* PTYPE 156 */
- 0, /* PTYPE 157 */
- 0, /* PTYPE 158 */
- 0, /* PTYPE 159 */
- 0, /* PTYPE 160 */
- 0, /* PTYPE 161 */
- 0, /* PTYPE 162 */
- 0, /* PTYPE 163 */
- 0, /* PTYPE 164 */
- 0, /* PTYPE 165 */
- 0, /* PTYPE 166 */
- 0, /* PTYPE 167 */
- 0, /* PTYPE 168 */
- 0, /* PTYPE 169 */
- 0, /* PTYPE 170 */
- 0, /* PTYPE 171 */
- 0, /* PTYPE 172 */
- 0, /* PTYPE 173 */
- 0, /* PTYPE 174 */
- 0, /* PTYPE 175 */
- 0, /* PTYPE 176 */
- 0, /* PTYPE 177 */
- 0, /* PTYPE 178 */
- 0, /* PTYPE 179 */
- 0, /* PTYPE 180 */
- 0, /* PTYPE 181 */
- 0, /* PTYPE 182 */
- 0, /* PTYPE 183 */
- 0, /* PTYPE 184 */
- 0, /* PTYPE 185 */
- 0, /* PTYPE 186 */
- 0, /* PTYPE 187 */
- 0, /* PTYPE 188 */
- 0, /* PTYPE 189 */
- 0, /* PTYPE 190 */
- 0, /* PTYPE 191 */
- 0, /* PTYPE 192 */
- 0, /* PTYPE 193 */
- 0, /* PTYPE 194 */
- 0, /* PTYPE 195 */
- 0, /* PTYPE 196 */
- 0, /* PTYPE 197 */
- 0, /* PTYPE 198 */
- 0, /* PTYPE 199 */
- 0, /* PTYPE 200 */
- 0, /* PTYPE 201 */
- 0, /* PTYPE 202 */
- 0, /* PTYPE 203 */
- 0, /* PTYPE 204 */
- 0, /* PTYPE 205 */
- 0, /* PTYPE 206 */
- 0, /* PTYPE 207 */
- 0, /* PTYPE 208 */
- 0, /* PTYPE 209 */
- 0, /* PTYPE 210 */
- 0, /* PTYPE 211 */
- 0, /* PTYPE 212 */
- 0, /* PTYPE 213 */
- 0, /* PTYPE 214 */
- 0, /* PTYPE 215 */
- 0, /* PTYPE 216 */
- 0, /* PTYPE 217 */
- 0, /* PTYPE 218 */
- 0, /* PTYPE 219 */
- 0, /* PTYPE 220 */
- 0, /* PTYPE 221 */
- 0, /* PTYPE 222 */
- 0, /* PTYPE 223 */
- 0, /* PTYPE 224 */
- 0, /* PTYPE 225 */
- 0, /* PTYPE 226 */
- 0, /* PTYPE 227 */
- 0, /* PTYPE 228 */
- 0, /* PTYPE 229 */
- 0, /* PTYPE 230 */
- 0, /* PTYPE 231 */
- 0, /* PTYPE 232 */
- 0, /* PTYPE 233 */
- 0, /* PTYPE 234 */
- 0, /* PTYPE 235 */
- 0, /* PTYPE 236 */
- 0, /* PTYPE 237 */
- 0, /* PTYPE 238 */
- 0, /* PTYPE 239 */
- 0, /* PTYPE 240 */
- 0, /* PTYPE 241 */
- 0, /* PTYPE 242 */
- 0, /* PTYPE 243 */
- 0, /* PTYPE 244 */
- 0, /* PTYPE 245 */
- 0, /* PTYPE 246 */
- 0, /* PTYPE 247 */
- 0, /* PTYPE 248 */
- 0, /* PTYPE 249 */
- 0, /* PTYPE 250 */
- 0, /* PTYPE 251 */
- 0, /* PTYPE 252 */
- 0, /* PTYPE 253 */
- 0, /* PTYPE 254 */
- 0, /* PTYPE 255 */
+ static const uint32_t ptype_table[UINT8_MAX + 1] __rte_cache_aligned = {
+ /* L2 types */
+ /* [0] reserved */
+ [1] = RTE_PTYPE_L2_MAC,
+ [2] = RTE_PTYPE_L2_MAC_TIMESYNC,
+ /* [3] - [5] reserved */
+ [6] = RTE_PTYPE_L2_LLDP,
+ /* [7] - [10] reserved */
+ [11] = RTE_PTYPE_L2_ARP,
+ /* [12] - [21] reserved */
+
+ /* Non tunneled IPv4 */
+ [22] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [23] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [24] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [25] reserved */
+ [26] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [27] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [28] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv4 --> IPv4 */
+ [29] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [30] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [31] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [32] reserved */
+ [33] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [34] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [35] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> IPv6 */
+ [36] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [37] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [38] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [39] reserved */
+ [40] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [41] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [42] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN */
+ [43] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> IPv4 */
+ [44] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [45] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [46] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [47] reserved */
+ [48] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [49] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [50] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> IPv6 */
+ [51] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [52] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [53] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [54] reserved */
+ [55] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [56] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [57] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC */
+ [58] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+ [59] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [60] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [61] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [62] reserved */
+ [63] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [64] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [65] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+ [66] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [67] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [68] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [69] reserved */
+ [70] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [71] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [72] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+ [73] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+ [74] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [75] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [76] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [77] reserved */
+ [78] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [79] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [80] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+ [81] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [82] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [83] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [84] reserved */
+ [85] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [86] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [87] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* Non tunneled IPv6 */
+ [88] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [89] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [90] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [91] reserved */
+ [92] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [93] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [94] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv6 --> IPv4 */
+ [95] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [96] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [97] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [98] reserved */
+ [99] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [100] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [101] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> IPv6 */
+ [102] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [103] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [104] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [105] reserved */
+ [106] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [107] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [108] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN */
+ [109] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> IPv4 */
+ [110] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [111] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [112] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [113] reserved */
+ [114] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [115] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [116] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> IPv6 */
+ [117] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [118] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [119] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [120] reserved */
+ [121] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [122] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [123] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC */
+ [124] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+ [125] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [126] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [127] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [128] reserved */
+ [129] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [130] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [131] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+ [132] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [133] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [134] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [135] reserved */
+ [136] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [137] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [138] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+ [139] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+ [140] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [141] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [142] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [143] reserved */
+ [144] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [145] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [146] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+ [147] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [148] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [149] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [150] reserved */
+ [151] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [152] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [153] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* All others reserved */
};

- return ip_ptype_map[ptype];
+ return ptype_table[ptype];
}

#define I40E_RX_DESC_EXT_STATUS_FLEXBH_MASK 0x03
@@ -708,11 +947,11 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
rxdp[j].wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);

- mb->packet_type = (uint16_t)((qword1 &
- I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
+ mb->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >>
+ I40E_RXD_QW1_PTYPE_SHIFT));
if (pkt_flags & PKT_RX_RSS_HASH)
mb->hash.rss = rte_le_to_cpu_32(\
rxdp[j].wb.qword0.hi_dword.rss);
@@ -951,9 +1190,9 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
- rxm->packet_type = (uint16_t)((qword1 & I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
+ rxm->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
@@ -1110,10 +1349,9 @@ i40e_recv_scattered_pkts(void *rx_queue,
rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
- first_seg->packet_type = (uint16_t)((qword1 &
- I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
+ first_seg->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
--
1.9.3
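
The pattern above -- a single static table indexed by the 8-bit hardware
packet type -- is the heart of this patch. Below is a minimal, hedged
sketch of the technique; the EX_ names are illustrative only, and the
v1 patch shown here still uses a 16-bit packet_type (widened to 32 bits
in later revisions of the series):

#include <stdint.h>
#include <rte_mbuf.h>

#define EX_PTYPE_TABLE_SIZE 256 /* i40e descriptors carry an 8-bit ptype index */

/* Sparse, statically initialized table; designated initializers leave
 * unlisted indices at 0, which the unified scheme treats as unknown. */
static const uint16_t ex_ptype_table[EX_PTYPE_TABLE_SIZE] = {
	[1] = RTE_PTYPE_L2_MAC,
	/* ... one entry per packet type the hardware can report ... */
};

/* Translate the index extracted from the RX descriptor (via
 * I40E_RXD_QW1_PTYPE_MASK/SHIFT in the patch) with one array lookup. */
static inline uint16_t
ex_ptype_mapping(uint8_t hw_ptype)
{
	return ex_ptype_table[hw_ptype];
}

The single lookup replaces the per-packet bit manipulation that the old
i40e_rxd_ptype_to_pkt_flags() path performed, which is why the mapping
function can be called directly in the hot RX loops above.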
Helin Zhang
2015-02-09 06:40:40 UTC
Permalink
To unify packet types among all PMDs, the bit masks of packet type for
ol_flags are replaced by the unified packet type.

Signed-off-by: Helin Zhang <***@intel.com>
---
lib/librte_pmd_enic/enic_main.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.

diff --git a/lib/librte_pmd_enic/enic_main.c b/lib/librte_pmd_enic/enic_main.c
index 48fdca2..9acba9a 100644
--- a/lib/librte_pmd_enic/enic_main.c
+++ b/lib/librte_pmd_enic/enic_main.c
@@ -423,7 +423,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->pkt_len = bytes_written;

if (ipv4) {
- rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |= PKT_RX_IP_CKSUM_BAD;
@@ -432,7 +432,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->ol_flags |= PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
- rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
} else {
/* Header split */
if (sop && !eop) {
@@ -445,7 +445,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
*rx_pkt_bucket = rx_pkt;
rx_pkt->pkt_len = bytes_written;
if (ipv4) {
- rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |=
@@ -457,13 +457,14 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
- rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
} else {
/* Payload */
hdr_rx_pkt = *rx_pkt_bucket;
hdr_rx_pkt->pkt_len += bytes_written;
if (ipv4) {
- hdr_rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+ hdr_rx_pkt->packet_type =
+ RTE_PTYPE_L3_IPV4;
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
hdr_rx_pkt->ol_flags |=
@@ -475,7 +476,8 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
- hdr_rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+ hdr_rx_pkt->packet_type =
+ RTE_PTYPE_L3_IPV6;

}
}
--
1.9.3
Helin Zhang
2015-02-09 06:40:41 UTC
Permalink
To unify packet types among all PMDs, the bit masks of packet type for
ol_flags are replaced by the unified packet type.

Signed-off-by: Helin Zhang <***@intel.com>
---
lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.

diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
index 8425f32..c85ebd8 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
@@ -650,9 +650,9 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
struct ipv4_hdr *ip = (struct ipv4_hdr *)(eth + 1);

if (((ip->version_ihl & 0xf) << 2) > (int)sizeof(struct ipv4_hdr))
- rxm->ol_flags |= PKT_RX_IPV4_HDR_EXT;
+ rxm->packet_type = RTE_PTYPE_L3_IPV4_EXT;
else
- rxm->ol_flags |= PKT_RX_IPV4_HDR;
+ rxm->packet_type = RTE_PTYPE_L3_IPV4;

if (!rcd->cnc) {
if (!rcd->ipc)
--
1.9.3
Yong Wang
2015-02-11 01:46:14 UTC
Permalink
Post by Helin Zhang
To unify packet types among all PMDs, the bit masks of packet type for
ol_flags are replaced by the unified packet type.
---
lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
index 8425f32..c85ebd8 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
@@ -650,9 +650,9 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf
**rx_pkts, uint16_t nb_pkts)
struct ipv4_hdr *ip = (struct ipv4_hdr *)(eth + 1);
if (((ip->version_ihl & 0xf) << 2) > (int)sizeof(struct ipv4_hdr))
- rxm->ol_flags |= PKT_RX_IPV4_HDR_EXT;
+ rxm->packet_type = RTE_PTYPE_L3_IPV4_EXT;
else
- rxm->ol_flags |= PKT_RX_IPV4_HDR;
+ rxm->packet_type = RTE_PTYPE_L3_IPV4;
if (!rcd->cnc) {
if (!rcd->ipc)
--
1.9.3
Helin Zhang
2015-02-09 06:40:42 UTC
Permalink
To unify packet types among all PMDs, the bit masks and relevant macros
of packet type for ol_flags are replaced by the unified packet type and
its relevant macros.

Signed-off-by: Helin Zhang <***@intel.com>
---
app/test-pipeline/pipeline_hash.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.

diff --git a/app/test-pipeline/pipeline_hash.c b/app/test-pipeline/pipeline_hash.c
index 4598ad4..548615f 100644
--- a/app/test-pipeline/pipeline_hash.c
+++ b/app/test-pipeline/pipeline_hash.c
@@ -459,20 +459,21 @@ app_main_loop_rx_metadata(void) {
signature = RTE_MBUF_METADATA_UINT32_PTR(m, 0);
key = RTE_MBUF_METADATA_UINT8_PTR(m, 32);

- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
ip_hdr = (struct ipv4_hdr *)
&m_data[sizeof(struct ether_hdr)];
ip_dst = ip_hdr->dst_addr;

k32 = (uint32_t *) key;
k32[0] = ip_dst & 0xFFFFFF00;
- } else {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
ipv6_hdr = (struct ipv6_hdr *)
&m_data[sizeof(struct ether_hdr)];
ipv6_dst = ipv6_hdr->dst_addr;

memcpy(key, ipv6_dst, 16);
- }
+ } else
+ continue;

*signature = test_hash(key, 0, 0);
}
--
1.9.3
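
For applications, the change reduces protocol dispatch to simple macro
tests on packet_type, as the pipeline_hash diff above shows. A hedged
sketch of the likely macro shape and the resulting dispatch follows;
the EX_ names are illustrative, and the real definitions live in the
"mbuf: add definitions of unified packet types" patch of this series:

/* Plausible shape of the helper macros: non-zero when the (outer)
 * L3 field of the unified packet type carries the matching type. */
#define EX_ETH_IS_IPV4_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV4)
#define EX_ETH_IS_IPV6_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV6)

static void
ex_dispatch(struct rte_mbuf *m)
{
	if (EX_ETH_IS_IPV4_HDR(m->packet_type)) {
		/* IPv4 path */
	} else if (EX_ETH_IS_IPV6_HDR(m->packet_type)) {
		/* IPv6 path */
	} else {
		rte_pktmbuf_free(m); /* neither: don't leak the mbuf */
	}
}

Note how the final else branch mirrors the new 'continue' in the patch:
with the old ol_flags bits an untyped packet silently fell into the
IPv6 branch, whereas the unified type makes the third case explicit.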
Helin Zhang
2015-02-09 06:40:43 UTC
Permalink
To unify packet types among all PMDs, the bit masks and relevant macros
of packet type for ol_flags are replaced by the unified packet type and
its relevant macros.

Signed-off-by: Helin Zhang <***@intel.com>
Signed-off-by: Jijiang Liu <***@intel.com>
---
app/test-pmd/csumonly.c | 6 +++---
app/test-pmd/rxonly.c | 9 +++------
2 files changed, 6 insertions(+), 9 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.

diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 41711fd..5e08272 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -319,7 +319,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
uint16_t nb_tx;
uint16_t i;
uint64_t ol_flags;
- uint16_t testpmd_ol_flags;
+ uint16_t testpmd_ol_flags, packet_type;
uint8_t l4_proto, l4_tun_len = 0;
uint16_t ethertype = 0, outer_ethertype = 0;
uint16_t l2_len = 0, l3_len = 0, l4_len = 0;
@@ -362,6 +362,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
tunnel = 0;
l4_tun_len = 0;
m = pkts_burst[i];
+ packet_type = m->packet_type;

/* Update the L3/L4 checksum error packet statistics */
rx_bad_ip_csum += ((m->ol_flags & PKT_RX_IP_CKSUM_BAD) != 0);
@@ -387,8 +388,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)

/* currently, this flag is set by i40e only if the
* packet is vxlan */
- } else if (m->ol_flags & (PKT_RX_TUNNEL_IPV4_HDR |
- PKT_RX_TUNNEL_IPV6_HDR))
+ } else if (RTE_ETH_IS_TUNNEL_PKT(packet_type))
tunnel = 1;

if (tunnel == 1) {
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index fdfe990..8eb68c4 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -92,7 +92,7 @@ pkt_burst_receive(struct fwd_stream *fs)
uint64_t ol_flags;
uint16_t nb_rx;
uint16_t i, packet_type;
- uint64_t is_encapsulation;
+ uint16_t is_encapsulation;

#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
uint64_t start_tsc;
@@ -135,10 +135,7 @@ pkt_burst_receive(struct fwd_stream *fs)
eth_type = RTE_BE_TO_CPU_16(eth_hdr->ether_type);
ol_flags = mb->ol_flags;
packet_type = mb->packet_type;
-
- is_encapsulation = ol_flags & (PKT_RX_TUNNEL_IPV4_HDR |
- PKT_RX_TUNNEL_IPV6_HDR);
-
+ is_encapsulation = RTE_ETH_IS_TUNNEL_PKT(packet_type);
print_ether_addr(" src=", &eth_hdr->s_addr);
print_ether_addr(" - dst=", &eth_hdr->d_addr);
printf(" - type=0x%04x - length=%u - nb_segs=%d",
@@ -174,7 +171,7 @@ pkt_burst_receive(struct fwd_stream *fs)
l2_len = sizeof(struct ether_hdr);

/* Do not support ipv4 option field */
- if (ol_flags & PKT_RX_TUNNEL_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(packet_type)) {
l3_len = sizeof(struct ipv4_hdr);
ipv4_hdr = (struct ipv4_hdr *) (rte_pktmbuf_mtod(mb,
unsigned char *) + l2_len);
--
1.9.3
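
The tunnel test collapses from two ol_flags bits to one macro. A
minimal sketch, assuming RTE_ETH_IS_TUNNEL_PKT() simply tests the
4-bit tunnel-type field of the unified packet type:

/* Non-zero whenever any tunnel type (IP-in-IP, GRE, VXLAN, Teredo,
 * ...) was recognized, regardless of the outer L3 protocol -- unlike
 * the old PKT_RX_TUNNEL_IPV4_HDR/PKT_RX_TUNNEL_IPV6_HDR pair, which
 * encoded the outer L3 type into the tunnel indication. */
static inline int
ex_is_tunnel(const struct rte_mbuf *m)
{
	return RTE_ETH_IS_TUNNEL_PKT(m->packet_type) != 0;
}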
Helin Zhang
2015-02-09 06:40:44 UTC
Permalink
To unify packet types among all PMDs, the bit masks and relevant macros
of packet type for ol_flags are replaced by the unified packet type and
its relevant macros.

Signed-off-by: Helin Zhang <***@intel.com>
---
examples/ip_fragmentation/main.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.

diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index eac5427..152844e 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -286,7 +286,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
len = qconf->tx_mbufs[port_out].len;

/* if this is an IPv4 packet */
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
struct ipv4_hdr *ip_hdr;
uint32_t ip_dst;
/* Read the lookup key (i.e. ip_dst) from the input packet */
@@ -320,9 +320,8 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
if (unlikely (len2 < 0))
return;
}
- }
- /* if this is an IPv6 packet */
- else if (m->ol_flags & PKT_RX_IPV6_HDR) {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+ /* if this is an IPv6 packet */
struct ipv6_hdr *ip_hdr;

ipv6 = 1;
--
1.9.3
Helin Zhang
2015-02-09 06:40:45 UTC
Permalink
To unify packet types among all PMDs, the bit masks and relevant macros
of packet type for ol_flags are replaced by the unified packet type and
its relevant macros.

Signed-off-by: Helin Zhang <***@intel.com>
---
examples/ip_reassembly/main.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.

diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 8492153..5ef2135 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -357,7 +357,7 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
dst_port = portid;

/* if packet is IPv4 */
- if (m->ol_flags & (PKT_RX_IPV4_HDR)) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
struct ipv4_hdr *ip_hdr;
uint32_t ip_dst;

@@ -397,9 +397,8 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
}

eth_hdr->ether_type = rte_be_to_cpu_16(ETHER_TYPE_IPv4);
- }
- /* if packet is IPv6 */
- else if (m->ol_flags & (PKT_RX_IPV6_HDR | PKT_RX_IPV6_HDR_EXT)) {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+ /* if packet is IPv6 */
struct ipv6_extension_fragment *frag_hdr;
struct ipv6_hdr *ip_hdr;
--
1.9.3
Helin Zhang
2015-02-09 06:40:46 UTC
Permalink
To unify packet types among all PMDs, the bit masks and relevant macros
of packet type for ol_flags are replaced by the unified packet type and
its relevant macros.

Signed-off-by: Helin Zhang <***@intel.com>
---
examples/l3fwd-acl/main.c | 19 ++++++-------------
1 file changed, 6 insertions(+), 13 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.

diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index f1f7601..af70ccd 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -651,9 +651,7 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
struct ipv4_hdr *ipv4_hdr;
struct rte_mbuf *pkt = pkts_in[index];

- int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
-
- if (type == PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {

ipv4_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(pkt,
unsigned char *) + sizeof(struct ether_hdr));
@@ -674,8 +672,7 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
rte_pktmbuf_free(pkt);
}

- } else if (type == PKT_RX_IPV6_HDR) {
-
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -693,17 +690,13 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
{
struct rte_mbuf *pkt = pkts_in[index];

- int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
-
- if (type == PKT_RX_IPV4_HDR) {
-
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
/* Fill acl structure */
acl->data_ipv4[acl->num_ipv4] = MBUF_IPV4_2PROTO(pkt);
acl->m_ipv4[(acl->num_ipv4)++] = pkt;


- } else if (type == PKT_RX_IPV6_HDR) {
-
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -751,9 +744,9 @@ send_one_packet(struct rte_mbuf *m, uint32_t res)
/* in the ACL list, drop it */
#ifdef L3FWDACL_DEBUG
if ((res & ACL_DENY_SIGNATURE) != 0) {
- if (m->ol_flags & PKT_RX_IPV4_HDR)
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type))
dump_acl4_rule(m, res);
- else
+ else if (RTE_ETH_IS_IPV6_HDR(m->packet_type))
dump_acl6_rule(m, res);
}
#endif
--
1.9.3
Helin Zhang
2015-02-09 06:40:47 UTC
Permalink
To unify packet types among all PMDs, the bit masks and relevant macros
of packet type for ol_flags are replaced by the unified packet type and
its relevant macros.

Signed-off-by: Helin Zhang <***@intel.com>
---
examples/l3fwd-power/main.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.

diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index f6b55b9..964e5b9 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -638,7 +638,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,

eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);

- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
/* Handle IPv4 headers.*/
ipv4_hdr =
(struct ipv4_hdr *)(rte_pktmbuf_mtod(m, unsigned char*)
@@ -673,8 +673,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,
ether_addr_copy(&ports_eth_addr[dst_port], &eth_hdr->s_addr);

send_single_packet(m, dst_port);
- }
- else {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
/* Handle IPv6 headers.*/
#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
struct ipv6_hdr *ipv6_hdr;
--
1.9.3
Helin Zhang
2015-02-09 06:40:48 UTC
Permalink
To unify packet types among all PMDs, the bit masks and relevant macros
of packet type for ol_flags are replaced by the unified packet type and
its relevant macros.

Signed-off-by: Helin Zhang <***@intel.com>
---
examples/l3fwd/main.c | 64 ++++++++++++++++++++++++++++-----------------------
1 file changed, 35 insertions(+), 29 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.

diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 6f7d7d4..302322e 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -958,7 +958,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon

eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);

- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
/* Handle IPv4 headers.*/
ipv4_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(m, unsigned char *) +
sizeof(struct ether_hdr));
@@ -993,7 +993,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon

send_single_packet(m, dst_port);

- } else {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
/* Handle IPv6 headers.*/
struct ipv6_hdr *ipv6_hdr;

@@ -1039,11 +1039,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
* to BAD_PORT value.
*/
static inline __attribute__((always_inline)) void
-rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t flags)
+rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint16_t ptype)
{
uint8_t ihl;

- if ((flags & PKT_RX_IPV4_HDR) != 0) {
+ if (RTE_ETH_IS_IPV4_HDR(ptype)) {

ihl = ipv4_hdr->version_ihl - IPV4_MIN_VER_IHL;

@@ -1074,11 +1074,11 @@ get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
struct ipv6_hdr *ipv6_hdr;
struct ether_hdr *eth_hdr;

- if (pkt->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
if (rte_lpm_lookup(qconf->ipv4_lookup_struct, dst_ipv4,
&next_hop) != 0)
next_hop = portid;
- } else if (pkt->ol_flags & PKT_RX_IPV6_HDR) {
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
eth_hdr = rte_pktmbuf_mtod(pkt, struct ether_hdr *);
ipv6_hdr = (struct ipv6_hdr *)(eth_hdr + 1);
if (rte_lpm6_lookup(qconf->ipv6_lookup_struct,
@@ -1112,17 +1112,19 @@ process_packet(struct lcore_conf *qconf, struct rte_mbuf *pkt,
ve = val_eth[dp];

dst_port[0] = dp;
- rfc1812_process(ipv4_hdr, dst_port, pkt->ol_flags);
+ rfc1812_process(ipv4_hdr, dst_port, pkt->packet_type);

te = _mm_blend_epi16(te, ve, MASK_ETH);
_mm_store_si128((__m128i *)eth_hdr, te);
}

/*
- * Read ol_flags and destination IPV4 addresses from 4 mbufs.
+ * Read packet_type and destination IPV4 addresses from 4 mbufs.
*/
static inline void
-processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
+processx4_step1(struct rte_mbuf *pkt[FWDSTEP],
+ __m128i *dip,
+ uint32_t *ipv4_flag)
{
struct ipv4_hdr *ipv4_hdr;
struct ether_hdr *eth_hdr;
@@ -1131,22 +1133,20 @@ processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
eth_hdr = rte_pktmbuf_mtod(pkt[0], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x0 = ipv4_hdr->dst_addr;
- flag[0] = pkt[0]->ol_flags & PKT_RX_IPV4_HDR;

eth_hdr = rte_pktmbuf_mtod(pkt[1], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x1 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[1]->ol_flags;

eth_hdr = rte_pktmbuf_mtod(pkt[2], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x2 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[2]->ol_flags;

eth_hdr = rte_pktmbuf_mtod(pkt[3], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x3 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[3]->ol_flags;
+ *ipv4_flag = pkt[0]->packet_type & pkt[1]->packet_type &
+ pkt[2]->packet_type & pkt[3]->packet_type & RTE_PTYPE_L3_IPV4;

dip[0] = _mm_set_epi32(x3, x2, x1, x0);
}
@@ -1156,8 +1156,12 @@ processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
* If lookup fails, use incoming port (portid) as destination port.
*/
static inline void
-processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
- uint8_t portid, struct rte_mbuf *pkt[FWDSTEP], uint16_t dprt[FWDSTEP])
+processx4_step2(const struct lcore_conf *qconf,
+ __m128i dip,
+ uint32_t ipv4_flag,
+ uint8_t portid,
+ struct rte_mbuf *pkt[FWDSTEP],
+ uint16_t dprt[FWDSTEP])
{
rte_xmm_t dst;
const __m128i bswap_mask = _mm_set_epi8(12, 13, 14, 15, 8, 9, 10, 11,
@@ -1167,7 +1171,7 @@ processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
dip = _mm_shuffle_epi8(dip, bswap_mask);

/* if all 4 packets are IPV4. */
- if (likely(flag != 0)) {
+ if (likely(ipv4_flag)) {
rte_lpm_lookupx4(qconf->ipv4_lookup_struct, dip, dprt, portid);
} else {
dst.x = dip;
@@ -1218,13 +1222,13 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
_mm_store_si128(p[3], te[3]);

rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[0] + 1),
- &dst_port[0], pkt[0]->ol_flags);
+ &dst_port[0], pkt[0]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[1] + 1),
- &dst_port[1], pkt[1]->ol_flags);
+ &dst_port[1], pkt[1]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[2] + 1),
- &dst_port[2], pkt[2]->ol_flags);
+ &dst_port[2], pkt[2]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[3] + 1),
- &dst_port[3], pkt[3]->ol_flags);
+ &dst_port[3], pkt[3]->packet_type);
}

/*
@@ -1411,7 +1415,7 @@ main_loop(__attribute__((unused)) void *dummy)
uint16_t *lp;
uint16_t dst_port[MAX_PKT_BURST];
__m128i dip[MAX_PKT_BURST / FWDSTEP];
- uint32_t flag[MAX_PKT_BURST / FWDSTEP];
+ uint32_t ipv4_flag[MAX_PKT_BURST / FWDSTEP];
uint16_t pnum[MAX_PKT_BURST + 1];
#endif

@@ -1481,14 +1485,16 @@ main_loop(__attribute__((unused)) void *dummy)
*/
int32_t n = RTE_ALIGN_FLOOR(nb_rx, 4);
for (j = 0; j < n ; j+=4) {
- uint32_t ol_flag = pkts_burst[j]->ol_flags
- & pkts_burst[j+1]->ol_flags
- & pkts_burst[j+2]->ol_flags
- & pkts_burst[j+3]->ol_flags;
- if (ol_flag & PKT_RX_IPV4_HDR ) {
+ uint32_t pkt_type =
+ pkts_burst[j]->packet_type &
+ pkts_burst[j+1]->packet_type &
+ pkts_burst[j+2]->packet_type &
+ pkts_burst[j+3]->packet_type;
+ if (pkt_type & RTE_PTYPE_L3_IPV4) {
simple_ipv4_fwd_4pkts(&pkts_burst[j],
portid, qconf);
- } else if (ol_flag & PKT_RX_IPV6_HDR) {
+ } else if (pkt_type &
+ RTE_PTYPE_L3_IPV6) {
simple_ipv6_fwd_4pkts(&pkts_burst[j],
portid, qconf);
} else {
@@ -1513,13 +1519,13 @@ main_loop(__attribute__((unused)) void *dummy)
for (j = 0; j != k; j += FWDSTEP) {
processx4_step1(&pkts_burst[j],
&dip[j / FWDSTEP],
- &flag[j / FWDSTEP]);
+ &ipv4_flag[j / FWDSTEP]);
}

k = RTE_ALIGN_FLOOR(nb_rx, FWDSTEP);
for (j = 0; j != k; j += FWDSTEP) {
processx4_step2(qconf, dip[j / FWDSTEP],
- flag[j / FWDSTEP], portid,
+ ipv4_flag[j / FWDSTEP], portid,
&pkts_burst[j], &dst_port[j]);
}
--
1.9.3
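
The four-packet fast path in this patch rests on a small bit trick:
ANDing the packet_type of four mbufs keeps RTE_PTYPE_L3_IPV4 set only
if every packet in the group has it. A minimal sketch (illustrative
ex_ name, same logic as processx4_step1 above):

/* Non-zero only when all four packets carry the RTE_PTYPE_L3_IPV4
 * bit, i.e. the whole group can take the IPv4 LPM-lookup fast path;
 * a single non-IPv4 packet clears the bit and forces the per-packet
 * slow path. */
static inline uint32_t
ex_group_is_ipv4(struct rte_mbuf *pkt[4])
{
	return pkt[0]->packet_type & pkt[1]->packet_type &
	       pkt[2]->packet_type & pkt[3]->packet_type &
	       RTE_PTYPE_L3_IPV4;
}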
Ananyev, Konstantin
2015-02-16 17:04:44 UTC
Permalink
Hi Helin,
-----Original Message-----
From: Zhang, Helin
Sent: Monday, February 09, 2015 6:41 AM
Cc: Cao, Waterman; Liang, Cunming; Liu, Jijiang; Ananyev, Konstantin; Richardson, Bruce; Zhang, Helin
Subject: [PATCH v2 14/15] examples/l3fwd: support of unified packet type
To unify packet types among all PMDs, the bit masks and relevant macros
of packet type for ol_flags are replaced by the unified packet type and
its relevant macros.
---
examples/l3fwd/main.c | 64 ++++++++++++++++++++++++++++-----------------------
1 file changed, 35 insertions(+), 29 deletions(-)
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 6f7d7d4..302322e 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -958,7 +958,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
/* Handle IPv4 headers.*/
ipv4_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(m, unsigned char *) +
sizeof(struct ether_hdr));
@@ -993,7 +993,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
send_single_packet(m, dst_port);
- } else {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
If you changed from 'else' to 'else if' here, then I suppose you'll need to add another 'else' after it
to handle the case where input packets are neither IPv4 nor IPv6.
Otherwise you might start 'leaking' such mbufs.
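
A minimal sketch of the fix being requested (hedged; the actual v3
change may be structured differently):

	} else {
		/* neither IPv4 nor IPv6: free the mbuf so it is not leaked */
		rte_pktmbuf_free(m);
	}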
/* Handle IPv6 headers.*/
struct ipv6_hdr *ipv6_hdr;
@@ -1039,11 +1039,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
* to BAD_PORT value.
*/
static inline __attribute__((always_inline)) void
-rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t flags)
+rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint16_t ptype)
Shouldn't it be 'uint32_t ptype'?
{
uint8_t ihl;
- if ((flags & PKT_RX_IPV4_HDR) != 0) {
+ if (RTE_ETH_IS_IPV4_HDR(ptype)) {
ihl = ipv4_hdr->version_ihl - IPV4_MIN_VER_IHL;
@@ -1074,11 +1074,11 @@ get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
struct ipv6_hdr *ipv6_hdr;
struct ether_hdr *eth_hdr;
- if (pkt->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
if (rte_lpm_lookup(qconf->ipv4_lookup_struct, dst_ipv4,
&next_hop) != 0)
next_hop = portid;
- } else if (pkt->ol_flags & PKT_RX_IPV6_HDR) {
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
eth_hdr = rte_pktmbuf_mtod(pkt, struct ether_hdr *);
ipv6_hdr = (struct ipv6_hdr *)(eth_hdr + 1);
if (rte_lpm6_lookup(qconf->ipv6_lookup_struct,
@@ -1112,17 +1112,19 @@ process_packet(struct lcore_conf *qconf, struct rte_mbuf *pkt,
ve = val_eth[dp];
dst_port[0] = dp;
- rfc1812_process(ipv4_hdr, dst_port, pkt->ol_flags);
+ rfc1812_process(ipv4_hdr, dst_port, pkt->packet_type);
te = _mm_blend_epi16(te, ve, MASK_ETH);
_mm_store_si128((__m128i *)eth_hdr, te);
}
/*
- * Read ol_flags and destination IPV4 addresses from 4 mbufs.
+ * Read packet_type and destination IPV4 addresses from 4 mbufs.
*/
static inline void
-processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
+processx4_step1(struct rte_mbuf *pkt[FWDSTEP],
+ __m128i *dip,
+ uint32_t *ipv4_flag)
{
struct ipv4_hdr *ipv4_hdr;
struct ether_hdr *eth_hdr;
@@ -1131,22 +1133,20 @@ processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
eth_hdr = rte_pktmbuf_mtod(pkt[0], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x0 = ipv4_hdr->dst_addr;
- flag[0] = pkt[0]->ol_flags & PKT_RX_IPV4_HDR;
eth_hdr = rte_pktmbuf_mtod(pkt[1], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x1 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[1]->ol_flags;
eth_hdr = rte_pktmbuf_mtod(pkt[2], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x2 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[2]->ol_flags;
eth_hdr = rte_pktmbuf_mtod(pkt[3], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x3 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[3]->ol_flags;
+ *ipv4_flag = pkt[0]->packet_type & pkt[1]->packet_type &
+ pkt[2]->packet_type & pkt[3]->packet_type & RTE_PTYPE_L3_IPV4;
Why not as it was before:
flag[0] = pkt[0]->packet_type & ...
...
flag[0] &= pkt[1]->packet_type;
...

Why do you need to unite them?
dip[0] = _mm_set_epi32(x3, x2, x1, x0);
}
@@ -1156,8 +1156,12 @@ processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
* If lookup fails, use incoming port (portid) as destination port.
*/
static inline void
-processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
- uint8_t portid, struct rte_mbuf *pkt[FWDSTEP], uint16_t dprt[FWDSTEP])
+processx4_step2(const struct lcore_conf *qconf,
+ __m128i dip,
+ uint32_t ipv4_flag,
+ uint8_t portid,
+ struct rte_mbuf *pkt[FWDSTEP],
+ uint16_t dprt[FWDSTEP])
{
rte_xmm_t dst;
const __m128i bswap_mask = _mm_set_epi8(12, 13, 14, 15, 8, 9, 10, 11,
@@ -1167,7 +1171,7 @@ processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
dip = _mm_shuffle_epi8(dip, bswap_mask);
/* if all 4 packets are IPV4. */
- if (likely(flag != 0)) {
+ if (likely(ipv4_flag)) {
rte_lpm_lookupx4(qconf->ipv4_lookup_struct, dip, dprt, portid);
} else {
dst.x = dip;
@@ -1218,13 +1222,13 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
_mm_store_si128(p[3], te[3]);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[0] + 1),
- &dst_port[0], pkt[0]->ol_flags);
+ &dst_port[0], pkt[0]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[1] + 1),
- &dst_port[1], pkt[1]->ol_flags);
+ &dst_port[1], pkt[1]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[2] + 1),
- &dst_port[2], pkt[2]->ol_flags);
+ &dst_port[2], pkt[2]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[3] + 1),
- &dst_port[3], pkt[3]->ol_flags);
+ &dst_port[3], pkt[3]->packet_type);
}
/*
@@ -1411,7 +1415,7 @@ main_loop(__attribute__((unused)) void *dummy)
uint16_t *lp;
uint16_t dst_port[MAX_PKT_BURST];
__m128i dip[MAX_PKT_BURST / FWDSTEP];
- uint32_t flag[MAX_PKT_BURST / FWDSTEP];
+ uint32_t ipv4_flag[MAX_PKT_BURST / FWDSTEP];
uint16_t pnum[MAX_PKT_BURST + 1];
#endif
@@ -1481,14 +1485,16 @@ main_loop(__attribute__((unused)) void *dummy)
*/
int32_t n = RTE_ALIGN_FLOOR(nb_rx, 4);
for (j = 0; j < n ; j+=4) {
- uint32_t ol_flag = pkts_burst[j]->ol_flags
- & pkts_burst[j+1]->ol_flags
- & pkts_burst[j+2]->ol_flags
- & pkts_burst[j+3]->ol_flags;
- if (ol_flag & PKT_RX_IPV4_HDR ) {
+ uint32_t pkt_type =
+ pkts_burst[j]->packet_type &
+ pkts_burst[j+1]->packet_type &
+ pkts_burst[j+2]->packet_type &
+ pkts_burst[j+3]->packet_type;
+ if (pkt_type & RTE_PTYPE_L3_IPV4) {
simple_ipv4_fwd_4pkts(&pkts_burst[j],
portid, qconf);
- } else if (ol_flag & PKT_RX_IPV6_HDR) {
+ } else if (pkt_type &
+ RTE_PTYPE_L3_IPV6) {
simple_ipv6_fwd_4pkts(&pkts_burst[j],
portid, qconf);
} else {
@@ -1513,13 +1519,13 @@ main_loop(__attribute__((unused)) void *dummy)
for (j = 0; j != k; j += FWDSTEP) {
processx4_step1(&pkts_burst[j],
&dip[j / FWDSTEP],
- &flag[j / FWDSTEP]);
+ &ipv4_flag[j / FWDSTEP]);
}
k = RTE_ALIGN_FLOOR(nb_rx, FWDSTEP);
for (j = 0; j != k; j += FWDSTEP) {
processx4_step2(qconf, dip[j / FWDSTEP],
- flag[j / FWDSTEP], portid,
+ ipv4_flag[j / FWDSTEP], portid,
&pkts_burst[j], &dst_port[j]);
}
--
1.9.3
Zhang, Helin
2015-02-17 02:57:21 UTC
Permalink
-----Original Message-----
From: Ananyev, Konstantin
Sent: Tuesday, February 17, 2015 1:05 AM
Cc: Cao, Waterman; Liang, Cunming; Liu, Jijiang; Richardson, Bruce
Subject: RE: [PATCH v2 14/15] examples/l3fwd: support of unified packet type
Hi Helin,
-----Original Message-----
From: Zhang, Helin
Sent: Monday, February 09, 2015 6:41 AM
Cc: Cao, Waterman; Liang, Cunming; Liu, Jijiang; Ananyev, Konstantin;
Richardson, Bruce; Zhang, Helin
Subject: [PATCH v2 14/15] examples/l3fwd: support of unified packet type
To unify packet types among all PMDs, the bit masks and relevant macros
of packet type for ol_flags are replaced by the unified packet type and
its relevant macros.
---
examples/l3fwd/main.c | 64
++++++++++++++++++++++++++++-----------------------
1 file changed, 35 insertions(+), 29 deletions(-)
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c index
6f7d7d4..302322e 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -958,7 +958,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t
portid, struct lcore_conf *qcon
eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
/* Handle IPv4 headers.*/
ipv4_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(m, unsigned char *)
+
sizeof(struct ether_hdr));
@@ -993,7 +993,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t
portid, struct lcore_conf *qcon
send_single_packet(m, dst_port);
- } else {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
If you changed from 'else' to 'else if' here, then I suppose you'll need to add another 'else' after it
to handle the case where input packets are neither IPv4 nor IPv6.
Otherwise you might start 'leaking' such mbufs.
Agree with you, will add code to free mbuf there.
/* Handle IPv6 headers.*/
struct ipv6_hdr *ipv6_hdr;
@@ -1039,11 +1039,11 @@ l3fwd_simple_forward(struct rte_mbuf *m,
uint8_t portid, struct lcore_conf *qcon
* to BAD_PORT value.
*/
static inline __attribute__((always_inline)) void
-rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t flags)
+rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint16_t
+ptype)
Shouldn't it be 'uint32_t ptype'?
Agree with you. Will correct it.
{
uint8_t ihl;
- if ((flags & PKT_RX_IPV4_HDR) != 0) {
+ if (RTE_ETH_IS_IPV4_HDR(ptype)) {
ihl = ipv4_hdr->version_ihl - IPV4_MIN_VER_IHL;
@@ -1074,11 +1074,11 @@ get_dst_port(const struct lcore_conf *qconf,
struct rte_mbuf *pkt,
struct ipv6_hdr *ipv6_hdr;
struct ether_hdr *eth_hdr;
- if (pkt->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
if (rte_lpm_lookup(qconf->ipv4_lookup_struct, dst_ipv4,
&next_hop) != 0)
next_hop = portid;
- } else if (pkt->ol_flags & PKT_RX_IPV6_HDR) {
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
eth_hdr = rte_pktmbuf_mtod(pkt, struct ether_hdr *);
ipv6_hdr = (struct ipv6_hdr *)(eth_hdr + 1);
if (rte_lpm6_lookup(qconf->ipv6_lookup_struct,
@@ -1112,17 +1112,19 @@ process_packet(struct lcore_conf *qconf, struct
rte_mbuf *pkt,
ve = val_eth[dp];
dst_port[0] = dp;
- rfc1812_process(ipv4_hdr, dst_port, pkt->ol_flags);
+ rfc1812_process(ipv4_hdr, dst_port, pkt->packet_type);
te = _mm_blend_epi16(te, ve, MASK_ETH);
_mm_store_si128((__m128i *)eth_hdr, te); }
/*
- * Read ol_flags and destination IPV4 addresses from 4 mbufs.
+ * Read packet_type and destination IPV4 addresses from 4 mbufs.
*/
static inline void
-processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
+processx4_step1(struct rte_mbuf *pkt[FWDSTEP],
+ __m128i *dip,
+ uint32_t *ipv4_flag)
{
struct ipv4_hdr *ipv4_hdr;
struct ether_hdr *eth_hdr;
@@ -1131,22 +1133,20 @@ processx4_step1(struct rte_mbuf
*pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
eth_hdr = rte_pktmbuf_mtod(pkt[0], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x0 = ipv4_hdr->dst_addr;
- flag[0] = pkt[0]->ol_flags & PKT_RX_IPV4_HDR;
eth_hdr = rte_pktmbuf_mtod(pkt[1], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x1 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[1]->ol_flags;
eth_hdr = rte_pktmbuf_mtod(pkt[2], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x2 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[2]->ol_flags;
eth_hdr = rte_pktmbuf_mtod(pkt[3], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x3 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[3]->ol_flags;
+ *ipv4_flag = pkt[0]->packet_type & pkt[1]->packet_type &
+ pkt[2]->packet_type & pkt[3]->packet_type & RTE_PTYPE_L3_IPV4;
flag[0] = pkt[0]->packet_type & ...
...
flag[0] &= pkt[1]->packet_type;
...
Why do you need to unite them?
No specific reason, will change it back as before. Thanks!

Regards,
Helin
processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t
*flag)
* If lookup fails, use incoming port (portid) as destination port.
*/
static inline void
-processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
- uint8_t portid, struct rte_mbuf *pkt[FWDSTEP], uint16_t dprt[FWDSTEP])
+processx4_step2(const struct lcore_conf *qconf,
+ __m128i dip,
+ uint32_t ipv4_flag,
+ uint8_t portid,
+ struct rte_mbuf *pkt[FWDSTEP],
+ uint16_t dprt[FWDSTEP])
{
rte_xmm_t dst;
const __m128i bswap_mask = _mm_set_epi8(12, 13, 14, 15, 8, 9, 10,
*qconf, __m128i dip, uint32_t flag,
dip = _mm_shuffle_epi8(dip, bswap_mask);
/* if all 4 packets are IPV4. */
- if (likely(flag != 0)) {
+ if (likely(ipv4_flag)) {
rte_lpm_lookupx4(qconf->ipv4_lookup_struct, dip, dprt, portid);
} else {
dst.x = dip;
@@ -1218,13 +1222,13 @@ processx4_step3(struct rte_mbuf
*pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
_mm_store_si128(p[3], te[3]);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[0] + 1),
- &dst_port[0], pkt[0]->ol_flags);
+ &dst_port[0], pkt[0]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[1] + 1),
- &dst_port[1], pkt[1]->ol_flags);
+ &dst_port[1], pkt[1]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[2] + 1),
- &dst_port[2], pkt[2]->ol_flags);
+ &dst_port[2], pkt[2]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[3] + 1),
- &dst_port[3], pkt[3]->ol_flags);
+ &dst_port[3], pkt[3]->packet_type);
}
/*
@@ -1411,7 +1415,7 @@ main_loop(__attribute__((unused)) void *dummy)
uint16_t *lp;
uint16_t dst_port[MAX_PKT_BURST];
__m128i dip[MAX_PKT_BURST / FWDSTEP];
- uint32_t flag[MAX_PKT_BURST / FWDSTEP];
+ uint32_t ipv4_flag[MAX_PKT_BURST / FWDSTEP];
uint16_t pnum[MAX_PKT_BURST + 1];
#endif
@@ -1481,14 +1485,16 @@ main_loop(__attribute__((unused)) void
*dummy)
*/
int32_t n = RTE_ALIGN_FLOOR(nb_rx, 4);
for (j = 0; j < n ; j+=4) {
- uint32_t ol_flag = pkts_burst[j]->ol_flags
- & pkts_burst[j+1]->ol_flags
- & pkts_burst[j+2]->ol_flags
- & pkts_burst[j+3]->ol_flags;
- if (ol_flag & PKT_RX_IPV4_HDR ) {
+ uint32_t pkt_type =
+ pkts_burst[j]->packet_type &
+ pkts_burst[j+1]->packet_type &
+ pkts_burst[j+2]->packet_type &
+ pkts_burst[j+3]->packet_type;
+ if (pkt_type & RTE_PTYPE_L3_IPV4) {
simple_ipv4_fwd_4pkts(&pkts_burst[j],
portid, qconf);
- } else if (ol_flag & PKT_RX_IPV6_HDR) {
+ } else if (pkt_type &
+ RTE_PTYPE_L3_IPV6) {
simple_ipv6_fwd_4pkts(&pkts_burst[j],
portid, qconf);
} else {
@@ -1513,13 +1519,13 @@ main_loop(__attribute__((unused)) void
*dummy)
for (j = 0; j != k; j += FWDSTEP) {
processx4_step1(&pkts_burst[j],
&dip[j / FWDSTEP],
- &flag[j / FWDSTEP]);
+ &ipv4_flag[j / FWDSTEP]);
}
k = RTE_ALIGN_FLOOR(nb_rx, FWDSTEP);
for (j = 0; j != k; j += FWDSTEP) {
processx4_step2(qconf, dip[j / FWDSTEP],
- flag[j / FWDSTEP], portid,
+ ipv4_flag[j / FWDSTEP], portid,
&pkts_burst[j], &dst_port[j]);
}
--
1.9.3
Helin Zhang
2015-02-09 06:40:49 UTC
Permalink
As unified packet types are used instead, those old bit masks
and the relevant macros for packet type indication need to be
removed.

Signed-off-by: Helin Zhang <***@intel.com>
---
lib/librte_mbuf/rte_mbuf.c | 6 ------
lib/librte_mbuf/rte_mbuf.h | 14 ++++----------
2 files changed, 4 insertions(+), 16 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
* Redefined the bit masks for packet RX offload flags.

diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 1b14e02..8050ccf 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -215,14 +215,8 @@ const char *rte_get_rx_ol_flag_name(uint64_t mask)
/* case PKT_RX_HBUF_OVERFLOW: return "PKT_RX_HBUF_OVERFLOW"; */
/* case PKT_RX_RECIP_ERR: return "PKT_RX_RECIP_ERR"; */
/* case PKT_RX_MAC_ERR: return "PKT_RX_MAC_ERR"; */
- case PKT_RX_IPV4_HDR: return "PKT_RX_IPV4_HDR";
- case PKT_RX_IPV4_HDR_EXT: return "PKT_RX_IPV4_HDR_EXT";
- case PKT_RX_IPV6_HDR: return "PKT_RX_IPV6_HDR";
- case PKT_RX_IPV6_HDR_EXT: return "PKT_RX_IPV6_HDR_EXT";
case PKT_RX_IEEE1588_PTP: return "PKT_RX_IEEE1588_PTP";
case PKT_RX_IEEE1588_TMST: return "PKT_RX_IEEE1588_TMST";
- case PKT_RX_TUNNEL_IPV4_HDR: return "PKT_RX_TUNNEL_IPV4_HDR";
- case PKT_RX_TUNNEL_IPV6_HDR: return "PKT_RX_TUNNEL_IPV6_HDR";
default: return NULL;
}
}
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index ee912d6..55336b2 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -90,16 +90,10 @@ extern "C" {
#define PKT_RX_HBUF_OVERFLOW (0ULL << 0) /**< Header buffer overflow. */
#define PKT_RX_RECIP_ERR (0ULL << 0) /**< Hardware processing error. */
#define PKT_RX_MAC_ERR (0ULL << 0) /**< MAC error. */
-#define PKT_RX_IPV4_HDR (1ULL << 5) /**< RX packet with IPv4 header. */
-#define PKT_RX_IPV4_HDR_EXT (1ULL << 6) /**< RX packet with extended IPv4 header. */
-#define PKT_RX_IPV6_HDR (1ULL << 7) /**< RX packet with IPv6 header. */
-#define PKT_RX_IPV6_HDR_EXT (1ULL << 8) /**< RX packet with extended IPv6 header. */
-#define PKT_RX_IEEE1588_PTP (1ULL << 9) /**< RX IEEE1588 L2 Ethernet PT Packet. */
-#define PKT_RX_IEEE1588_TMST (1ULL << 10) /**< RX IEEE1588 L2/L4 timestamped packet.*/
-#define PKT_RX_TUNNEL_IPV4_HDR (1ULL << 11) /**< RX tunnel packet with IPv4 header.*/
-#define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet with IPv6 header. */
-#define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR match. */
-#define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
+#define PKT_RX_IEEE1588_PTP (1ULL << 5) /**< RX IEEE1588 L2 Ethernet PT Packet. */
+#define PKT_RX_IEEE1588_TMST (1ULL << 6) /**< RX IEEE1588 L2/L4 timestamped packet.*/
+#define PKT_RX_FDIR_ID (1ULL << 7) /**< FD id reported if FDIR match. */
+#define PKT_RX_FDIR_FLX (1ULL << 8) /**< Flexible bytes reported if FDIR match. */
/* add new RX flags here */

/* add new TX flags here */
--
1.9.3
Helin Zhang
2015-02-17 06:59:18 UTC
Permalink
Currently only 6 bits stored in ol_flags are used to indicate the
packet types. This is not enough, as some NIC hardware can recognize
quite a lot of packet types, e.g. i40e hardware can recognize more
than 150 packet types. Hiding those packet types hides hardware
offload capabilities which could be quite useful for improving
performance and for end users. So a unified packet type is needed to
support all possible PMDs. The 16-bit packet_type field in the mbuf
structure can be enlarged to 32 bits and used for this purpose. In
addition, all packet type bits stored in the ol_flags field should be
removed entirely, saving 6 bits of ol_flags as a benefit.

The 32 bits of packet_type can be divided into several sub-fields to
indicate different packet type information of a packet. The initial
design divides those bits into fields for L2 types, L3 types, L4
types, tunnel types, inner L2 types, inner L3 types and inner L4
types. All PMDs should translate the offloaded packet types into
these 7 fields of information for user applications, as sketched
below.
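
As a concrete illustration of the split, a minimal sketch of per-field
masks, assuming each field occupies 4 bits starting from bit 0; the
EX_ names are illustrative only, and the authoritative layout is the
bit-field union added to struct rte_mbuf later in this series:

/* Hypothetical masks mirroring the 7 sub-fields, 4 bits each. */
#define EX_PTYPE_L2_MASK        0x0000000f /* (outer) L2 type */
#define EX_PTYPE_L3_MASK        0x000000f0 /* (outer) L3 type */
#define EX_PTYPE_L4_MASK        0x00000f00 /* (outer) L4 type */
#define EX_PTYPE_TUNNEL_MASK    0x0000f000 /* tunnel type */
#define EX_PTYPE_INNER_L2_MASK  0x000f0000 /* inner L2 type */
#define EX_PTYPE_INNER_L3_MASK  0x00f00000 /* inner L3 type */
#define EX_PTYPE_INNER_L4_MASK  0x0f000000 /* inner L4 type */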

v2 changes:
* Enlarged the packet_type field from 16 bits to 32 bits.
* Redefined the packet type sub-fields.
* Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
* Used redefined packet types and enlarged packet_type field for all PMDs
and corresponding applications.
* Removed changes in bond and its relevant application, as there is no need
at all according to the recent bond changes.

v3 changes:
* Put the mbuf layout changes into a single patch.
* Put vector ixgbe changes right after mbuf changes.
* Disabled vector ixgbe PMD by default, as mbuf layout changed, and then
re-enabled it after vector ixgbe PMD updated.
* Put the definitions of unified packet type into a single patch.
* Minor bug fixes and enhancements in l3fwd example.

Helin Zhang (16):
mbuf: redefinition of packet_type in rte_mbuf
ixgbe: support of unified packet type for vector
mbuf: add definitions of unified packet types
e1000: support of unified packet type
ixgbe: support of unified packet type
i40e: support of unified packet type
enic: support of unified packet type
vmxnet3: support of unified packet type
app/test-pipeline: support of unified packet type
app/testpmd: support of unified packet type
examples/ip_fragmentation: support of unified packet type
examples/ip_reassembly: support of unified packet type
examples/l3fwd-acl: support of unified packet type
examples/l3fwd-power: support of unified packet type
examples/l3fwd: support of unified packet type
mbuf: remove old packet type bit masks

app/test-pipeline/pipeline_hash.c | 7 +-
app/test-pmd/csumonly.c | 10 +-
app/test-pmd/rxonly.c | 9 +-
examples/ip_fragmentation/main.c | 7 +-
examples/ip_reassembly/main.c | 7 +-
examples/l3fwd-acl/main.c | 19 +-
examples/l3fwd-power/main.c | 5 +-
examples/l3fwd/main.c | 71 +-
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 4 +-
lib/librte_mbuf/rte_mbuf.c | 6 -
lib/librte_mbuf/rte_mbuf.h | 127 +++-
lib/librte_pmd_e1000/igb_rxtx.c | 98 ++-
lib/librte_pmd_enic/enic_main.c | 14 +-
lib/librte_pmd_i40e/i40e_rxtx.c | 786 ++++++++++++++-------
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 146 +++-
lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c | 49 +-
lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 4 +-
17 files changed, 921 insertions(+), 448 deletions(-)
--
1.9.3
Helin Zhang
2015-02-17 06:59:19 UTC
Permalink
In order to unify the packet type, the 'packet_type' field in
'struct rte_mbuf' needs to be extended from 16 to 32 bits.
Accordingly, some fields in 'struct rte_mbuf' are re-organized to
support this change for the vector PMD. As 'struct rte_kni_mbuf' for
KNI must map exactly onto 'struct rte_mbuf', it is modified
accordingly. In addition, the vector PMD of ixgbe is disabled by
default, as the 'struct rte_mbuf' layout changed.

Signed-off-by: Helin Zhang <***@intel.com>
Signed-off-by: Cunming Liang <***@intel.com>
---
config/common_linuxapp | 2 +-
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 4 ++--
lib/librte_mbuf/rte_mbuf.h | 23 +++++++++++++++-------
3 files changed, 19 insertions(+), 10 deletions(-)

v2 changes:
* Enlarged the packet_type field from 16 bits to 32 bits.
* Redefined the packet type sub-fields.
* Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.

v3 changes:
* Put the mbuf layout changes into a single patch.
* Disabled vector ixgbe PMD by default, as mbuf layout changed.

diff --git a/config/common_linuxapp b/config/common_linuxapp
index d428f84..7a530b9 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -160,7 +160,7 @@ CONFIG_RTE_LIBRTE_IXGBE_DEBUG_TX_FREE=n
CONFIG_RTE_LIBRTE_IXGBE_DEBUG_DRIVER=n
CONFIG_RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC=n
CONFIG_RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC=y
-CONFIG_RTE_IXGBE_INC_VECTOR=y
+CONFIG_RTE_IXGBE_INC_VECTOR=n
CONFIG_RTE_IXGBE_RX_OLFLAGS_ENABLE=y

#
diff --git a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
index 1e55c2d..bd1cc09 100644
--- a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
+++ b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
@@ -117,9 +117,9 @@ struct rte_kni_mbuf {
uint16_t data_off; /**< Start address of data in segment buffer. */
char pad1[4];
uint64_t ol_flags; /**< Offload features. */
- char pad2[2];
- uint16_t data_len; /**< Amount of data in segment buffer. */
+ char pad2[4];
uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
+ uint16_t data_len; /**< Amount of data in segment buffer. */

/* fields on second cache line */
char pad3[8] __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)));
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index e3008c6..6f8e1dd 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -259,17 +259,26 @@ struct rte_mbuf {
/* remaining bytes are set on RX when pulling packet from descriptor */
MARKER rx_descriptor_fields1;

- /**
- * The packet type, which is used to indicate ordinary packet and also
- * tunneled packet format, i.e. each number is represented a type of
- * packet.
+ /*
+ * The packet type, which is the combination of outer/inner L2, L3, L4
+ * and tunnel types.
*/
- uint16_t packet_type;
+ union {
+ uint32_t packet_type; /**< L2/L3/L4 and tunnel information. */
+ struct {
+ uint32_t l2_type:4; /**< (Outer) L2 type. */
+ uint32_t l3_type:4; /**< (Outer) L3 type. */
+ uint32_t l4_type:4; /**< (Outer) L4 type. */
+ uint32_t tun_type:4; /**< Tunnel type. */
+ uint32_t inner_l2_type:4; /**< Inner L2 type. */
+ uint32_t inner_l3_type:4; /**< Inner L3 type. */
+ uint32_t inner_l4_type:4; /**< Inner L4 type. */
+ };
+ };

- uint16_t data_len; /**< Amount of data in segment buffer. */
uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
+ uint16_t data_len; /**< Amount of data in segment buffer. */
uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
- uint16_t reserved;
union {
uint32_t rss; /**< RSS hash result if RSS enabled */
struct {
--
1.9.3
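
A small, hedged usage sketch of the new union: the same 32 bits can be
read either wholesale or per field (bit-field ordering assumed
low-to-high, as with GCC on little-endian targets; the ex_ name is
illustrative):

#include <stdio.h>
#include <rte_mbuf.h>

static void
ex_inspect(const struct rte_mbuf *m)
{
	/* Whole-word access: compare or mask the full 32-bit value;
	 * 0 is treated as "unknown" in the unified scheme. */
	if (m->packet_type == 0)
		return;

	/* Per-field access through the anonymous bit-field struct. */
	printf("l3=%u tunnel=%u inner_l4=%u\n",
	       (unsigned)m->l3_type, (unsigned)m->tun_type,
	       (unsigned)m->inner_l4_type);
}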
Helin Zhang
2015-02-17 06:59:20 UTC
Permalink
To unify the packet type, the bit masks of packet type for ol_flags
are replaced. In addition, more packet types (UDP, TCP and SCTP) are
supported in the vectorized ixgbe PMD.
Note that a performance drop of around 2% (64B packets) was observed
when doing 4-port (1 port per 82599 card) IO forwarding on the same
SNB core.

Signed-off-by: Cunming Liang <***@intel.com>
Signed-off-by: Helin Zhang <***@intel.com>
---
config/common_linuxapp | 2 +-
lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c | 49 +++++++++++++++++++----------------
2 files changed, 27 insertions(+), 24 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.

v3 changes:
* Put vector ixgbe changes right after mbuf changes.
* Enabled vector ixgbe PMD by default together with changes for updated
vector PMD.

diff --git a/config/common_linuxapp b/config/common_linuxapp
index 7a530b9..d428f84 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -160,7 +160,7 @@ CONFIG_RTE_LIBRTE_IXGBE_DEBUG_TX_FREE=n
CONFIG_RTE_LIBRTE_IXGBE_DEBUG_DRIVER=n
CONFIG_RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC=n
CONFIG_RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC=y
-CONFIG_RTE_IXGBE_INC_VECTOR=n
+CONFIG_RTE_IXGBE_INC_VECTOR=y
CONFIG_RTE_IXGBE_RX_OLFLAGS_ENABLE=y

#
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
index b54cb19..357eb1d 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
@@ -134,44 +134,35 @@ ixgbe_rxq_rearm(struct igb_rx_queue *rxq)
*/
#ifdef RTE_IXGBE_RX_OLFLAGS_ENABLE

-#define OLFLAGS_MASK ((uint16_t)(PKT_RX_VLAN_PKT | PKT_RX_IPV4_HDR |\
- PKT_RX_IPV4_HDR_EXT | PKT_RX_IPV6_HDR |\
- PKT_RX_IPV6_HDR_EXT))
-#define OLFLAGS_MASK_V (((uint64_t)OLFLAGS_MASK << 48) | \
- ((uint64_t)OLFLAGS_MASK << 32) | \
- ((uint64_t)OLFLAGS_MASK << 16) | \
- ((uint64_t)OLFLAGS_MASK))
-#define PTYPE_SHIFT (1)
+#define OLFLAGS_MASK_V (((uint64_t)PKT_RX_VLAN_PKT << 48) | \
+ ((uint64_t)PKT_RX_VLAN_PKT << 32) | \
+ ((uint64_t)PKT_RX_VLAN_PKT << 16) | \
+ ((uint64_t)PKT_RX_VLAN_PKT))
#define VTAG_SHIFT (3)

static inline void
desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
{
- __m128i ptype0, ptype1, vtag0, vtag1;
+ __m128i vtag0, vtag1;
union {
uint16_t e[4];
uint64_t dword;
} vol;

- ptype0 = _mm_unpacklo_epi16(descs[0], descs[1]);
- ptype1 = _mm_unpacklo_epi16(descs[2], descs[3]);
vtag0 = _mm_unpackhi_epi16(descs[0], descs[1]);
vtag1 = _mm_unpackhi_epi16(descs[2], descs[3]);

- ptype1 = _mm_unpacklo_epi32(ptype0, ptype1);
vtag1 = _mm_unpacklo_epi32(vtag0, vtag1);
-
- ptype1 = _mm_slli_epi16(ptype1, PTYPE_SHIFT);
vtag1 = _mm_srli_epi16(vtag1, VTAG_SHIFT);

- ptype1 = _mm_or_si128(ptype1, vtag1);
- vol.dword = _mm_cvtsi128_si64(ptype1) & OLFLAGS_MASK_V;
+ vol.dword = _mm_cvtsi128_si64(vtag1) & OLFLAGS_MASK_V;

rx_pkts[0]->ol_flags = vol.e[0];
rx_pkts[1]->ol_flags = vol.e[1];
rx_pkts[2]->ol_flags = vol.e[2];
rx_pkts[3]->ol_flags = vol.e[3];
}
+
#else
#define desc_to_olflags_v(desc, rx_pkts) do {} while (0)
#endif
@@ -197,13 +188,15 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
uint64_t var;
__m128i shuf_msk;
__m128i crc_adjust = _mm_set_epi16(
- 0, 0, 0, 0, /* ignore non-length fields */
+ 0, 0, 0, /* ignore non-length fields */
+ -rxq->crc_len, /* sub crc on data_len */
0, /* ignore high-16bits of pkt_len */
-rxq->crc_len, /* sub crc on pkt_len */
- -rxq->crc_len, /* sub crc on data_len */
- 0 /* ignore pkt_type field */
+ 0, 0 /* ignore pkt_type field */
);
__m128i dd_check, eop_check;
+ __m128i desc_mask = _mm_set_epi32(0xFFFFFFFF, 0xFFFFFFFF,
+ 0xFFFFFFFF, 0xFFFF07F0);

if (unlikely(nb_pkts < RTE_IXGBE_VPMD_RX_BURST))
return 0;
@@ -234,12 +227,13 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* mask to shuffle from desc. to mbuf */
shuf_msk = _mm_set_epi8(
7, 6, 5, 4, /* octet 4~7, 32bits rss */
- 0xFF, 0xFF, /* skip high 16 bits vlan_macip, zero out */
15, 14, /* octet 14~15, low 16 bits vlan_macip */
+ 13, 12, /* octet 12~13, 16 bits data_len */
0xFF, 0xFF, /* skip high 16 bits pkt_len, zero out */
13, 12, /* octet 12~13, low 16 bits pkt_len */
- 13, 12, /* octet 12~13, 16 bits data_len */
- 0xFF, 0xFF /* skip pkt_type field */
+ 0xFF, 0xFF, /* skip high 16 bits pkt_type */
+ 1, /* octet 1, 8 bits pkt_type field */
+ 0 /* octet 0, 4 bits offset 4 pkt_type field */
);

/* Cache is empty -> need to scan the buffer rings, but first move
@@ -248,6 +242,7 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,

/*
* A. load 4 packet in one loop
+ * [A*. mask out 4 unused dirty field in desc]
* B. copy 4 mbuf point from swring to rx_pkts
* C. calc the number of DD bits among the 4 packets
* [C*. extract the end-of-packet bit, if requested]
@@ -289,6 +284,14 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* B.2 copy 2 mbuf point into rx_pkts */
_mm_storeu_si128((__m128i *)&rx_pkts[pos+2], mbp2);

+ /* A* mask out 0~3 bits RSS type */
+ descs[3] = _mm_and_si128(descs[3], desc_mask);
+ descs[2] = _mm_and_si128(descs[2], desc_mask);
+
+ /* A* mask out 0~3 bits RSS type */
+ descs[1] = _mm_and_si128(descs[1], desc_mask);
+ descs[0] = _mm_and_si128(descs[0], desc_mask);
+
/* avoid compiler reorder optimization */
rte_compiler_barrier();

@@ -301,7 +304,7 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* C.1 4=>2 filter staterr info only */
sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);

- /* set ol_flags with packet type and vlan tag */
+ /* set ol_flags with vlan packet type */
desc_to_olflags_v(descs, &rx_pkts[pos]);

/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
--
1.9.3
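
The desc_mask step deserves a note: in the low dword of each
descriptor the 0~3 RSS-type bits sit next to the packet-type bits, so
the vector path ANDs them away before shuffling the remaining type
bits straight into the new 32-bit packet_type. A scalar sketch of the
same masking, under the assumption that bits 4~10 (0x07F0) of the low
word carry the packet type (ex_ name illustrative):

/* Scalar equivalent of the desc_mask AND in the vector code above:
 * keep the packet-type bits and the upper half of the dword, drop
 * the RSS-type bits 0~3 so they cannot leak into packet_type. */
static inline uint32_t
ex_clean_desc_low(uint32_t desc_low)
{
	return desc_low & 0xFFFF07F0; /* mirrors _mm_set_epi32(..., 0xFFFF07F0) */
}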
Helin Zhang
2015-02-17 06:59:22 UTC
Permalink
To unify packet types among all PMDs, the bit masks of packet type for
'ol_flags' are replaced by the unified packet type.

Signed-off-by: Helin Zhang <***@intel.com>
---
lib/librte_pmd_e1000/igb_rxtx.c | 98 ++++++++++++++++++++++++++++++++++-------
1 file changed, 83 insertions(+), 15 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.

diff --git a/lib/librte_pmd_e1000/igb_rxtx.c b/lib/librte_pmd_e1000/igb_rxtx.c
index 5c394a9..12a68f4 100644
--- a/lib/librte_pmd_e1000/igb_rxtx.c
+++ b/lib/librte_pmd_e1000/igb_rxtx.c
@@ -602,17 +602,85 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* RX functions
*
**********************************************************************/
+#define IGB_PACKET_TYPE_IPV4 0X01
+#define IGB_PACKET_TYPE_IPV4_TCP 0X11
+#define IGB_PACKET_TYPE_IPV4_UDP 0X21
+#define IGB_PACKET_TYPE_IPV4_SCTP 0X41
+#define IGB_PACKET_TYPE_IPV4_EXT 0X03
+#define IGB_PACKET_TYPE_IPV4_EXT_SCTP 0X43
+#define IGB_PACKET_TYPE_IPV6 0X04
+#define IGB_PACKET_TYPE_IPV6_TCP 0X14
+#define IGB_PACKET_TYPE_IPV6_UDP 0X24
+#define IGB_PACKET_TYPE_IPV6_EXT 0X0C
+#define IGB_PACKET_TYPE_IPV6_EXT_TCP 0X1C
+#define IGB_PACKET_TYPE_IPV6_EXT_UDP 0X2C
+#define IGB_PACKET_TYPE_IPV4_IPV6 0X05
+#define IGB_PACKET_TYPE_IPV4_IPV6_TCP 0X15
+#define IGB_PACKET_TYPE_IPV4_IPV6_UDP 0X25
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT 0X0D
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT_TCP 0X1D
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT_UDP 0X2D
+#define IGB_PACKET_TYPE_MAX 0X80
+#define IGB_PACKET_TYPE_MASK 0X7F
+#define IGB_PACKET_TYPE_SHIFT 0X04
+static inline uint32_t
+igb_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
+{
+ static const uint32_t
+ ptype_table[IGB_PACKET_TYPE_MAX] __rte_cache_aligned = {
+ [IGB_PACKET_TYPE_IPV4] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4,
+ [IGB_PACKET_TYPE_IPV4_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [IGB_PACKET_TYPE_IPV6] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6,
+ [IGB_PACKET_TYPE_IPV4_IPV6] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6,
+ [IGB_PACKET_TYPE_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT,
+ [IGB_PACKET_TYPE_IPV4_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
+ [IGB_PACKET_TYPE_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
+ [IGB_PACKET_TYPE_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
+ [IGB_PACKET_TYPE_IPV4_EXT_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_SCTP,
+ };
+ if (unlikely(pkt_info & E1000_RXDADV_PKTTYPE_ETQF))
+ return RTE_PTYPE_UNKNOWN;
+
+ pkt_info = (pkt_info >> IGB_PACKET_TYPE_SHIFT) & IGB_PACKET_TYPE_MASK;
+
+ return ptype_table[pkt_info];
+}
+
static inline uint64_t
rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
{
- uint64_t pkt_flags;
-
- static uint64_t ip_pkt_types_map[16] = {
- 0, PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, PKT_RX_IPV4_HDR_EXT,
- PKT_RX_IPV6_HDR, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- };
+ uint64_t pkt_flags = ((hl_tp_rs & 0x0F) == 0) ? 0 : PKT_RX_RSS_HASH;

#if defined(RTE_LIBRTE_IEEE1588)
static uint32_t ip_pkt_etqf_map[8] = {
@@ -620,14 +688,10 @@ rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
0, 0, 0, 0,
};

- pkt_flags = (hl_tp_rs & E1000_RXDADV_PKTTYPE_ETQF) ?
- ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07] :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
-#else
- pkt_flags = (hl_tp_rs & E1000_RXDADV_PKTTYPE_ETQF) ? 0 :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
+ pkt_flags |= ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07];
#endif
- return pkt_flags | (((hl_tp_rs & 0x0F) == 0) ? 0 : PKT_RX_RSS_HASH);
+
+ return pkt_flags;
}

static inline uint64_t
@@ -802,6 +866,8 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
rxm->ol_flags = pkt_flags;
+ rxm->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.lower.
+ lo_dword.hs_rss.pkt_info);

/*
* Store the mbuf address into the next entry of the array
@@ -1036,6 +1102,8 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
first_seg->ol_flags = pkt_flags;
+ first_seg->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.
+ lower.lo_dword.hs_rss.pkt_info);

/* Prefetch data of first segment, if configured to do so. */
rte_packet_prefetch((char *)first_seg->buf_addr +
--
1.9.3
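As a usage note: once the PMD fills packet_type from the table above, an
application can classify a received burst with mask-and-compare tests
instead of re-parsing headers. A hedged sketch using the mask macros
introduced by the mbuf patch of this series (count_l4_types is an
illustrative name, not part of the patch):

#include <stdint.h>
#include <rte_mbuf.h>

/* Count TCP, UDP and other packets in a burst by testing only the
 * (outer) L4 field of the unified packet type. */
static void
count_l4_types(struct rte_mbuf **pkts, uint16_t nb,
	       uint64_t *tcp, uint64_t *udp, uint64_t *other)
{
	uint16_t i;

	for (i = 0; i < nb; i++) {
		uint32_t l4 = pkts[i]->packet_type & RTE_PTYPE_L4_MASK;

		if (l4 == RTE_PTYPE_L4_TCP)
			(*tcp)++;
		else if (l4 == RTE_PTYPE_L4_UDP)
			(*udp)++;
		else
			(*other)++;
	}
}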
Helin Zhang
2015-02-17 06:59:21 UTC
There are only 6 bit flags in ol_flags for indicating packet types,
which is not enough to describe all the possible packet types hardware
can recognize. For example, i40e hardware can recognize more than 150
packet types. The unified packet type is composed of L2 type, L3 type,
L4 type, tunnel type, inner L2 type, inner L3 type and inner L4 type
fields, and is stored in the 32-bit 'packet_type' field of
'struct rte_mbuf'.

Signed-off-by: Helin Zhang <***@intel.com>
---
lib/librte_mbuf/rte_mbuf.h | 90 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 90 insertions(+)

v3 changes:
* Put the definitions of unified packet type into a single patch.

diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 6f8e1dd..2cdf8a0 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -192,6 +192,96 @@ extern "C" {
/* Use final bit of flags to indicate a control mbuf */
#define CTRL_MBUF_FLAG (1ULL << 63) /**< Mbuf contains control data */

+/*
+ * 32 bits are divided into several fields to mark packet types. Note that
+ * each field holds an index value rather than independent bit flags.
+ * - Bit 3:0 is for L2 types.
+ * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
+ * - Bit 11:8 is for L4 or outer L4 (for tunneling case) types.
+ * - Bit 15:12 is for tunnel types.
+ * - Bit 19:16 is for inner L2 types.
+ * - Bit 23:20 is for inner L3 types.
+ * - Bit 27:24 is for inner L4 types.
+ * - Bit 31:28 is reserved.
+ *
+ * To be compatible with the vector PMD, RTE_PTYPE_L3_IPV4, RTE_PTYPE_L3_IPV4_EXT,
+ * RTE_PTYPE_L3_IPV6, RTE_PTYPE_L3_IPV6_EXT, RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP
+ * and RTE_PTYPE_L4_SCTP must keep the values below, within a contiguous 7 bits.
+ *
+ * Note that the L3 type values are chosen to make checking for an IPv4/IPv6
+ * header cheap. Read the annotations of RTE_ETH_IS_IPV4_HDR and
+ * RTE_ETH_IS_IPV6_HDR before changing any L3 type value.
+ */
+#define RTE_PTYPE_UNKNOWN 0x00000000
+/* bit 3:0 for L2 types */
+#define RTE_PTYPE_L2_MAC 0x00000001
+#define RTE_PTYPE_L2_MAC_TIMESYNC 0x00000002
+#define RTE_PTYPE_L2_ARP 0x00000003
+#define RTE_PTYPE_L2_LLDP 0x00000004
+#define RTE_PTYPE_L2_MASK 0x0000000f
+/* bit 7:4 for L3 types */
+#define RTE_PTYPE_L3_IPV4 0x00000010
+#define RTE_PTYPE_L3_IPV4_EXT 0x00000030
+#define RTE_PTYPE_L3_IPV6 0x00000040
+#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x00000090
+#define RTE_PTYPE_L3_IPV6_EXT 0x000000c0
+#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x000000e0
+#define RTE_PTYPE_L3_MASK 0x000000f0
+/* bit 11:8 for L4 types */
+#define RTE_PTYPE_L4_TCP 0x00000100
+#define RTE_PTYPE_L4_UDP 0x00000200
+#define RTE_PTYPE_L4_FRAG 0x00000300
+#define RTE_PTYPE_L4_SCTP 0x00000400
+#define RTE_PTYPE_L4_ICMP 0x00000500
+#define RTE_PTYPE_L4_NONFRAG 0x00000600
+#define RTE_PTYPE_L4_MASK 0x00000f00
+/* bit 15:12 for tunnel types */
+#define RTE_PTYPE_TUNNEL_IP 0x00001000
+#define RTE_PTYPE_TUNNEL_GRE 0x00002000
+#define RTE_PTYPE_TUNNEL_VXLAN 0x00003000
+#define RTE_PTYPE_TUNNEL_NVGRE 0x00004000
+#define RTE_PTYPE_TUNNEL_GENEVE 0x00005000
+#define RTE_PTYPE_TUNNEL_GRENAT 0x00006000
+#define RTE_PTYPE_TUNNEL_MASK 0x0000f000
+/* bit 19:16 for inner L2 types */
+#define RTE_PTYPE_INNER_L2_MAC 0x00010000
+#define RTE_PTYPE_INNER_L2_MAC_VLAN 0x00020000
+#define RTE_PTYPE_INNER_L2_MASK 0x000f0000
+/* bit 23:20 for inner L3 types */
+#define RTE_PTYPE_INNER_L3_IPV4 0x00100000
+#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x00200000
+#define RTE_PTYPE_INNER_L3_IPV6 0x00300000
+#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x00400000
+#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x00500000
+#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x00600000
+#define RTE_PTYPE_INNER_INNER_L3_MASK 0x00f00000
+/* bit 27:24 for inner L4 types */
+#define RTE_PTYPE_INNER_L4_TCP 0x01000000
+#define RTE_PTYPE_INNER_L4_UDP 0x02000000
+#define RTE_PTYPE_INNER_L4_FRAG 0x03000000
+#define RTE_PTYPE_INNER_L4_SCTP 0x04000000
+#define RTE_PTYPE_INNER_L4_ICMP 0x05000000
+#define RTE_PTYPE_INNER_L4_NONFRAG 0x06000000
+#define RTE_PTYPE_INNER_L4_MASK 0x0f000000
+/* bit 31:28 reserved */
+
+/**
+ * Check if the (outer) L3 header is IPv4. To avoid comparing IPv4 types one by
+ * one, bit 4 is selected to be used for IPv4 only. Then checking bit 4 can
+ * determine if it is an IPv4 packet.
+ */
+#define RTE_ETH_IS_IPV4_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV4)
+
+/**
+ * Check if the (outer) L3 header is IPv6. To avoid comparing IPv6 types one by
+ * one, bit 6 is selected to be used for IPv6 only. Then checking bit 6 can
+ * determine if it is an IPv6 packet.
+ */
+#define RTE_ETH_IS_IPV6_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV6)
+
+/* Check if it is a tunneling packet */
+#define RTE_ETH_IS_TUNNEL_PKT(ptype) ((ptype) & RTE_PTYPE_TUNNEL_MASK)
+
/**
* Get the name of a RX offload flag
*
--
1.9.3
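To make the field layout concrete, the sketch below (not part of the
patch; dump_ptype is an illustrative name) splits a packet_type value into
its seven fields with the masks defined above and exercises the bit-4 and
bit-6 shortcuts:

#include <stdio.h>
#include <stdint.h>
#include <rte_mbuf.h>

/* Print each field of a unified packet type. Note the doubled "INNER"
 * in the inner-L3 mask name, as spelled in this patch. */
static void
dump_ptype(uint32_t ptype)
{
	printf("l2=%#x l3=%#x l4=%#x tun=%#x in_l2=%#x in_l3=%#x in_l4=%#x\n",
	       ptype & RTE_PTYPE_L2_MASK,
	       ptype & RTE_PTYPE_L3_MASK,
	       ptype & RTE_PTYPE_L4_MASK,
	       ptype & RTE_PTYPE_TUNNEL_MASK,
	       ptype & RTE_PTYPE_INNER_L2_MASK,
	       ptype & RTE_PTYPE_INNER_INNER_L3_MASK,
	       ptype & RTE_PTYPE_INNER_L4_MASK);

	if (RTE_ETH_IS_IPV4_HDR(ptype))
		printf("outer L3 is IPv4\n");
	else if (RTE_ETH_IS_IPV6_HDR(ptype))
		printf("outer L3 is IPv6\n");
}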
Olivier MATZ
2015-02-17 09:01:40 UTC
Hi Helin,
Post by Helin Zhang
As there are only 6 bit flags in ol_flags for indicating packet
types, which is not enough to describe all the possible packet
types hardware can recognize. For example, i40e hardware can
recognize more than 150 packet types. Unified packet type is
composed of L2 type, L3 type, L4 type, tunnel type, inner L2 type,
inner L3 type and inner L4 type fields, and can be stored in
'struct rte_mbuf' of 32 bits field 'packet_type'.
A formal definition of each flag is still missing. I explained
several times why it's needed. We must be able to answer these
questions:

- If I'm developing a PMD, what fields should I check in the packet
to set a specific flag?
- If I'm developing an application, if a specific flag is set, what
checks can I skip?

Example with RTE_PTYPE_L3_IPV4:

- IP version field is 4
- no IP options (header size is 20)
- layer 2 identified the packet as IP (ex: ethertype=0x800)

I think we need such a definition for all packet types.

Regards,
Olivier
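For illustration, the kind of software check Olivier's three conditions
imply might look like the sketch below. Treating packets that do carry IP
options as RTE_PTYPE_L3_IPV4_EXT is an assumption of the sketch, not
something stated in the thread:

#include <rte_byteorder.h>
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_mbuf.h>

/* Classify the outer L3 header per the proposed definition of
 * RTE_PTYPE_L3_IPV4: ethertype says IP, version field is 4, and the
 * header is exactly 20 bytes (no options). */
static uint32_t
sw_outer_l3_ptype(const struct ether_hdr *eth, const struct ipv4_hdr *ip)
{
	if (eth->ether_type != rte_cpu_to_be_16(ETHER_TYPE_IPv4))
		return RTE_PTYPE_UNKNOWN;     /* L2 did not identify IP */
	if ((ip->version_ihl >> 4) != 4)
		return RTE_PTYPE_UNKNOWN;     /* IP version field not 4 */
	if ((ip->version_ihl & 0x0f) != 5)
		return RTE_PTYPE_L3_IPV4_EXT; /* options present (assumed) */
	return RTE_PTYPE_L3_IPV4;             /* plain 20-byte header */
}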
Zhang, Helin
2015-02-20 14:26:43 UTC
-----Original Message-----
Sent: Tuesday, February 17, 2015 5:02 PM
Subject: Re: [dpdk-dev] [PATCH v3 03/16] mbuf: add definitions of unified packet
types
Hi Helin,
Post by Helin Zhang
As there are only 6 bit flags in ol_flags for indicating packet types,
which is not enough to describe all the possible packet types hardware
can recognize. For example, i40e hardware can recognize more than 150
packet types. Unified packet type is composed of L2 type, L3 type, L4
type, tunnel type, inner L2 type, inner L3 type and inner L4 type
fields, and can be stored in 'struct rte_mbuf' of 32 bits field
'packet_type'.
A formal definition of each flag is still missing. I explained several times why it's
needed. We must be able to answer to these questions:
- If I'm developing a PMD, what fields should I check in the packet
to set a specific flag?
- If I'm developing an application, if a specific flag is set, what
checks can I skip?
Example with RTE_PTYPE_L3_IPV4:
- IP version field is 4
- no IP options (header size is 20)
- layer 2 identified the packet as IP (ex: ethertype=0x800)
I think we need such a definition for all packet types.
You meant we need a detailed description of each packet type, right?
If yes, I can add that information soon. Thanks for the help!

Regards,
Helin
Regards,
Olivier
Olivier MATZ
2015-02-24 09:09:11 UTC
Hi Helin,
Post by Zhang, Helin
Post by Helin Zhang
As there are only 6 bit flags in ol_flags for indicating packet types,
which is not enough to describe all the possible packet types hardware
can recognize. For example, i40e hardware can recognize more than 150
packet types. Unified packet type is composed of L2 type, L3 type, L4
type, tunnel type, inner L2 type, inner L3 type and inner L4 type
fields, and can be stored in 'struct rte_mbuf' of 32 bits field
'packet_type'.
A formal definition of each flag is still missing. I explained several times why it's
needed. We must be able to answer to these questions:
- If I'm developing a PMD, what fields should I check in the packet
to set a specific flag?
- If I'm developing an application, if a specific flag is set, what
checks can I skip?
Example with RTE_PTYPE_L3_IPV4:
- IP version field is 4
- no IP options (header size is 20)
- layer 2 identified the packet as IP (ex: ethertype=0x800)
I think we need such a definition for all packet types.
You meant we need a detailed description of each packet type, right?
If yes, I can add those information soon. Thanks for the helps!
Yes, I think this would be really helpful.

Thank you!
Olivier
Zhang, Helin
2015-02-24 13:38:08 UTC
-----Original Message-----
Sent: Tuesday, February 24, 2015 5:09 PM
Subject: Re: [dpdk-dev] [PATCH v3 03/16] mbuf: add definitions of unified packet
types
Hi Helin,
Post by Zhang, Helin
Post by Olivier MATZ
Post by Helin Zhang
As there are only 6 bit flags in ol_flags for indicating packet
types, which is not enough to describe all the possible packet types
hardware can recognize. For example, i40e hardware can recognize
more than 150 packet types. Unified packet type is composed of L2
type, L3 type, L4 type, tunnel type, inner L2 type, inner L3 type
and inner L4 type fields, and can be stored in 'struct rte_mbuf' of
32 bits field 'packet_type'.
A formal definition of each flag is still missing. I explained
several times why it's needed. We must be able to answer to these questions:
- If I'm developing a PMD, what fields should I check in the packet
to set a specific flag?
- If I'm developing an application, if a specific flag is set, what
checks can I skip?
Example with RTE_PTYPE_L3_IPV4:
- IP version field is 4
- no IP options (header size is 20)
- layer 2 identified the packet as IP (ex: ethertype=0x800)
I think we need such a definition for all packet types.
You meant we need a detailed description of each packet type, right?
If yes, I can add those information soon. Thanks for the helps!
Yes, I think this would be really helpful.
OK. Got it. I will add them and send out a v4 version. Thanks for your good suggestions!

Regards,
Helin
Thank you!
Olivier
Helin Zhang
2015-02-17 06:59:24 UTC
To unify packet types among all PMDs, the packet-type bit masks in
'ol_flags' are replaced by the unified packet type.

Signed-off-by: Helin Zhang <***@intel.com>
---
lib/librte_pmd_i40e/i40e_rxtx.c | 786 ++++++++++++++++++++++++++--------------
1 file changed, 512 insertions(+), 274 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.

diff --git a/lib/librte_pmd_i40e/i40e_rxtx.c b/lib/librte_pmd_i40e/i40e_rxtx.c
index c9f1026..25ee9f8 100644
--- a/lib/librte_pmd_i40e/i40e_rxtx.c
+++ b/lib/librte_pmd_i40e/i40e_rxtx.c
@@ -151,272 +151,511 @@ i40e_rxd_error_to_pkt_flags(uint64_t qword)
return flags;
}

-/* Translate pkt types to pkt flags */
-static inline uint64_t
-i40e_rxd_ptype_to_pkt_flags(uint64_t qword)
+/* For details of what each value means, refer to the hardware datasheet */
+static inline uint32_t
+i40e_rxd_pkt_type_mapping(uint8_t ptype)
{
- uint8_t ptype = (uint8_t)((qword & I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
- static const uint64_t ip_ptype_map[I40E_MAX_PKT_TYPE] = {
- 0, /* PTYPE 0 */
- 0, /* PTYPE 1 */
- 0, /* PTYPE 2 */
- 0, /* PTYPE 3 */
- 0, /* PTYPE 4 */
- 0, /* PTYPE 5 */
- 0, /* PTYPE 6 */
- 0, /* PTYPE 7 */
- 0, /* PTYPE 8 */
- 0, /* PTYPE 9 */
- 0, /* PTYPE 10 */
- 0, /* PTYPE 11 */
- 0, /* PTYPE 12 */
- 0, /* PTYPE 13 */
- 0, /* PTYPE 14 */
- 0, /* PTYPE 15 */
- 0, /* PTYPE 16 */
- 0, /* PTYPE 17 */
- 0, /* PTYPE 18 */
- 0, /* PTYPE 19 */
- 0, /* PTYPE 20 */
- 0, /* PTYPE 21 */
- PKT_RX_IPV4_HDR, /* PTYPE 22 */
- PKT_RX_IPV4_HDR, /* PTYPE 23 */
- PKT_RX_IPV4_HDR, /* PTYPE 24 */
- 0, /* PTYPE 25 */
- PKT_RX_IPV4_HDR, /* PTYPE 26 */
- PKT_RX_IPV4_HDR, /* PTYPE 27 */
- PKT_RX_IPV4_HDR, /* PTYPE 28 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 29 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 30 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 31 */
- 0, /* PTYPE 32 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 33 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 34 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 35 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 36 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 37 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 38 */
- 0, /* PTYPE 39 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 40 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 41 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 42 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 43 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 44 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 45 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 46 */
- 0, /* PTYPE 47 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 48 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 49 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 50 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 51 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 52 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 53 */
- 0, /* PTYPE 54 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 55 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 56 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 57 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 58 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 59 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 60 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 61 */
- 0, /* PTYPE 62 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 63 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 64 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 65 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 66 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 67 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 68 */
- 0, /* PTYPE 69 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 70 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 71 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 72 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 73 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 74 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 75 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 76 */
- 0, /* PTYPE 77 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 78 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 79 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 80 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 81 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 82 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 83 */
- 0, /* PTYPE 84 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 85 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 86 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 87 */
- PKT_RX_IPV6_HDR, /* PTYPE 88 */
- PKT_RX_IPV6_HDR, /* PTYPE 89 */
- PKT_RX_IPV6_HDR, /* PTYPE 90 */
- 0, /* PTYPE 91 */
- PKT_RX_IPV6_HDR, /* PTYPE 92 */
- PKT_RX_IPV6_HDR, /* PTYPE 93 */
- PKT_RX_IPV6_HDR, /* PTYPE 94 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 95 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 96 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 97 */
- 0, /* PTYPE 98 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 99 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 100 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 101 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 102 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 103 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 104 */
- 0, /* PTYPE 105 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 106 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 107 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 108 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 109 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 110 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 111 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 112 */
- 0, /* PTYPE 113 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 114 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 115 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 116 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 117 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 118 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 119 */
- 0, /* PTYPE 120 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 121 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 122 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 123 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 124 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 125 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 126 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 127 */
- 0, /* PTYPE 128 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 129 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 130 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 131 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 132 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 133 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 134 */
- 0, /* PTYPE 135 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 136 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 137 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 138 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 139 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 140 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 141 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 142 */
- 0, /* PTYPE 143 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 144 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 145 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 146 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 147 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 148 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 149 */
- 0, /* PTYPE 150 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 151 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 152 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 153 */
- 0, /* PTYPE 154 */
- 0, /* PTYPE 155 */
- 0, /* PTYPE 156 */
- 0, /* PTYPE 157 */
- 0, /* PTYPE 158 */
- 0, /* PTYPE 159 */
- 0, /* PTYPE 160 */
- 0, /* PTYPE 161 */
- 0, /* PTYPE 162 */
- 0, /* PTYPE 163 */
- 0, /* PTYPE 164 */
- 0, /* PTYPE 165 */
- 0, /* PTYPE 166 */
- 0, /* PTYPE 167 */
- 0, /* PTYPE 168 */
- 0, /* PTYPE 169 */
- 0, /* PTYPE 170 */
- 0, /* PTYPE 171 */
- 0, /* PTYPE 172 */
- 0, /* PTYPE 173 */
- 0, /* PTYPE 174 */
- 0, /* PTYPE 175 */
- 0, /* PTYPE 176 */
- 0, /* PTYPE 177 */
- 0, /* PTYPE 178 */
- 0, /* PTYPE 179 */
- 0, /* PTYPE 180 */
- 0, /* PTYPE 181 */
- 0, /* PTYPE 182 */
- 0, /* PTYPE 183 */
- 0, /* PTYPE 184 */
- 0, /* PTYPE 185 */
- 0, /* PTYPE 186 */
- 0, /* PTYPE 187 */
- 0, /* PTYPE 188 */
- 0, /* PTYPE 189 */
- 0, /* PTYPE 190 */
- 0, /* PTYPE 191 */
- 0, /* PTYPE 192 */
- 0, /* PTYPE 193 */
- 0, /* PTYPE 194 */
- 0, /* PTYPE 195 */
- 0, /* PTYPE 196 */
- 0, /* PTYPE 197 */
- 0, /* PTYPE 198 */
- 0, /* PTYPE 199 */
- 0, /* PTYPE 200 */
- 0, /* PTYPE 201 */
- 0, /* PTYPE 202 */
- 0, /* PTYPE 203 */
- 0, /* PTYPE 204 */
- 0, /* PTYPE 205 */
- 0, /* PTYPE 206 */
- 0, /* PTYPE 207 */
- 0, /* PTYPE 208 */
- 0, /* PTYPE 209 */
- 0, /* PTYPE 210 */
- 0, /* PTYPE 211 */
- 0, /* PTYPE 212 */
- 0, /* PTYPE 213 */
- 0, /* PTYPE 214 */
- 0, /* PTYPE 215 */
- 0, /* PTYPE 216 */
- 0, /* PTYPE 217 */
- 0, /* PTYPE 218 */
- 0, /* PTYPE 219 */
- 0, /* PTYPE 220 */
- 0, /* PTYPE 221 */
- 0, /* PTYPE 222 */
- 0, /* PTYPE 223 */
- 0, /* PTYPE 224 */
- 0, /* PTYPE 225 */
- 0, /* PTYPE 226 */
- 0, /* PTYPE 227 */
- 0, /* PTYPE 228 */
- 0, /* PTYPE 229 */
- 0, /* PTYPE 230 */
- 0, /* PTYPE 231 */
- 0, /* PTYPE 232 */
- 0, /* PTYPE 233 */
- 0, /* PTYPE 234 */
- 0, /* PTYPE 235 */
- 0, /* PTYPE 236 */
- 0, /* PTYPE 237 */
- 0, /* PTYPE 238 */
- 0, /* PTYPE 239 */
- 0, /* PTYPE 240 */
- 0, /* PTYPE 241 */
- 0, /* PTYPE 242 */
- 0, /* PTYPE 243 */
- 0, /* PTYPE 244 */
- 0, /* PTYPE 245 */
- 0, /* PTYPE 246 */
- 0, /* PTYPE 247 */
- 0, /* PTYPE 248 */
- 0, /* PTYPE 249 */
- 0, /* PTYPE 250 */
- 0, /* PTYPE 251 */
- 0, /* PTYPE 252 */
- 0, /* PTYPE 253 */
- 0, /* PTYPE 254 */
- 0, /* PTYPE 255 */
+ static const uint32_t ptype_table[UINT8_MAX + 1] __rte_cache_aligned = {
+ /* L2 types */
+ /* [0] reserved */
+ [1] = RTE_PTYPE_L2_MAC,
+ [2] = RTE_PTYPE_L2_MAC_TIMESYNC,
+ /* [3] - [5] reserved */
+ [6] = RTE_PTYPE_L2_LLDP,
+ /* [7] - [10] reserved */
+ [11] = RTE_PTYPE_L2_ARP,
+ /* [12] - [21] reserved */
+
+ /* Non tunneled IPv4 */
+ [22] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [23] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [24] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [25] reserved */
+ [26] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [27] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [28] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv4 --> IPv4 */
+ [29] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [30] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [31] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [32] reserved */
+ [33] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [34] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [35] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> IPv6 */
+ [36] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [37] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [38] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [39] reserved */
+ [40] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [41] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [42] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN */
+ [43] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> IPv4 */
+ [44] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [45] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [46] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [47] reserved */
+ [48] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [49] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [50] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> IPv6 */
+ [51] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [52] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [53] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [54] reserved */
+ [55] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [56] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [57] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC */
+ [58] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+ [59] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [60] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [61] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [62] reserved */
+ [63] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [64] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [65] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+ [66] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [67] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [68] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [69] reserved */
+ [70] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [71] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [72] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+ [73] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+ [74] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [75] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [76] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [77] reserved */
+ [78] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [79] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [80] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+ [81] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [82] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [83] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [84] reserved */
+ [85] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [86] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [87] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* Non tunneled IPv6 */
+ [88] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [89] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [90] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [91] reserved */
+ [92] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [93] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [94] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv6 --> IPv4 */
+ [95] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [96] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [97] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [98] reserved */
+ [99] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [100] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [101] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> IPv6 */
+ [102] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [103] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [104] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [105] reserved */
+ [106] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [107] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [108] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN */
+ [109] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> IPv4 */
+ [110] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [111] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [112] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [113] reserved */
+ [114] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [115] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [116] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> IPv6 */
+ [117] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [118] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [119] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [120] reserved */
+ [121] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [122] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [123] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC */
+ [124] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+ [125] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [126] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [127] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [128] reserved */
+ [129] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [130] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [131] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+ [132] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [133] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [134] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [135] reserved */
+ [136] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [137] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [138] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+ [139] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+ [140] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [141] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [142] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [143] reserved */
+ [144] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [145] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [146] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+ [147] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [148] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [149] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [150] reserved */
+ [151] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [152] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [153] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* All others reserved */
};

- return ip_ptype_map[ptype];
+ return ptype_table[ptype];
}

#define I40E_RX_DESC_EXT_STATUS_FLEXBH_MASK 0x03
@@ -702,11 +941,11 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
rxdp[j].wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);

- mb->packet_type = (uint16_t)((qword1 &
- I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
+ mb->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >>
+ I40E_RXD_QW1_PTYPE_SHIFT));
if (pkt_flags & PKT_RX_RSS_HASH)
mb->hash.rss = rte_le_to_cpu_32(\
rxdp[j].wb.qword0.hi_dword.rss);
@@ -945,9 +1184,9 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
- rxm->packet_type = (uint16_t)((qword1 & I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
+ rxm->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
@@ -1104,10 +1343,9 @@ i40e_recv_scattered_pkts(void *rx_queue,
rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
- first_seg->packet_type = (uint16_t)((qword1 &
- I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
+ first_seg->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
--
1.9.3
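A side note on the table construction: C99 designated initializers
zero-fill every index that is not listed, and zero is RTE_PTYPE_UNKNOWN,
so all reserved hardware ptypes fall through safely without explicit zero
entries; sizing the table for the full uint8_t range also makes the lookup
safe without a bounds check. A standalone sketch of the same pattern
(ptype_lookup_sketch is an illustrative name; only entry 26 mirrors the
real table):

#include <stdint.h>
#include <rte_mbuf.h>

/* Map an 8-bit hardware ptype to a unified packet type; unlisted
 * indices resolve to 0, i.e. RTE_PTYPE_UNKNOWN. */
static inline uint32_t
ptype_lookup_sketch(uint8_t hw_ptype)
{
	static const uint32_t table[UINT8_MAX + 1] = {
		/* entry 26 mirrors the i40e table above:
		 * non-tunneled IPv4 + TCP */
		[26] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
			RTE_PTYPE_L4_TCP,
	};

	return table[hw_ptype];
}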
Helin Zhang
2015-02-17 06:59:25 UTC
To unify packet types among all PMDs, the packet-type bit masks in
'ol_flags' are replaced by the unified packet type.

Signed-off-by: Helin Zhang <***@intel.com>
---
lib/librte_pmd_enic/enic_main.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.

diff --git a/lib/librte_pmd_enic/enic_main.c b/lib/librte_pmd_enic/enic_main.c
index 48fdca2..9acba9a 100644
--- a/lib/librte_pmd_enic/enic_main.c
+++ b/lib/librte_pmd_enic/enic_main.c
@@ -423,7 +423,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->pkt_len = bytes_written;

if (ipv4) {
- rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |= PKT_RX_IP_CKSUM_BAD;
@@ -432,7 +432,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->ol_flags |= PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
- rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
} else {
/* Header split */
if (sop && !eop) {
@@ -445,7 +445,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
*rx_pkt_bucket = rx_pkt;
rx_pkt->pkt_len = bytes_written;
if (ipv4) {
- rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |=
@@ -457,13 +457,14 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
- rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
} else {
/* Payload */
hdr_rx_pkt = *rx_pkt_bucket;
hdr_rx_pkt->pkt_len += bytes_written;
if (ipv4) {
- hdr_rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+ hdr_rx_pkt->packet_type =
+ RTE_PTYPE_L3_IPV4;
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
hdr_rx_pkt->ol_flags |=
@@ -475,7 +476,8 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
- hdr_rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+ hdr_rx_pkt->packet_type =
+ RTE_PTYPE_L3_IPV6;

}
}
--
1.9.3
Helin Zhang
2015-02-17 06:59:27 UTC
To unify packet types among all PMDs, the packet-type bit masks in
'ol_flags' are replaced by the unified packet type.

Signed-off-by: Helin Zhang <***@intel.com>
---
app/test-pipeline/pipeline_hash.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.

diff --git a/app/test-pipeline/pipeline_hash.c b/app/test-pipeline/pipeline_hash.c
index 4598ad4..548615f 100644
--- a/app/test-pipeline/pipeline_hash.c
+++ b/app/test-pipeline/pipeline_hash.c
@@ -459,20 +459,21 @@ app_main_loop_rx_metadata(void) {
signature = RTE_MBUF_METADATA_UINT32_PTR(m, 0);
key = RTE_MBUF_METADATA_UINT8_PTR(m, 32);

- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
ip_hdr = (struct ipv4_hdr *)
&m_data[sizeof(struct ether_hdr)];
ip_dst = ip_hdr->dst_addr;

k32 = (uint32_t *) key;
k32[0] = ip_dst & 0xFFFFFF00;
- } else {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
ipv6_hdr = (struct ipv6_hdr *)
&m_data[sizeof(struct ether_hdr)];
ipv6_dst = ipv6_hdr->dst_addr;

memcpy(key, ipv6_dst, 16);
- }
+ } else
+ continue;

*signature = test_hash(key, 0, 0);
}
--
1.9.3
Helin Zhang
2015-02-17 06:59:28 UTC
To unify packet types among all PMDs, the packet-type bit masks in
'ol_flags' are replaced by the unified packet type.

Signed-off-by: Helin Zhang <***@intel.com>
Signed-off-by: Jijiang Liu <***@intel.com>
---
app/test-pmd/csumonly.c | 10 +++++-----
app/test-pmd/rxonly.c | 9 +++------
2 files changed, 8 insertions(+), 11 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.

diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 0a7af79..ad877c2 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -199,8 +199,9 @@ parse_ethernet(struct ether_hdr *eth_hdr, struct testpmd_offload_info *info)

/* Parse a vxlan header */
static void
-parse_vxlan(struct udp_hdr *udp_hdr, struct testpmd_offload_info *info,
- uint64_t mbuf_olflags)
+parse_vxlan(struct udp_hdr *udp_hdr,
+ struct testpmd_offload_info *info,
+ uint32_t pkt_type)
{
struct ether_hdr *eth_hdr;

@@ -208,8 +209,7 @@ parse_vxlan(struct udp_hdr *udp_hdr, struct testpmd_offload_info *info,
* (rfc7348) or that the rx offload flag is set (i40e only
* currently) */
if (udp_hdr->dst_port != _htons(4789) &&
- (mbuf_olflags & (PKT_RX_TUNNEL_IPV4_HDR |
- PKT_RX_TUNNEL_IPV6_HDR)) == 0)
+ RTE_ETH_IS_TUNNEL_PKT(pkt_type) == 0)
return;

info->is_tunnel = 1;
@@ -543,7 +543,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
struct udp_hdr *udp_hdr;
udp_hdr = (struct udp_hdr *)((char *)l3_hdr +
info.l3_len);
- parse_vxlan(udp_hdr, &info, m->ol_flags);
+ parse_vxlan(udp_hdr, &info, m->packet_type);
} else if (info.l4_proto == IPPROTO_GRE) {
struct simple_gre_hdr *gre_hdr;
gre_hdr = (struct simple_gre_hdr *)
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index fdfe990..8eb68c4 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -92,7 +92,7 @@ pkt_burst_receive(struct fwd_stream *fs)
uint64_t ol_flags;
uint16_t nb_rx;
uint16_t i, packet_type;
- uint64_t is_encapsulation;
+ uint16_t is_encapsulation;

#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
uint64_t start_tsc;
@@ -135,10 +135,7 @@ pkt_burst_receive(struct fwd_stream *fs)
eth_type = RTE_BE_TO_CPU_16(eth_hdr->ether_type);
ol_flags = mb->ol_flags;
packet_type = mb->packet_type;
-
- is_encapsulation = ol_flags & (PKT_RX_TUNNEL_IPV4_HDR |
- PKT_RX_TUNNEL_IPV6_HDR);
-
+ is_encapsulation = RTE_ETH_IS_TUNNEL_PKT(packet_type);
print_ether_addr(" src=", &eth_hdr->s_addr);
print_ether_addr(" - dst=", &eth_hdr->d_addr);
printf(" - type=0x%04x - length=%u - nb_segs=%d",
@@ -174,7 +171,7 @@ pkt_burst_receive(struct fwd_stream *fs)
l2_len = sizeof(struct ether_hdr);

/* Do not support ipv4 option field */
- if (ol_flags & PKT_RX_TUNNEL_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(packet_type)) {
l3_len = sizeof(struct ipv4_hdr);
ipv4_hdr = (struct ipv4_hdr *) (rte_pktmbuf_mtod(mb,
unsigned char *) + l2_len);
--
1.9.3
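RTE_ETH_IS_TUNNEL_PKT() only answers whether any tunnel type is present;
when an application needs to know which encapsulation the NIC recognized,
it can mask and switch on the tunnel field. A hedged sketch (tunnel_name
is an illustrative helper, not testpmd code):

#include <rte_mbuf.h>

/* Name the tunnel type carried in a unified packet type. */
static const char *
tunnel_name(const struct rte_mbuf *m)
{
	switch (m->packet_type & RTE_PTYPE_TUNNEL_MASK) {
	case RTE_PTYPE_TUNNEL_IP:     return "ip-in-ip";
	case RTE_PTYPE_TUNNEL_GRE:    return "gre";
	case RTE_PTYPE_TUNNEL_VXLAN:  return "vxlan";
	case RTE_PTYPE_TUNNEL_NVGRE:  return "nvgre";
	case RTE_PTYPE_TUNNEL_GENEVE: return "geneve";
	case RTE_PTYPE_TUNNEL_GRENAT: return "grenat (GRE/Teredo/VXLAN)";
	default:                      return "not a tunnel";
	}
}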
Helin Zhang
2015-02-17 06:59:29 UTC
To unify packet types among all PMDs, the packet-type bit masks in
'ol_flags' are replaced by the unified packet type.

Signed-off-by: Helin Zhang <***@intel.com>
---
examples/ip_fragmentation/main.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.

diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index eac5427..152844e 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -286,7 +286,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
len = qconf->tx_mbufs[port_out].len;

/* if this is an IPv4 packet */
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
struct ipv4_hdr *ip_hdr;
uint32_t ip_dst;
/* Read the lookup key (i.e. ip_dst) from the input packet */
@@ -320,9 +320,8 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
if (unlikely (len2 < 0))
return;
}
- }
- /* if this is an IPv6 packet */
- else if (m->ol_flags & PKT_RX_IPV6_HDR) {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+ /* if this is an IPv6 packet */
struct ipv6_hdr *ip_hdr;

ipv6 = 1;
--
1.9.3
Helin Zhang
2015-02-17 06:59:31 UTC
To unify packet types among all PMDs, the packet-type bit masks in
'ol_flags' are replaced by the unified packet type.

Signed-off-by: Helin Zhang <***@intel.com>
---
examples/l3fwd-acl/main.c | 19 ++++++-------------
1 file changed, 6 insertions(+), 13 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.

diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index f1f7601..af70ccd 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -651,9 +651,7 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
struct ipv4_hdr *ipv4_hdr;
struct rte_mbuf *pkt = pkts_in[index];

- int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
-
- if (type == PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {

ipv4_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(pkt,
unsigned char *) + sizeof(struct ether_hdr));
@@ -674,8 +672,7 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
rte_pktmbuf_free(pkt);
}

- } else if (type == PKT_RX_IPV6_HDR) {
-
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -693,17 +690,13 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
{
struct rte_mbuf *pkt = pkts_in[index];

- int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
-
- if (type == PKT_RX_IPV4_HDR) {
-
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
/* Fill acl structure */
acl->data_ipv4[acl->num_ipv4] = MBUF_IPV4_2PROTO(pkt);
acl->m_ipv4[(acl->num_ipv4)++] = pkt;


- } else if (type == PKT_RX_IPV6_HDR) {
-
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -751,9 +744,9 @@ send_one_packet(struct rte_mbuf *m, uint32_t res)
/* in the ACL list, drop it */
#ifdef L3FWDACL_DEBUG
if ((res & ACL_DENY_SIGNATURE) != 0) {
- if (m->ol_flags & PKT_RX_IPV4_HDR)
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type))
dump_acl4_rule(m, res);
- else
+ else if (RTE_ETH_IS_IPV6_HDR(m->packet_type))
dump_acl6_rule(m, res);
}
#endif
--
1.9.3
Helin Zhang
2015-02-17 06:59:32 UTC
To unify packet types among all PMDs, the packet-type bit masks in
'ol_flags' are replaced by the unified packet type.

Signed-off-by: Helin Zhang <***@intel.com>
---
examples/l3fwd-power/main.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.

diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index f6b55b9..964e5b9 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -638,7 +638,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,

eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);

- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
/* Handle IPv4 headers.*/
ipv4_hdr =
(struct ipv4_hdr *)(rte_pktmbuf_mtod(m, unsigned char*)
@@ -673,8 +673,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,
ether_addr_copy(&ports_eth_addr[dst_port], &eth_hdr->s_addr);

send_single_packet(m, dst_port);
- }
- else {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
/* Handle IPv6 headers.*/
#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
struct ipv6_hdr *ipv6_hdr;
--
1.9.3
Helin Zhang
2015-02-17 06:59:34 UTC
Now that unified packet types are used instead, the old bit masks and
the relevant macros for packet type indication can be removed.

Signed-off-by: Helin Zhang <***@intel.com>
---
lib/librte_mbuf/rte_mbuf.c | 6 ------
lib/librte_mbuf/rte_mbuf.h | 14 ++++----------
2 files changed, 4 insertions(+), 16 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
* Redefined the bit masks for packet RX offload flags.

diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 2a4bc8c..6e018c4 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -215,14 +215,8 @@ const char *rte_get_rx_ol_flag_name(uint64_t mask)
/* case PKT_RX_HBUF_OVERFLOW: return "PKT_RX_HBUF_OVERFLOW"; */
/* case PKT_RX_RECIP_ERR: return "PKT_RX_RECIP_ERR"; */
/* case PKT_RX_MAC_ERR: return "PKT_RX_MAC_ERR"; */
- case PKT_RX_IPV4_HDR: return "PKT_RX_IPV4_HDR";
- case PKT_RX_IPV4_HDR_EXT: return "PKT_RX_IPV4_HDR_EXT";
- case PKT_RX_IPV6_HDR: return "PKT_RX_IPV6_HDR";
- case PKT_RX_IPV6_HDR_EXT: return "PKT_RX_IPV6_HDR_EXT";
case PKT_RX_IEEE1588_PTP: return "PKT_RX_IEEE1588_PTP";
case PKT_RX_IEEE1588_TMST: return "PKT_RX_IEEE1588_TMST";
- case PKT_RX_TUNNEL_IPV4_HDR: return "PKT_RX_TUNNEL_IPV4_HDR";
- case PKT_RX_TUNNEL_IPV6_HDR: return "PKT_RX_TUNNEL_IPV6_HDR";
default: return NULL;
}
}
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 2cdf8a0..069a8f7 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -90,16 +90,10 @@ extern "C" {
#define PKT_RX_HBUF_OVERFLOW (0ULL << 0) /**< Header buffer overflow. */
#define PKT_RX_RECIP_ERR (0ULL << 0) /**< Hardware processing error. */
#define PKT_RX_MAC_ERR (0ULL << 0) /**< MAC error. */
-#define PKT_RX_IPV4_HDR (1ULL << 5) /**< RX packet with IPv4 header. */
-#define PKT_RX_IPV4_HDR_EXT (1ULL << 6) /**< RX packet with extended IPv4 header. */
-#define PKT_RX_IPV6_HDR (1ULL << 7) /**< RX packet with IPv6 header. */
-#define PKT_RX_IPV6_HDR_EXT (1ULL << 8) /**< RX packet with extended IPv6 header. */
-#define PKT_RX_IEEE1588_PTP (1ULL << 9) /**< RX IEEE1588 L2 Ethernet PT Packet. */
-#define PKT_RX_IEEE1588_TMST (1ULL << 10) /**< RX IEEE1588 L2/L4 timestamped packet.*/
-#define PKT_RX_TUNNEL_IPV4_HDR (1ULL << 11) /**< RX tunnel packet with IPv4 header.*/
-#define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet with IPv6 header. */
-#define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR match. */
-#define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
+#define PKT_RX_IEEE1588_PTP (1ULL << 5) /**< RX IEEE1588 L2 Ethernet PT Packet. */
+#define PKT_RX_IEEE1588_TMST (1ULL << 6) /**< RX IEEE1588 L2/L4 timestamped packet.*/
+#define PKT_RX_FDIR_ID (1ULL << 7) /**< FD id reported if FDIR match. */
+#define PKT_RX_FDIR_FLX (1ULL << 8) /**< Flexible bytes reported if FDIR match. */
/* add new RX flags here */

/* add new TX flags here */
--
1.9.3
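
For applications, the migration forced by this removal is mechanical: the
same test moves from ol_flags to packet_type. A before/after sketch
(handle_ipv4() is a hypothetical application helper):

/* Before this patch: L3 class read from ol_flags bit masks. */
if (m->ol_flags & PKT_RX_IPV4_HDR)
        handle_ipv4(m);

/* After this patch: L3 class read from the packet_type field. */
if (RTE_ETH_IS_IPV4_HDR(m->packet_type))
        handle_ipv4(m);
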
Helin Zhang
2015-02-17 06:59:26 UTC
Permalink
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.

Signed-off-by: Helin Zhang <***@intel.com>
---
lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.

diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
index 8425f32..c85ebd8 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
@@ -650,9 +650,9 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
struct ipv4_hdr *ip = (struct ipv4_hdr *)(eth + 1);

if (((ip->version_ihl & 0xf) << 2) > (int)sizeof(struct ipv4_hdr))
- rxm->ol_flags |= PKT_RX_IPV4_HDR_EXT;
+ rxm->packet_type = RTE_PTYPE_L3_IPV4_EXT;
else
- rxm->ol_flags |= PKT_RX_IPV4_HDR;
+ rxm->packet_type = RTE_PTYPE_L3_IPV4;

if (!rcd->cnc) {
if (!rcd->ipc)
--
1.9.3
Thomas Monjalon
2015-02-27 11:25:37 UTC
Permalink
Post by Helin Zhang
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.
Helin, this patch was already acked in v2 and you didn't change it.
Please keep the Acked-by line in such cases.
Note that Acked-by is still valid after minor changes like typos.

I'd like every developer to adopt this rule. Please spread the word.
Zhang, Helin
2015-02-27 12:26:31 UTC
Permalink
-----Original Message-----
Sent: Friday, February 27, 2015 7:26 PM
Subject: Re: [dpdk-dev] [PATCH v3 08/16] vmxnet3: support of unified packet
type
Post by Helin Zhang
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.
Helin, this patch was already acked in v2 and you didn't change it.
Please keep the Acked-by line in such case.
Note that Acked-by is still valid after minor changes like typos.
OK. Good to learn that! Thank you very much!
I'd like every developers adopt this rule. Please spread the word.
Yes, I will forward this rule to the whole team here. Hopefully it will be helpful for all of us!

Regards,
Helin
Helin Zhang
2015-02-17 06:59:23 UTC
Permalink
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.
Note that around a 2.5% performance drop (64B) was observed when doing
4-port (1 port per 82599 card) IO forwarding on the same SNB core.

Signed-off-by: Helin Zhang <***@intel.com>
---
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 146 +++++++++++++++++++++++++++++---------
1 file changed, 112 insertions(+), 34 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.

diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index e6766b3..a2e4234 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -866,40 +866,107 @@ end_of_tx:
* RX functions
*
**********************************************************************/
-static inline uint64_t
-rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
+#define IXGBE_PACKET_TYPE_IPV4 0X01
+#define IXGBE_PACKET_TYPE_IPV4_TCP 0X11
+#define IXGBE_PACKET_TYPE_IPV4_UDP 0X21
+#define IXGBE_PACKET_TYPE_IPV4_SCTP 0X41
+#define IXGBE_PACKET_TYPE_IPV4_EXT 0X03
+#define IXGBE_PACKET_TYPE_IPV4_EXT_SCTP 0X43
+#define IXGBE_PACKET_TYPE_IPV6 0X04
+#define IXGBE_PACKET_TYPE_IPV6_TCP 0X14
+#define IXGBE_PACKET_TYPE_IPV6_UDP 0X24
+#define IXGBE_PACKET_TYPE_IPV6_EXT 0X0C
+#define IXGBE_PACKET_TYPE_IPV6_EXT_TCP 0X1C
+#define IXGBE_PACKET_TYPE_IPV6_EXT_UDP 0X2C
+#define IXGBE_PACKET_TYPE_IPV4_IPV6 0X05
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_TCP 0X15
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_UDP 0X25
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT 0X0D
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_TCP 0X1D
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_UDP 0X2D
+#define IXGBE_PACKET_TYPE_MAX 0X80
+#define IXGBE_PACKET_TYPE_MASK 0X7F
+#define IXGBE_PACKET_TYPE_SHIFT 0X04
+static inline uint32_t
+ixgbe_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
{
- uint64_t pkt_flags;
-
- static uint64_t ip_pkt_types_map[16] = {
- 0, PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, PKT_RX_IPV4_HDR_EXT,
- PKT_RX_IPV6_HDR, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
+ static const uint32_t
+ ptype_table[IXGBE_PACKET_TYPE_MAX] __rte_cache_aligned = {
+ [IXGBE_PACKET_TYPE_IPV4] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4,
+ [IXGBE_PACKET_TYPE_IPV4_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [IXGBE_PACKET_TYPE_IPV6] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6,
+ [IXGBE_PACKET_TYPE_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT,
+ [IXGBE_PACKET_TYPE_IPV4_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
+ [IXGBE_PACKET_TYPE_IPV4_EXT_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_SCTP,
};
+ if (unlikely(pkt_info & IXGBE_RXDADV_PKTTYPE_ETQF))
+ return RTE_PTYPE_UNKNOWN;

- static uint64_t ip_rss_types_map[16] = {
+ pkt_info = (pkt_info >> IXGBE_PACKET_TYPE_SHIFT) &
+ IXGBE_PACKET_TYPE_MASK;
+
+ return ptype_table[pkt_info];
+}
+
+static inline uint64_t
+ixgbe_rxd_pkt_info_to_pkt_flags(uint16_t pkt_info)
+{
+ static uint64_t ip_rss_types_map[16] __rte_cache_aligned = {
0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
PKT_RX_RSS_HASH, 0, 0, 0,
0, 0, 0, PKT_RX_FDIR,
};
-
#ifdef RTE_LIBRTE_IEEE1588
static uint64_t ip_pkt_etqf_map[8] = {
0, 0, 0, PKT_RX_IEEE1588_PTP,
0, 0, 0, 0,
};

- pkt_flags = (hl_tp_rs & IXGBE_RXDADV_PKTTYPE_ETQF) ?
- ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07] :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
+ if (likely(pkt_info & IXGBE_RXDADV_PKTTYPE_ETQF))
+ return ip_pkt_etqf_map[(pkt_info >> 4) & 0X07] |
+ ip_rss_types_map[pkt_info & 0xF];
+ else
+ return ip_rss_types_map[pkt_info & 0xF];
#else
- pkt_flags = (hl_tp_rs & IXGBE_RXDADV_PKTTYPE_ETQF) ? 0 :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
-
+ return ip_rss_types_map[pkt_info & 0xF];
#endif
- return pkt_flags | ip_rss_types_map[hl_tp_rs & 0xF];
}

static inline uint64_t
@@ -956,7 +1023,9 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
struct rte_mbuf *mb;
uint16_t pkt_len;
uint64_t pkt_flags;
- int s[LOOK_AHEAD], nb_dd;
+ int nb_dd;
+ uint32_t s[LOOK_AHEAD];
+ uint16_t pkt_info[LOOK_AHEAD];
int i, j, nb_rx = 0;


@@ -979,6 +1048,9 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
for (j = LOOK_AHEAD-1; j >= 0; --j)
s[j] = rxdp[j].wb.upper.status_error;

+ for (j = LOOK_AHEAD-1; j >= 0; --j)
+ pkt_info[j] = rxdp[j].wb.lower.lo_dword.hs_rss.pkt_info;
+
/* Compute how many status bits were set */
nb_dd = 0;
for (j = 0; j < LOOK_AHEAD; ++j)
@@ -996,12 +1068,13 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);

/* convert descriptor fields to rte mbuf flags */
- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(
- rxdp[j].wb.lower.lo_dword.data);
- /* reuse status field from scan list */
- pkt_flags |= rx_desc_status_to_pkt_flags(s[j]);
+ pkt_flags = rx_desc_status_to_pkt_flags(s[j]);
pkt_flags |= rx_desc_error_to_pkt_flags(s[j]);
+ pkt_flags |=
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info[j]);
mb->ol_flags = pkt_flags;
+ mb->packet_type =
+ ixgbe_rxd_pkt_info_to_pkt_type(pkt_info[j]);

if (likely(pkt_flags & PKT_RX_RSS_HASH))
mb->hash.rss = rxdp[j].wb.lower.hi_dword.rss;
@@ -1198,7 +1271,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
union ixgbe_adv_rx_desc rxd;
uint64_t dma_addr;
uint32_t staterr;
- uint32_t hlen_type_rss;
+ uint32_t pkt_info;
uint16_t pkt_len;
uint16_t rx_id;
uint16_t nb_rx;
@@ -1316,14 +1389,17 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->data_len = pkt_len;
rxm->port = rxq->port_id;

- hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
+ pkt_info = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.hs_rss.
+ pkt_info);
/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);

- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
- pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags = rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags |
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
rxm->ol_flags = pkt_flags;
+ rxm->packet_type = ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);

if (likely(pkt_flags & PKT_RX_RSS_HASH))
rxm->hash.rss = rxd.wb.lower.hi_dword.rss;
@@ -1382,7 +1458,7 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
union ixgbe_adv_rx_desc rxd;
uint64_t dma; /* Physical address of mbuf data buffer */
uint32_t staterr;
- uint32_t hlen_type_rss;
+ uint16_t pkt_info;
uint16_t rx_id;
uint16_t nb_rx;
uint16_t nb_hold;
@@ -1561,13 +1637,15 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
* set in the pkt_flags field.
*/
first_seg->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
- hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
- pkt_flags = (pkt_flags |
- rx_desc_status_to_pkt_flags(staterr));
- pkt_flags = (pkt_flags |
- rx_desc_error_to_pkt_flags(staterr));
+ pkt_info = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.hs_rss.
+ pkt_info);
+ pkt_flags = rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags |
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
first_seg->ol_flags = pkt_flags;
+ first_seg->packet_type =
+ ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);

if (likely(pkt_flags & PKT_RX_RSS_HASH))
first_seg->hash.rss = rxd.wb.lower.hi_dword.rss;
--
1.9.3
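
The core of the change above is replacing the per-flag mapping with a single
table lookup indexed by the descriptor's packet-type bits. A standalone
sketch of that pattern, with made-up type values and an assumed ETQF bit
position (the real values come from rte_mbuf.h and the ixgbe descriptor
layout):

#include <assert.h>
#include <stdint.h>

#define EX_L3_IPV4  0x10    /* illustrative, not the real RTE_PTYPE value */
#define EX_L4_TCP   0x100   /* illustrative */
#define EX_ETQF_BIT 0x8000  /* assumed position of the ETQF match bit */

/* 7 descriptor bits index the table; unknown combinations stay 0. */
static const uint32_t ex_ptype_table[0x80] = {
        [0x01] = EX_L3_IPV4,              /* plain IPv4 */
        [0x11] = EX_L3_IPV4 | EX_L4_TCP,  /* IPv4 + TCP */
};

static uint32_t
ex_pkt_info_to_pkt_type(uint16_t pkt_info)
{
        if (pkt_info & EX_ETQF_BIT) /* filter-matched frame: no ptype info */
                return 0;           /* stands in for RTE_PTYPE_UNKNOWN */
        return ex_ptype_table[(pkt_info >> 4) & 0x7F];
}

int main(void)
{
        assert(ex_pkt_info_to_pkt_type(0x0110) == (EX_L3_IPV4 | EX_L4_TCP));
        assert(ex_pkt_info_to_pkt_type(0x0110 | EX_ETQF_BIT) == 0);
        return 0;
}
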
Helin Zhang
2015-02-17 06:59:33 UTC
Permalink
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.

Signed-off-by: Helin Zhang <***@intel.com>
---
examples/l3fwd/main.c | 71 +++++++++++++++++++++++++++++----------------------
1 file changed, 40 insertions(+), 31 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.

v3 changes:
* Minor bug fixes and enhancements.

diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 6f7d7d4..49000f3 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -958,7 +958,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon

eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);

- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
/* Handle IPv4 headers.*/
ipv4_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(m, unsigned char *) +
sizeof(struct ether_hdr));
@@ -993,7 +993,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon

send_single_packet(m, dst_port);

- } else {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
/* Handle IPv6 headers.*/
struct ipv6_hdr *ipv6_hdr;

@@ -1014,8 +1014,9 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
ether_addr_copy(&ports_eth_addr[dst_port], &eth_hdr->s_addr);

send_single_packet(m, dst_port);
- }
-
+ } else
+ /* Free the mbuf that contains non-IPV4/IPV6 packet */
+ rte_pktmbuf_free(m);
}

#ifdef DO_RFC_1812_CHECKS
@@ -1039,11 +1040,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
* to BAD_PORT value.
*/
static inline __attribute__((always_inline)) void
-rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t flags)
+rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t ptype)
{
uint8_t ihl;

- if ((flags & PKT_RX_IPV4_HDR) != 0) {
+ if (RTE_ETH_IS_IPV4_HDR(ptype)) {

ihl = ipv4_hdr->version_ihl - IPV4_MIN_VER_IHL;

@@ -1074,11 +1075,11 @@ get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
struct ipv6_hdr *ipv6_hdr;
struct ether_hdr *eth_hdr;

- if (pkt->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
if (rte_lpm_lookup(qconf->ipv4_lookup_struct, dst_ipv4,
&next_hop) != 0)
next_hop = portid;
- } else if (pkt->ol_flags & PKT_RX_IPV6_HDR) {
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
eth_hdr = rte_pktmbuf_mtod(pkt, struct ether_hdr *);
ipv6_hdr = (struct ipv6_hdr *)(eth_hdr + 1);
if (rte_lpm6_lookup(qconf->ipv6_lookup_struct,
@@ -1112,17 +1113,19 @@ process_packet(struct lcore_conf *qconf, struct rte_mbuf *pkt,
ve = val_eth[dp];

dst_port[0] = dp;
- rfc1812_process(ipv4_hdr, dst_port, pkt->ol_flags);
+ rfc1812_process(ipv4_hdr, dst_port, pkt->packet_type);

te = _mm_blend_epi16(te, ve, MASK_ETH);
_mm_store_si128((__m128i *)eth_hdr, te);
}

/*
- * Read ol_flags and destination IPV4 addresses from 4 mbufs.
+ * Read packet_type and destination IPV4 addresses from 4 mbufs.
*/
static inline void
-processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
+processx4_step1(struct rte_mbuf *pkt[FWDSTEP],
+ __m128i *dip,
+ uint32_t *ipv4_flag)
{
struct ipv4_hdr *ipv4_hdr;
struct ether_hdr *eth_hdr;
@@ -1131,22 +1134,22 @@ processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
eth_hdr = rte_pktmbuf_mtod(pkt[0], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x0 = ipv4_hdr->dst_addr;
- flag[0] = pkt[0]->ol_flags & PKT_RX_IPV4_HDR;
+ ipv4_flag[0] = pkt[0]->packet_type & RTE_PTYPE_L3_IPV4;

eth_hdr = rte_pktmbuf_mtod(pkt[1], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x1 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[1]->ol_flags;
+ ipv4_flag[0] &= pkt[1]->packet_type;

eth_hdr = rte_pktmbuf_mtod(pkt[2], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x2 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[2]->ol_flags;
+ ipv4_flag[0] &= pkt[2]->packet_type;

eth_hdr = rte_pktmbuf_mtod(pkt[3], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x3 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[3]->ol_flags;
+ ipv4_flag[0] &= pkt[3]->packet_type;

dip[0] = _mm_set_epi32(x3, x2, x1, x0);
}
@@ -1156,8 +1159,12 @@ processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
* If lookup fails, use incoming port (portid) as destination port.
*/
static inline void
-processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
- uint8_t portid, struct rte_mbuf *pkt[FWDSTEP], uint16_t dprt[FWDSTEP])
+processx4_step2(const struct lcore_conf *qconf,
+ __m128i dip,
+ uint32_t ipv4_flag,
+ uint8_t portid,
+ struct rte_mbuf *pkt[FWDSTEP],
+ uint16_t dprt[FWDSTEP])
{
rte_xmm_t dst;
const __m128i bswap_mask = _mm_set_epi8(12, 13, 14, 15, 8, 9, 10, 11,
@@ -1167,7 +1174,7 @@ processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
dip = _mm_shuffle_epi8(dip, bswap_mask);

/* if all 4 packets are IPV4. */
- if (likely(flag != 0)) {
+ if (likely(ipv4_flag)) {
rte_lpm_lookupx4(qconf->ipv4_lookup_struct, dip, dprt, portid);
} else {
dst.x = dip;
@@ -1218,13 +1225,13 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
_mm_store_si128(p[3], te[3]);

rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[0] + 1),
- &dst_port[0], pkt[0]->ol_flags);
+ &dst_port[0], pkt[0]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[1] + 1),
- &dst_port[1], pkt[1]->ol_flags);
+ &dst_port[1], pkt[1]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[2] + 1),
- &dst_port[2], pkt[2]->ol_flags);
+ &dst_port[2], pkt[2]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[3] + 1),
- &dst_port[3], pkt[3]->ol_flags);
+ &dst_port[3], pkt[3]->packet_type);
}

/*
@@ -1411,7 +1418,7 @@ main_loop(__attribute__((unused)) void *dummy)
uint16_t *lp;
uint16_t dst_port[MAX_PKT_BURST];
__m128i dip[MAX_PKT_BURST / FWDSTEP];
- uint32_t flag[MAX_PKT_BURST / FWDSTEP];
+ uint32_t ipv4_flag[MAX_PKT_BURST / FWDSTEP];
uint16_t pnum[MAX_PKT_BURST + 1];
#endif

@@ -1481,14 +1488,16 @@ main_loop(__attribute__((unused)) void *dummy)
*/
int32_t n = RTE_ALIGN_FLOOR(nb_rx, 4);
for (j = 0; j < n ; j+=4) {
- uint32_t ol_flag = pkts_burst[j]->ol_flags
- & pkts_burst[j+1]->ol_flags
- & pkts_burst[j+2]->ol_flags
- & pkts_burst[j+3]->ol_flags;
- if (ol_flag & PKT_RX_IPV4_HDR ) {
+ uint32_t pkt_type =
+ pkts_burst[j]->packet_type &
+ pkts_burst[j+1]->packet_type &
+ pkts_burst[j+2]->packet_type &
+ pkts_burst[j+3]->packet_type;
+ if (pkt_type & RTE_PTYPE_L3_IPV4) {
simple_ipv4_fwd_4pkts(&pkts_burst[j],
portid, qconf);
- } else if (ol_flag & PKT_RX_IPV6_HDR) {
+ } else if (pkt_type &
+ RTE_PTYPE_L3_IPV6) {
simple_ipv6_fwd_4pkts(&pkts_burst[j],
portid, qconf);
} else {
@@ -1513,13 +1522,13 @@ main_loop(__attribute__((unused)) void *dummy)
for (j = 0; j != k; j += FWDSTEP) {
processx4_step1(&pkts_burst[j],
&dip[j / FWDSTEP],
- &flag[j / FWDSTEP]);
+ &ipv4_flag[j / FWDSTEP]);
}

k = RTE_ALIGN_FLOOR(nb_rx, FWDSTEP);
for (j = 0; j != k; j += FWDSTEP) {
processx4_step2(qconf, dip[j / FWDSTEP],
- flag[j / FWDSTEP], portid,
+ ipv4_flag[j / FWDSTEP], portid,
&pkts_burst[j], &dst_port[j]);
}
--
1.9.3
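
A detail worth noting in the diff above: processx4_step1() AND-accumulates
packet_type across four mbufs, so a single bit test later decides whether the
whole group can take the IPv4 fast path. A minimal sketch of that trick
(0x10 is an assumed stand-in for RTE_PTYPE_L3_IPV4):

#include <stdint.h>
#include <stdio.h>

#define EX_PTYPE_L3_IPV4 0x10 /* assumed single-bit IPv4 type code */

int main(void)
{
        /* Four packet_type values; all happen to carry the IPv4 bit,
         * so it survives the AND; one non-IPv4 packet would clear it. */
        uint32_t pt[4] = { 0x10, 0x110, 0x210, 0x10 };
        uint32_t acc = pt[0] & pt[1] & pt[2] & pt[3];

        if (acc & EX_PTYPE_L3_IPV4)
                puts("all four packets are IPv4 -> x4 fast path");
        else
                puts("mixed burst -> per-packet slow path");
        return 0;
}
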
Liang, Cunming
2015-02-17 07:03:31 UTC
Permalink
-----Original Message-----
From: Zhang, Helin
Sent: Tuesday, February 17, 2015 2:59 PM
Cc: Cao, Waterman; Liang, Cunming; Liu, Jijiang; Ananyev, Konstantin; Richardson,
Bruce; Zhang, Helin
Subject: [PATCH v3 00/16] unified packet type
Currently only 6 bits stored in ol_flags are used to indicate the packet
types. This is not enough, as some NIC hardware can recognize quite a lot
of packet types, e.g. i40e hardware can recognize more than 150 packet
types. Hiding those packet types hides hardware offload capabilities which
could be quite useful for improving performance and for end users. So a
unified packet type is needed to support all possible PMDs. The 16-bit
packet_type field in the mbuf structure can be enlarged to 32 bits and
used for this purpose. In addition, all packet types stored in the ol_flags
field should be removed entirely, saving 6 bits of ol_flags as the benefit.
Initially, the 32 bits of packet_type can be divided into several sub-fields
to indicate different packet type information of a packet. The initial design
is to divide those bits into fields for L2 types, L3 types, L4 types, tunnel
types, inner L2 types, inner L3 types and inner L4 types. All PMDs should
translate the offloaded packet types into these 7 fields of information for
user applications.
* Enlarged the packet_type field from 16 bits to 32 bits.
* Redefined the packet type sub-fields.
* Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
* Used redefined packet types and enlarged packet_type field for all PMDs
and corresponding applications.
* Removed changes in bond and its relevant application, as there is no need
at all according to the recent bond changes.
* Put the mbuf layout changes into a single patch.
* Put vector ixgbe changes right after mbuf changes.
* Disabled vector ixgbe PMD by default, as mbuf layout changed, and then
re-enabled it after vector ixgbe PMD updated.
* Put the definitions of unified packet type into a single patch.
* Minor bug fixes and enhancements in l3fwd example.
mbuf: redefinition of packet_type in rte_mbuf
ixgbe: support of unified packet type for vector
mbuf: add definitions of unified packet types
e1000: support of unified packet type
ixgbe: support of unified packet type
i40e: support of unified packet type
enic: support of unified packet type
vmxnet3: support of unified packet type
app/test-pipeline: support of unified packet type
app/testpmd: support of unified packet type
examples/ip_fragmentation: support of unified packet type
examples/ip_reassembly: support of unified packet type
examples/l3fwd-acl: support of unified packet type
examples/l3fwd-power: support of unified packet type
examples/l3fwd: support of unified packet type
mbuf: remove old packet type bit masks
app/test-pipeline/pipeline_hash.c | 7 +-
app/test-pmd/csumonly.c | 10 +-
app/test-pmd/rxonly.c | 9 +-
examples/ip_fragmentation/main.c | 7 +-
examples/ip_reassembly/main.c | 7 +-
examples/l3fwd-acl/main.c | 19 +-
examples/l3fwd-power/main.c | 5 +-
examples/l3fwd/main.c | 71 +-
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 4 +-
lib/librte_mbuf/rte_mbuf.c | 6 -
lib/librte_mbuf/rte_mbuf.h | 127 +++-
lib/librte_pmd_e1000/igb_rxtx.c | 98 ++-
lib/librte_pmd_enic/enic_main.c | 14 +-
lib/librte_pmd_i40e/i40e_rxtx.c | 786 ++++++++++++++-------
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 146 +++-
lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c | 49 +-
lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 4 +-
17 files changed, 921 insertions(+), 448 deletions(-)
--
1.9.3
Acked-by: Cunming Liang <***@intel.com>
Helin Zhang
2015-02-17 06:59:30 UTC
Permalink
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.

Signed-off-by: Helin Zhang <***@intel.com>
---
examples/ip_reassembly/main.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.

diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 8492153..5ef2135 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -357,7 +357,7 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
dst_port = portid;

/* if packet is IPv4 */
- if (m->ol_flags & (PKT_RX_IPV4_HDR)) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
struct ipv4_hdr *ip_hdr;
uint32_t ip_dst;

@@ -397,9 +397,8 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
}

eth_hdr->ether_type = rte_be_to_cpu_16(ETHER_TYPE_IPv4);
- }
- /* if packet is IPv6 */
- else if (m->ol_flags & (PKT_RX_IPV6_HDR | PKT_RX_IPV6_HDR_EXT)) {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+ /* if packet is IPv6 */
struct ipv6_extension_fragment *frag_hdr;
struct ipv6_hdr *ip_hdr;
--
1.9.3
Ananyev, Konstantin
2015-02-17 09:46:49 UTC
Permalink
-----Original Message-----
From: Zhang, Helin
Sent: Tuesday, February 17, 2015 6:59 AM
Cc: Cao, Waterman; Liang, Cunming; Liu, Jijiang; Ananyev, Konstantin; Richardson, Bruce; Zhang, Helin
Subject: [PATCH v3 00/16] unified packet type
Currently only 6 bits stored in ol_flags are used to indicate the packet
types. This is not enough, as some NIC hardware can recognize quite a lot
of packet types, e.g. i40e hardware can recognize more than 150 packet
types. Hiding those packet types hides hardware offload capabilities which
could be quite useful for improving performance and for end users. So a
unified packet type is needed to support all possible PMDs. The 16-bit
packet_type field in the mbuf structure can be enlarged to 32 bits and
used for this purpose. In addition, all packet types stored in the ol_flags
field should be removed entirely, saving 6 bits of ol_flags as the benefit.
Initially, the 32 bits of packet_type can be divided into several sub-fields
to indicate different packet type information of a packet. The initial design
is to divide those bits into fields for L2 types, L3 types, L4 types, tunnel
types, inner L2 types, inner L3 types and inner L4 types. All PMDs should
translate the offloaded packet types into these 7 fields of information for
user applications.
* Enlarged the packet_type field from 16 bits to 32 bits.
* Redefined the packet type sub-fields.
* Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
* Used redefined packet types and enlarged packet_type field for all PMDs
and corresponding applications.
* Removed changes in bond and its relevant application, as there is no need
at all according to the recent bond changes.
* Put the mbuf layout changes into a single patch.
* Put vector ixgbe changes right after mbuf changes.
* Disabled vector ixgbe PMD by default, as mbuf layout changed, and then
re-enabled it after vector ixgbe PMD updated.
* Put the definitions of unified packet type into a single patch.
* Minor bug fixes and enhancements in l3fwd example.
mbuf: redefinition of packet_type in rte_mbuf
ixgbe: support of unified packet type for vector
mbuf: add definitions of unified packet types
e1000: support of unified packet type
ixgbe: support of unified packet type
i40e: support of unified packet type
enic: support of unified packet type
vmxnet3: support of unified packet type
app/test-pipeline: support of unified packet type
app/testpmd: support of unified packet type
examples/ip_fragmentation: support of unified packet type
examples/ip_reassembly: support of unified packet type
examples/l3fwd-acl: support of unified packet type
examples/l3fwd-power: support of unified packet type
examples/l3fwd: support of unified packet type
mbuf: remove old packet type bit masks
app/test-pipeline/pipeline_hash.c | 7 +-
app/test-pmd/csumonly.c | 10 +-
app/test-pmd/rxonly.c | 9 +-
examples/ip_fragmentation/main.c | 7 +-
examples/ip_reassembly/main.c | 7 +-
examples/l3fwd-acl/main.c | 19 +-
examples/l3fwd-power/main.c | 5 +-
examples/l3fwd/main.c | 71 +-
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 4 +-
lib/librte_mbuf/rte_mbuf.c | 6 -
lib/librte_mbuf/rte_mbuf.h | 127 +++-
lib/librte_pmd_e1000/igb_rxtx.c | 98 ++-
lib/librte_pmd_enic/enic_main.c | 14 +-
lib/librte_pmd_i40e/i40e_rxtx.c | 786 ++++++++++++++-------
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 146 +++-
lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c | 49 +-
lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 4 +-
17 files changed, 921 insertions(+), 448 deletions(-)
--
1.9.3
Helin Zhang
2015-02-27 13:11:18 UTC
Permalink
Currently only 6 bits stored in ol_flags are used to indicate the packet
types. This is not enough, as some NIC hardware can recognize quite a lot
of packet types, e.g. i40e hardware can recognize more than 150 packet
types. Hiding those packet types hides hardware offload capabilities which
could be quite useful for improving performance and for end users. So a
unified packet type is needed to support all possible PMDs. The 16-bit
packet_type field in the mbuf structure can be enlarged to 32 bits and
used for this purpose. In addition, all packet types stored in the ol_flags
field should be removed entirely, saving 6 bits of ol_flags as the benefit.

Initially, the 32 bits of packet_type can be divided into several sub-fields
to indicate different packet type information of a packet. The initial design
is to divide those bits into fields for L2 types, L3 types, L4 types, tunnel
types, inner L2 types, inner L3 types and inner L4 types. All PMDs should
translate the offloaded packet types into these 7 fields of information for
user applications.

v2 changes:
* Enlarged the packet_type field from 16 bits to 32 bits.
* Redefined the packet type sub-fields.
* Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
* Used redefined packet types and enlarged packet_type field for all PMDs
and corresponding applications.
* Removed changes in bond and its relevant application, as there is no need
at all according to the recent bond changes.

v3 changes:
* Put the mbuf layout changes into a single patch.
* Put vector ixgbe changes right after mbuf changes.
* Disabled vector ixgbe PMD by default, as mbuf layout changed, and then
re-enabled it after vector ixgbe PMD updated.
* Put the definitions of unified packet type into a single patch.
* Minor bug fixes and enhancements in l3fwd example.

v4 changes:
* Added detailed description of each packet types.
* Supported unified packet type of fm10k.
* Added printing of the packet type of each received packet in testpmd's
 rxonly mode.
* Removed several useless code lines from app/test/packet_burst_generator.c
 that blocked packet type unification.

Helin Zhang (18):
mbuf: redefinition of packet_type in rte_mbuf
ixgbe: support of unified packet type for vector
mbuf: add definitions of unified packet types
e1000: support of unified packet type
ixgbe: support of unified packet type
i40e: support of unified packet type
enic: support of unified packet type
vmxnet3: support of unified packet type
fm10k: support of unified packet type
app/test-pipeline: support of unified packet type
app/testpmd: support of unified packet type
app/test: Remove useless code
examples/ip_fragmentation: support of unified packet type
examples/ip_reassembly: support of unified packet type
examples/l3fwd-acl: support of unified packet type
examples/l3fwd-power: support of unified packet type
examples/l3fwd: support of unified packet type
mbuf: remove old packet type bit masks

app/test-pipeline/pipeline_hash.c | 7 +-
app/test-pmd/csumonly.c | 10 +-
app/test-pmd/rxonly.c | 178 ++++-
app/test/packet_burst_generator.c | 10 -
examples/ip_fragmentation/main.c | 7 +-
examples/ip_reassembly/main.c | 7 +-
examples/l3fwd-acl/main.c | 19 +-
examples/l3fwd-power/main.c | 5 +-
examples/l3fwd/main.c | 71 +-
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 4 +-
lib/librte_mbuf/rte_mbuf.c | 6 -
lib/librte_mbuf/rte_mbuf.h | 290 +++++++-
lib/librte_pmd_e1000/igb_rxtx.c | 98 ++-
lib/librte_pmd_enic/enic_main.c | 14 +-
lib/librte_pmd_fm10k/fm10k_rxtx.c | 30 +-
lib/librte_pmd_i40e/i40e_rxtx.c | 786 ++++++++++++++-------
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 146 +++-
lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c | 49 +-
lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 4 +-
19 files changed, 1274 insertions(+), 467 deletions(-)
--
1.9.3
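
Since the seven sub-fields are 4 bits each (28 of the 32 bits used), an
application can decode them with plain shifts and masks, in the field order
listed above. A small sketch, assuming that layout:

#include <stdint.h>
#include <stdio.h>

/* Sketch: decode the seven assumed 4-bit sub-fields of a 32-bit
 * packet_type, in the order described in the cover letter above. */
static void
ex_dump_ptype(uint32_t pt)
{
        printf("l2=%u l3=%u l4=%u tun=%u in_l2=%u in_l3=%u in_l4=%u\n",
               pt & 0xF, (pt >> 4) & 0xF, (pt >> 8) & 0xF,
               (pt >> 12) & 0xF, (pt >> 16) & 0xF,
               (pt >> 20) & 0xF, (pt >> 24) & 0xF);
}

int main(void)
{
        ex_dump_ptype(0x00000210); /* assumed codes: l3=1, l4=2 */
        return 0;
}
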
Helin Zhang
2015-02-27 13:11:22 UTC
Permalink
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.

Signed-off-by: Helin Zhang <***@intel.com>
---
lib/librte_pmd_e1000/igb_rxtx.c | 98 ++++++++++++++++++++++++++++++++++-------
1 file changed, 83 insertions(+), 15 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.

diff --git a/lib/librte_pmd_e1000/igb_rxtx.c b/lib/librte_pmd_e1000/igb_rxtx.c
index cdf2cac..4fa3ede 100644
--- a/lib/librte_pmd_e1000/igb_rxtx.c
+++ b/lib/librte_pmd_e1000/igb_rxtx.c
@@ -591,17 +591,85 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* RX functions
*
**********************************************************************/
+#define IGB_PACKET_TYPE_IPV4 0X01
+#define IGB_PACKET_TYPE_IPV4_TCP 0X11
+#define IGB_PACKET_TYPE_IPV4_UDP 0X21
+#define IGB_PACKET_TYPE_IPV4_SCTP 0X41
+#define IGB_PACKET_TYPE_IPV4_EXT 0X03
+#define IGB_PACKET_TYPE_IPV4_EXT_SCTP 0X43
+#define IGB_PACKET_TYPE_IPV6 0X04
+#define IGB_PACKET_TYPE_IPV6_TCP 0X14
+#define IGB_PACKET_TYPE_IPV6_UDP 0X24
+#define IGB_PACKET_TYPE_IPV6_EXT 0X0C
+#define IGB_PACKET_TYPE_IPV6_EXT_TCP 0X1C
+#define IGB_PACKET_TYPE_IPV6_EXT_UDP 0X2C
+#define IGB_PACKET_TYPE_IPV4_IPV6 0X05
+#define IGB_PACKET_TYPE_IPV4_IPV6_TCP 0X15
+#define IGB_PACKET_TYPE_IPV4_IPV6_UDP 0X25
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT 0X0D
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT_TCP 0X1D
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT_UDP 0X2D
+#define IGB_PACKET_TYPE_MAX 0X80
+#define IGB_PACKET_TYPE_MASK 0X7F
+#define IGB_PACKET_TYPE_SHIFT 0X04
+static inline uint32_t
+igb_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
+{
+ static const uint32_t
+ ptype_table[IGB_PACKET_TYPE_MAX] __rte_cache_aligned = {
+ [IGB_PACKET_TYPE_IPV4] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4,
+ [IGB_PACKET_TYPE_IPV4_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [IGB_PACKET_TYPE_IPV6] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6,
+ [IGB_PACKET_TYPE_IPV4_IPV6] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6,
+ [IGB_PACKET_TYPE_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT,
+ [IGB_PACKET_TYPE_IPV4_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
+ [IGB_PACKET_TYPE_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
+ [IGB_PACKET_TYPE_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
+ [IGB_PACKET_TYPE_IPV4_EXT_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_SCTP,
+ };
+ if (unlikely(pkt_info & E1000_RXDADV_PKTTYPE_ETQF))
+ return RTE_PTYPE_UNKNOWN;
+
+ pkt_info = (pkt_info >> IGB_PACKET_TYPE_SHIFT) & IGB_PACKET_TYPE_MASK;
+
+ return ptype_table[pkt_info];
+}
+
static inline uint64_t
rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
{
- uint64_t pkt_flags;
-
- static uint64_t ip_pkt_types_map[16] = {
- 0, PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, PKT_RX_IPV4_HDR_EXT,
- PKT_RX_IPV6_HDR, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- };
+ uint64_t pkt_flags = ((hl_tp_rs & 0x0F) == 0) ? 0 : PKT_RX_RSS_HASH;

#if defined(RTE_LIBRTE_IEEE1588)
static uint32_t ip_pkt_etqf_map[8] = {
@@ -609,14 +677,10 @@ rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
0, 0, 0, 0,
};

- pkt_flags = (hl_tp_rs & E1000_RXDADV_PKTTYPE_ETQF) ?
- ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07] :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
-#else
- pkt_flags = (hl_tp_rs & E1000_RXDADV_PKTTYPE_ETQF) ? 0 :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
+ pkt_flags |= ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07];
#endif
- return pkt_flags | (((hl_tp_rs & 0x0F) == 0) ? 0 : PKT_RX_RSS_HASH);
+
+ return pkt_flags;
}

static inline uint64_t
@@ -791,6 +855,8 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
rxm->ol_flags = pkt_flags;
+ rxm->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.lower.
+ lo_dword.hs_rss.pkt_info);

/*
* Store the mbuf address into the next entry of the array
@@ -1025,6 +1091,8 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
first_seg->ol_flags = pkt_flags;
+ first_seg->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.
+ lower.lo_dword.hs_rss.pkt_info);

/* Prefetch data of first segment, if configured to do so. */
rte_packet_prefetch((char *)first_seg->buf_addr +
--
1.9.3
Helin Zhang
2015-02-27 13:11:23 UTC
Permalink
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.
Note that around a 2.5% performance drop (64B) was observed when doing
4-port (1 port per 82599 card) IO forwarding on the same SNB core.

Signed-off-by: Helin Zhang <***@intel.com>
---
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 146 +++++++++++++++++++++++++++++---------
1 file changed, 112 insertions(+), 34 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.

diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index 3059375..a8d99be 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -855,40 +855,107 @@ end_of_tx:
* RX functions
*
**********************************************************************/
-static inline uint64_t
-rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
+#define IXGBE_PACKET_TYPE_IPV4 0X01
+#define IXGBE_PACKET_TYPE_IPV4_TCP 0X11
+#define IXGBE_PACKET_TYPE_IPV4_UDP 0X21
+#define IXGBE_PACKET_TYPE_IPV4_SCTP 0X41
+#define IXGBE_PACKET_TYPE_IPV4_EXT 0X03
+#define IXGBE_PACKET_TYPE_IPV4_EXT_SCTP 0X43
+#define IXGBE_PACKET_TYPE_IPV6 0X04
+#define IXGBE_PACKET_TYPE_IPV6_TCP 0X14
+#define IXGBE_PACKET_TYPE_IPV6_UDP 0X24
+#define IXGBE_PACKET_TYPE_IPV6_EXT 0X0C
+#define IXGBE_PACKET_TYPE_IPV6_EXT_TCP 0X1C
+#define IXGBE_PACKET_TYPE_IPV6_EXT_UDP 0X2C
+#define IXGBE_PACKET_TYPE_IPV4_IPV6 0X05
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_TCP 0X15
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_UDP 0X25
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT 0X0D
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_TCP 0X1D
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_UDP 0X2D
+#define IXGBE_PACKET_TYPE_MAX 0X80
+#define IXGBE_PACKET_TYPE_MASK 0X7F
+#define IXGBE_PACKET_TYPE_SHIFT 0X04
+static inline uint32_t
+ixgbe_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
{
- uint64_t pkt_flags;
-
- static uint64_t ip_pkt_types_map[16] = {
- 0, PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, PKT_RX_IPV4_HDR_EXT,
- PKT_RX_IPV6_HDR, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
+ static const uint32_t
+ ptype_table[IXGBE_PACKET_TYPE_MAX] __rte_cache_aligned = {
+ [IXGBE_PACKET_TYPE_IPV4] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4,
+ [IXGBE_PACKET_TYPE_IPV4_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [IXGBE_PACKET_TYPE_IPV6] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6,
+ [IXGBE_PACKET_TYPE_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT,
+ [IXGBE_PACKET_TYPE_IPV4_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
+ [IXGBE_PACKET_TYPE_IPV4_EXT_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_SCTP,
};
+ if (unlikely(pkt_info & IXGBE_RXDADV_PKTTYPE_ETQF))
+ return RTE_PTYPE_UNKNOWN;

- static uint64_t ip_rss_types_map[16] = {
+ pkt_info = (pkt_info >> IXGBE_PACKET_TYPE_SHIFT) &
+ IXGBE_PACKET_TYPE_MASK;
+
+ return ptype_table[pkt_info];
+}
+
+static inline uint64_t
+ixgbe_rxd_pkt_info_to_pkt_flags(uint16_t pkt_info)
+{
+ static uint64_t ip_rss_types_map[16] __rte_cache_aligned = {
0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
PKT_RX_RSS_HASH, 0, 0, 0,
0, 0, 0, PKT_RX_FDIR,
};
-
#ifdef RTE_LIBRTE_IEEE1588
static uint64_t ip_pkt_etqf_map[8] = {
0, 0, 0, PKT_RX_IEEE1588_PTP,
0, 0, 0, 0,
};

- pkt_flags = (hl_tp_rs & IXGBE_RXDADV_PKTTYPE_ETQF) ?
- ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07] :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
+ if (likely(pkt_info & IXGBE_RXDADV_PKTTYPE_ETQF))
+ return ip_pkt_etqf_map[(pkt_info >> 4) & 0X07] |
+ ip_rss_types_map[pkt_info & 0xF];
+ else
+ return ip_rss_types_map[pkt_info & 0xF];
#else
- pkt_flags = (hl_tp_rs & IXGBE_RXDADV_PKTTYPE_ETQF) ? 0 :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
-
+ return ip_rss_types_map[pkt_info & 0xF];
#endif
- return pkt_flags | ip_rss_types_map[hl_tp_rs & 0xF];
}

static inline uint64_t
@@ -945,7 +1012,9 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
struct rte_mbuf *mb;
uint16_t pkt_len;
uint64_t pkt_flags;
- int s[LOOK_AHEAD], nb_dd;
+ int nb_dd;
+ uint32_t s[LOOK_AHEAD];
+ uint16_t pkt_info[LOOK_AHEAD];
int i, j, nb_rx = 0;


@@ -968,6 +1037,9 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
for (j = LOOK_AHEAD-1; j >= 0; --j)
s[j] = rxdp[j].wb.upper.status_error;

+ for (j = LOOK_AHEAD-1; j >= 0; --j)
+ pkt_info[j] = rxdp[j].wb.lower.lo_dword.hs_rss.pkt_info;
+
/* Compute how many status bits were set */
nb_dd = 0;
for (j = 0; j < LOOK_AHEAD; ++j)
@@ -985,12 +1057,13 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);

/* convert descriptor fields to rte mbuf flags */
- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(
- rxdp[j].wb.lower.lo_dword.data);
- /* reuse status field from scan list */
- pkt_flags |= rx_desc_status_to_pkt_flags(s[j]);
+ pkt_flags = rx_desc_status_to_pkt_flags(s[j]);
pkt_flags |= rx_desc_error_to_pkt_flags(s[j]);
+ pkt_flags |=
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info[j]);
mb->ol_flags = pkt_flags;
+ mb->packet_type =
+ ixgbe_rxd_pkt_info_to_pkt_type(pkt_info[j]);

if (likely(pkt_flags & PKT_RX_RSS_HASH))
mb->hash.rss = rxdp[j].wb.lower.hi_dword.rss;
@@ -1187,7 +1260,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
union ixgbe_adv_rx_desc rxd;
uint64_t dma_addr;
uint32_t staterr;
- uint32_t hlen_type_rss;
+ uint32_t pkt_info;
uint16_t pkt_len;
uint16_t rx_id;
uint16_t nb_rx;
@@ -1305,14 +1378,17 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->data_len = pkt_len;
rxm->port = rxq->port_id;

- hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
+ pkt_info = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.hs_rss.
+ pkt_info);
/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);

- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
- pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags = rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags |
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
rxm->ol_flags = pkt_flags;
+ rxm->packet_type = ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);

if (likely(pkt_flags & PKT_RX_RSS_HASH))
rxm->hash.rss = rxd.wb.lower.hi_dword.rss;
@@ -1371,7 +1447,7 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
union ixgbe_adv_rx_desc rxd;
uint64_t dma; /* Physical address of mbuf data buffer */
uint32_t staterr;
- uint32_t hlen_type_rss;
+ uint16_t pkt_info;
uint16_t rx_id;
uint16_t nb_rx;
uint16_t nb_hold;
@@ -1550,13 +1626,15 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
* set in the pkt_flags field.
*/
first_seg->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
- hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
- pkt_flags = (pkt_flags |
- rx_desc_status_to_pkt_flags(staterr));
- pkt_flags = (pkt_flags |
- rx_desc_error_to_pkt_flags(staterr));
+ pkt_info = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.hs_rss.
+ pkt_info);
+ pkt_flags = rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags |
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
first_seg->ol_flags = pkt_flags;
+ first_seg->packet_type =
+ ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);

if (likely(pkt_flags & PKT_RX_RSS_HASH))
first_seg->hash.rss = rxd.wb.lower.hi_dword.rss;
--
1.9.3
Helin Zhang
2015-02-27 13:11:19 UTC
Permalink
In order to unify the packet type, the 'packet_type' field in
'struct rte_mbuf' needs to be extended from 16 to 32 bits.
Accordingly, some fields in 'struct rte_mbuf' are re-organized to
support this change for the vector PMD. As 'struct rte_kni_mbuf' for
KNI must map exactly to 'struct rte_mbuf', it is modified
accordingly. In addition, the vector PMD of ixgbe is disabled
by default, as the 'struct rte_mbuf' layout changed.

Signed-off-by: Helin Zhang <***@intel.com>
Signed-off-by: Cunming Liang <***@intel.com>
---
config/common_linuxapp | 2 +-
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 4 ++--
lib/librte_mbuf/rte_mbuf.h | 23 +++++++++++++++-------
3 files changed, 19 insertions(+), 10 deletions(-)

v2 changes:
* Enlarged the packet_type field from 16 bits to 32 bits.
* Redefined the packet type sub-fields.
* Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.

v3 changes:
* Put the mbuf layout changes into a single patch.
* Disabled vector ixgbe PMD by default, as mbuf layout changed.

diff --git a/config/common_linuxapp b/config/common_linuxapp
index 97f1c9e..97d7bae 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -166,7 +166,7 @@ CONFIG_RTE_LIBRTE_IXGBE_DEBUG_TX_FREE=n
CONFIG_RTE_LIBRTE_IXGBE_DEBUG_DRIVER=n
CONFIG_RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC=n
CONFIG_RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC=y
-CONFIG_RTE_IXGBE_INC_VECTOR=y
+CONFIG_RTE_IXGBE_INC_VECTOR=n
CONFIG_RTE_IXGBE_RX_OLFLAGS_ENABLE=y

#
diff --git a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
index 1e55c2d..bd1cc09 100644
--- a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
+++ b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
@@ -117,9 +117,9 @@ struct rte_kni_mbuf {
uint16_t data_off; /**< Start address of data in segment buffer. */
char pad1[4];
uint64_t ol_flags; /**< Offload features. */
- char pad2[2];
- uint16_t data_len; /**< Amount of data in segment buffer. */
+ char pad2[4];
uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
+ uint16_t data_len; /**< Amount of data in segment buffer. */

/* fields on second cache line */
char pad3[8] __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)));
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 17ba791..f5b7a8b 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -258,17 +258,26 @@ struct rte_mbuf {
/* remaining bytes are set on RX when pulling packet from descriptor */
MARKER rx_descriptor_fields1;

- /**
- * The packet type, which is used to indicate ordinary packet and also
- * tunneled packet format, i.e. each number is represented a type of
- * packet.
+ /*
+ * The packet type, which is the combination of outer/inner L2, L3, L4
+ * and tunnel types.
*/
- uint16_t packet_type;
+ union {
+ uint32_t packet_type; /**< L2/L3/L4 and tunnel information. */
+ struct {
+ uint32_t l2_type:4; /**< (Outer) L2 type. */
+ uint32_t l3_type:4; /**< (Outer) L3 type. */
+ uint32_t l4_type:4; /**< (Outer) L4 type. */
+ uint32_t tun_type:4; /**< Tunnel type. */
+ uint32_t inner_l2_type:4; /**< Inner L2 type. */
+ uint32_t inner_l3_type:4; /**< Inner L3 type. */
+ uint32_t inner_l4_type:4; /**< Inner L4 type. */
+ };
+ };

- uint16_t data_len; /**< Amount of data in segment buffer. */
uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
+ uint16_t data_len; /**< Amount of data in segment buffer. */
uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
- uint16_t reserved;
union {
uint32_t rss; /**< RSS hash result if RSS enabled */
struct {
--
1.9.3
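
The anonymous union above lets a PMD write the whole 32-bit classification in
one store while applications can still read individual sub-fields by name. A
standalone mirror of that layout (illustration only; bit-field ordering is
implementation-defined, so the printed value assumes GCC on a little-endian
target):

#include <stdint.h>
#include <stdio.h>

/* Standalone mirror of the packet_type union from the patch above. */
union ex_packet_type {
        uint32_t packet_type; /* whole-field access, as PMDs use */
        struct {
                uint32_t l2_type:4;
                uint32_t l3_type:4;
                uint32_t l4_type:4;
                uint32_t tun_type:4;
                uint32_t inner_l2_type:4;
                uint32_t inner_l3_type:4;
                uint32_t inner_l4_type:4;
        };
};

int main(void)
{
        union ex_packet_type pt = { .packet_type = 0 };

        pt.l3_type = 1; /* assumed IPv4 code */
        pt.l4_type = 2; /* assumed UDP code */
        /* With GCC on little-endian this prints 0x00000210. */
        printf("packet_type = 0x%08x\n", pt.packet_type);
        return 0;
}
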
Helin Zhang
2015-02-27 13:11:20 UTC
Permalink
To unify the packet type, the packet type bit masks in ol_flags are
replaced. In addition, more packet types (UDP, TCP and SCTP) are
supported in the vectorized ixgbe PMD.
Note that around a 2% performance drop (64B) was observed when doing
4-port (1 port per 82599 card) IO forwarding on the same SNB core.

Signed-off-by: Cunming Liang <***@intel.com>
Signed-off-by: Helin Zhang <***@intel.com>
---
config/common_linuxapp | 2 +-
lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c | 49 +++++++++++++++++++----------------
2 files changed, 27 insertions(+), 24 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.

v3 changes:
* Put vector ixgbe changes right after mbuf changes.
* Enabled vector ixgbe PMD by default together with changes for updated
vector PMD.

diff --git a/config/common_linuxapp b/config/common_linuxapp
index 97d7bae..97f1c9e 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -166,7 +166,7 @@ CONFIG_RTE_LIBRTE_IXGBE_DEBUG_TX_FREE=n
CONFIG_RTE_LIBRTE_IXGBE_DEBUG_DRIVER=n
CONFIG_RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC=n
CONFIG_RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC=y
-CONFIG_RTE_IXGBE_INC_VECTOR=n
+CONFIG_RTE_IXGBE_INC_VECTOR=y
CONFIG_RTE_IXGBE_RX_OLFLAGS_ENABLE=y

#
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
index 1f46f0f..eeb0ffb 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
@@ -134,44 +134,35 @@ ixgbe_rxq_rearm(struct igb_rx_queue *rxq)
*/
#ifdef RTE_IXGBE_RX_OLFLAGS_ENABLE

-#define OLFLAGS_MASK ((uint16_t)(PKT_RX_VLAN_PKT | PKT_RX_IPV4_HDR |\
- PKT_RX_IPV4_HDR_EXT | PKT_RX_IPV6_HDR |\
- PKT_RX_IPV6_HDR_EXT))
-#define OLFLAGS_MASK_V (((uint64_t)OLFLAGS_MASK << 48) | \
- ((uint64_t)OLFLAGS_MASK << 32) | \
- ((uint64_t)OLFLAGS_MASK << 16) | \
- ((uint64_t)OLFLAGS_MASK))
-#define PTYPE_SHIFT (1)
+#define OLFLAGS_MASK_V (((uint64_t)PKT_RX_VLAN_PKT << 48) | \
+ ((uint64_t)PKT_RX_VLAN_PKT << 32) | \
+ ((uint64_t)PKT_RX_VLAN_PKT << 16) | \
+ ((uint64_t)PKT_RX_VLAN_PKT))
#define VTAG_SHIFT (3)

static inline void
desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
{
- __m128i ptype0, ptype1, vtag0, vtag1;
+ __m128i vtag0, vtag1;
union {
uint16_t e[4];
uint64_t dword;
} vol;

- ptype0 = _mm_unpacklo_epi16(descs[0], descs[1]);
- ptype1 = _mm_unpacklo_epi16(descs[2], descs[3]);
vtag0 = _mm_unpackhi_epi16(descs[0], descs[1]);
vtag1 = _mm_unpackhi_epi16(descs[2], descs[3]);

- ptype1 = _mm_unpacklo_epi32(ptype0, ptype1);
vtag1 = _mm_unpacklo_epi32(vtag0, vtag1);
-
- ptype1 = _mm_slli_epi16(ptype1, PTYPE_SHIFT);
vtag1 = _mm_srli_epi16(vtag1, VTAG_SHIFT);

- ptype1 = _mm_or_si128(ptype1, vtag1);
- vol.dword = _mm_cvtsi128_si64(ptype1) & OLFLAGS_MASK_V;
+ vol.dword = _mm_cvtsi128_si64(vtag1) & OLFLAGS_MASK_V;

rx_pkts[0]->ol_flags = vol.e[0];
rx_pkts[1]->ol_flags = vol.e[1];
rx_pkts[2]->ol_flags = vol.e[2];
rx_pkts[3]->ol_flags = vol.e[3];
}
+
#else
#define desc_to_olflags_v(desc, rx_pkts) do {} while (0)
#endif
@@ -197,13 +188,15 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
uint64_t var;
__m128i shuf_msk;
__m128i crc_adjust = _mm_set_epi16(
- 0, 0, 0, 0, /* ignore non-length fields */
+ 0, 0, 0, /* ignore non-length fields */
+ -rxq->crc_len, /* sub crc on data_len */
0, /* ignore high-16bits of pkt_len */
-rxq->crc_len, /* sub crc on pkt_len */
- -rxq->crc_len, /* sub crc on data_len */
- 0 /* ignore pkt_type field */
+ 0, 0 /* ignore pkt_type field */
);
__m128i dd_check, eop_check;
+ __m128i desc_mask = _mm_set_epi32(0xFFFFFFFF, 0xFFFFFFFF,
+ 0xFFFFFFFF, 0xFFFF07F0);

if (unlikely(nb_pkts < RTE_IXGBE_VPMD_RX_BURST))
return 0;
@@ -234,12 +227,13 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* mask to shuffle from desc. to mbuf */
shuf_msk = _mm_set_epi8(
7, 6, 5, 4, /* octet 4~7, 32bits rss */
- 0xFF, 0xFF, /* skip high 16 bits vlan_macip, zero out */
15, 14, /* octet 14~15, low 16 bits vlan_macip */
+ 13, 12, /* octet 12~13, 16 bits data_len */
0xFF, 0xFF, /* skip high 16 bits pkt_len, zero out */
13, 12, /* octet 12~13, low 16 bits pkt_len */
- 13, 12, /* octet 12~13, 16 bits data_len */
- 0xFF, 0xFF /* skip pkt_type field */
+ 0xFF, 0xFF, /* skip high 16 bits pkt_type */
+ 1, /* octet 1, 8 bits pkt_type field */
+ 0 /* octet 0, 4 bits offset 4 pkt_type field */
);

/* Cache is empty -> need to scan the buffer rings, but first move
@@ -248,6 +242,7 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,

/*
* A. load 4 packet in one loop
+ * [A*. mask out 4 unused dirty field in desc]
* B. copy 4 mbuf point from swring to rx_pkts
* C. calc the number of DD bits among the 4 packets
* [C*. extract the end-of-packet bit, if requested]
@@ -289,6 +284,14 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* B.2 copy 2 mbuf point into rx_pkts */
_mm_storeu_si128((__m128i *)&rx_pkts[pos+2], mbp2);

+ /* A* mask out 0~3 bits RSS type */
+ descs[3] = _mm_and_si128(descs[3], desc_mask);
+ descs[2] = _mm_and_si128(descs[2], desc_mask);
+
+ /* A* mask out 0~3 bits RSS type */
+ descs[1] = _mm_and_si128(descs[1], desc_mask);
+ descs[0] = _mm_and_si128(descs[0], desc_mask);
+
/* avoid compiler reorder optimization */
rte_compiler_barrier();

@@ -301,7 +304,7 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* C.1 4=>2 filter staterr info only */
sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);

- /* set ol_flags with packet type and vlan tag */
+ /* set ol_flags with vlan packet type */
desc_to_olflags_v(descs, &rx_pkts[pos]);

/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
--
1.9.3
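
The rearranged shuf_msk above is the heart of the vector RX path: one
_mm_shuffle_epi8() moves descriptor bytes straight into mbuf field positions,
and 0xFF entries zero lanes out. A self-contained demonstration of that
mechanism, compiled with -mssse3 (byte indices are for illustration, not the
real descriptor layout):

#include <stdint.h>
#include <stdio.h>
#include <tmmintrin.h> /* SSSE3: _mm_shuffle_epi8 */

int main(void)
{
        uint8_t src[16], dst[16];
        int i;

        for (i = 0; i < 16; i++)
                src[i] = (uint8_t)i; /* recognizable source bytes */

        /* Each mask byte names the source byte for that output lane;
         * 0xFF (high bit set) zeroes the lane, as shuf_msk does for the
         * high halves of pkt_len and pkt_type. _mm_set_epi8 arguments
         * run from lane 15 down to lane 0. */
        __m128i msk = _mm_set_epi8(7, 6, 5, 4, 15, 14, 13, 12,
                                   (char)0xFF, (char)0xFF, 13, 12,
                                   (char)0xFF, (char)0xFF, 1, 0);
        __m128i v = _mm_shuffle_epi8(_mm_loadu_si128((__m128i *)src), msk);

        _mm_storeu_si128((__m128i *)dst, v);
        for (i = 0; i < 16; i++)
                printf("%02x ", dst[i]); /* 00 01 00 00 0c 0d 00 00 ... */
        printf("\n");
        return 0;
}
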
Helin Zhang
2015-02-27 13:11:25 UTC
Permalink
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.

Signed-off-by: Helin Zhang <***@intel.com>
---
lib/librte_pmd_enic/enic_main.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.

diff --git a/lib/librte_pmd_enic/enic_main.c b/lib/librte_pmd_enic/enic_main.c
index c66f139..701d506 100644
--- a/lib/librte_pmd_enic/enic_main.c
+++ b/lib/librte_pmd_enic/enic_main.c
@@ -423,7 +423,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->pkt_len = bytes_written;

if (ipv4) {
- rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |= PKT_RX_IP_CKSUM_BAD;
@@ -432,7 +432,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->ol_flags |= PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
- rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
} else {
/* Header split */
if (sop && !eop) {
@@ -445,7 +445,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
*rx_pkt_bucket = rx_pkt;
rx_pkt->pkt_len = bytes_written;
if (ipv4) {
- rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |=
@@ -457,13 +457,14 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
- rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
} else {
/* Payload */
hdr_rx_pkt = *rx_pkt_bucket;
hdr_rx_pkt->pkt_len += bytes_written;
if (ipv4) {
- hdr_rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+ hdr_rx_pkt->packet_type =
+ RTE_PTYPE_L3_IPV4;
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
hdr_rx_pkt->ol_flags |=
@@ -475,7 +476,8 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
- hdr_rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+ hdr_rx_pkt->packet_type =
+ RTE_PTYPE_L3_IPV6;

}
}
--
1.9.3
Helin Zhang
2015-02-27 13:11:30 UTC
Permalink
Several useless code lines were added accidentally and they block
packet type unification. They should be deleted.

Signed-off-by: Helin Zhang <***@intel.com>
---
app/test/packet_burst_generator.c | 10 ----------
1 file changed, 10 deletions(-)

v4 changes:
* Removed several useless code lines which block packet type unification.

diff --git a/app/test/packet_burst_generator.c b/app/test/packet_burst_generator.c
index b46eed7..b9f8f1a 100644
--- a/app/test/packet_burst_generator.c
+++ b/app/test/packet_burst_generator.c
@@ -272,19 +272,9 @@ nomore_mbuf:
if (ipv4) {
pkt->vlan_tci = ETHER_TYPE_IPv4;
pkt->l3_len = sizeof(struct ipv4_hdr);
-
- if (vlan_enabled)
- pkt->ol_flags = PKT_RX_IPV4_HDR | PKT_RX_VLAN_PKT;
- else
- pkt->ol_flags = PKT_RX_IPV4_HDR;
} else {
pkt->vlan_tci = ETHER_TYPE_IPv6;
pkt->l3_len = sizeof(struct ipv6_hdr);
-
- if (vlan_enabled)
- pkt->ol_flags = PKT_RX_IPV6_HDR | PKT_RX_VLAN_PKT;
- else
- pkt->ol_flags = PKT_RX_IPV6_HDR;
}

pkts_burst[nb_pkt] = pkt;
--
1.9.3
Helin Zhang
2015-02-27 13:11:33 UTC
Permalink
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.

Signed-off-by: Helin Zhang <***@intel.com>
---
examples/l3fwd-acl/main.c | 19 ++++++-------------
1 file changed, 6 insertions(+), 13 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.

diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index e851768..5df2e83 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -648,9 +648,7 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
struct ipv4_hdr *ipv4_hdr;
struct rte_mbuf *pkt = pkts_in[index];

- int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
-
- if (type == PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {

ipv4_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(pkt,
unsigned char *) + sizeof(struct ether_hdr));
@@ -671,8 +669,7 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
rte_pktmbuf_free(pkt);
}

- } else if (type == PKT_RX_IPV6_HDR) {
-
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -690,17 +687,13 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
{
struct rte_mbuf *pkt = pkts_in[index];

- int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
-
- if (type == PKT_RX_IPV4_HDR) {
-
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
/* Fill acl structure */
acl->data_ipv4[acl->num_ipv4] = MBUF_IPV4_2PROTO(pkt);
acl->m_ipv4[(acl->num_ipv4)++] = pkt;


- } else if (type == PKT_RX_IPV6_HDR) {
-
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -748,9 +741,9 @@ send_one_packet(struct rte_mbuf *m, uint32_t res)
/* in the ACL list, drop it */
#ifdef L3FWDACL_DEBUG
if ((res & ACL_DENY_SIGNATURE) != 0) {
- if (m->ol_flags & PKT_RX_IPV4_HDR)
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type))
dump_acl4_rule(m, res);
- else
+ else if (RTE_ETH_IS_IPV6_HDR(m->packet_type))
dump_acl6_rule(m, res);
}
#endif
--
1.9.3
Helin Zhang
2015-02-27 13:11:21 UTC
Permalink
There are only 6 bit flags in ol_flags for indicating packet types,
which is not enough to describe all the possible packet types hardware
can recognize. For example, i40e hardware can recognize more than 150
packet types. The unified packet type is composed of L2 type, L3 type,
L4 type, tunnel type, inner L2 type, inner L3 type and inner L4 type
fields, and is stored in the 32-bit 'packet_type' field of
'struct rte_mbuf'.

Signed-off-by: Helin Zhang <***@intel.com>
---
lib/librte_mbuf/rte_mbuf.h | 253 +++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 253 insertions(+)

v3 changes:
* Put the definitions of unified packet type into a single patch.

v4 changes:
* Added detailed descriptions of each packet type.
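An illustrative sketch of how an application decomposes a unified
packet_type value once the definitions below are in place (dump_ptype()
is a hypothetical helper, not part of the patch):

#include <stdio.h>
#include <rte_mbuf.h>

static void
dump_ptype(const struct rte_mbuf *m)
{
	uint32_t ptype = m->packet_type;

	if (RTE_ETH_IS_IPV4_HDR(ptype))
		printf("outer L3: IPv4 family\n");
	else if (RTE_ETH_IS_IPV6_HDR(ptype))
		printf("outer L3: IPv6 family\n");

	if ((ptype & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_TCP)
		printf("outer L4: TCP\n");

	if (RTE_ETH_IS_TUNNEL_PKT(ptype))
		/* inner fields are meaningful only for tunneled packets */
		printf("tunnel, inner L3 field: 0x%x\n",
			ptype & RTE_PTYPE_INNER_L3_MASK);
}

Each field is compared against its mask as an index value, not tested
bit by bit; only the IPv4/IPv6 checks use the single-bit shortcut
described in the annotations below.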

diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index f5b7a8b..8de57fd 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -194,6 +194,259 @@ extern "C" {
/* Use final bit of flags to indicate a control mbuf */
#define CTRL_MBUF_FLAG (1ULL << 63) /**< Mbuf contains control data */

+/*
+ * 32 bits are divided into several fields to mark packet types. Note that
+ * the value of each field is an index rather than a bit mask.
+ * - Bits 3:0 are for L2 types.
+ * - Bits 7:4 are for L3 or outer L3 (for tunneling case) types.
+ * - Bits 11:8 are for L4 or outer L4 (for tunneling case) types.
+ * - Bits 15:12 are for tunnel types.
+ * - Bits 19:16 are for inner L2 types.
+ * - Bits 23:20 are for inner L3 types.
+ * - Bits 27:24 are for inner L4 types.
+ * - Bits 31:28 are reserved.
+ *
+ * To be compatible with the vector PMD, RTE_PTYPE_L3_IPV4, RTE_PTYPE_L3_IPV4_EXT,
+ * RTE_PTYPE_L3_IPV6, RTE_PTYPE_L3_IPV6_EXT, RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP
+ * and RTE_PTYPE_L4_SCTP should be kept within the contiguous 7 bits defined
+ * below.
+ *
+ * Note that the L3 type values are selected so that checking for an IPv4 or
+ * IPv6 header stays cheap. Read the annotations of RTE_ETH_IS_IPV4_HDR and
+ * RTE_ETH_IS_IPV6_HDR before making any future change to the L3 type values.
+ */
+#define RTE_PTYPE_UNKNOWN 0x00000000
+/**
+ * MAC (Media Access Control) packet type.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L2_MAC 0x00000001
+/**
+ * MAC (Media Access Control) packet type for time sync.
+ */
+#define RTE_PTYPE_L2_MAC_TIMESYNC 0x00000002
+/**
+ * ARP (Address Resolution Protocol) packet type.
+ */
+#define RTE_PTYPE_L2_ARP 0x00000003
+/**
+ * LLDP (Link Layer Discovery Protocol) packet type.
+ */
+#define RTE_PTYPE_L2_LLDP 0x00000004
+/**
+ * Mask of layer 2 packet types.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L2_MASK 0x0000000f
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for outer packet for tunneling cases, and does not contain any
+ * header option.
+ */
+#define RTE_PTYPE_L3_IPV4 0x00000010
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for outer packet for tunneling cases, and contains header
+ * options.
+ */
+#define RTE_PTYPE_L3_IPV4_EXT 0x00000030
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for outer packet for tunneling cases, and does not contain any
+ * extension header.
+ */
+#define RTE_PTYPE_L3_IPV6 0x00000040
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for outer packet for tunneling cases, and may or may not contain
+ * header options.
+ */
+#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x00000090
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for outer packet for tunneling cases, and contains extension
+ * headers.
+ */
+#define RTE_PTYPE_L3_IPV6_EXT 0x000000c0
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for outer packet for tunneling cases, and may or may not contain
+ * extension headers.
+ */
+#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x000000e0
+/**
+ * Mask of layer 3 packet types.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L3_MASK 0x000000f0
+/**
+ * TCP (Transmission Control Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L4_TCP 0x00000100
+/**
+ * UDP (User Datagram Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L4_UDP 0x00000200
+/**
+ * Fragmented IP (Internet Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L4_FRAG 0x00000300
+/**
+ * SCTP (Stream Control Transmission Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L4_SCTP 0x00000400
+/**
+ * ICMP (Internet Control Message Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L4_ICMP 0x00000500
+/**
+ * Non-fragmented IP (Internet Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L4_NONFRAG 0x00000600
+/**
+ * Mask of layer 4 packet types.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L4_MASK 0x00000f00
+/**
+ * IP (Internet Protocol) in IP (Internet Protocol) tunneling packet type.
+ */
+#define RTE_PTYPE_TUNNEL_IP 0x00001000
+/**
+ * GRE (Generic Routing Encapsulation) tunneling packet type.
+ */
+#define RTE_PTYPE_TUNNEL_GRE 0x00002000
+/**
+ * VXLAN (Virtual eXtensible Local Area Network) tunneling packet type.
+ */
+#define RTE_PTYPE_TUNNEL_VXLAN 0x00003000
+/**
+ * NVGRE (Network Virtualization using Generic Routing Encapsulation) tunneling
+ * packet type.
+ */
+#define RTE_PTYPE_TUNNEL_NVGRE 0x00004000
+/**
+ * GENEVE (Generic Network Virtualization Encapsulation) tunneling packet type.
+ */
+#define RTE_PTYPE_TUNNEL_GENEVE 0x00005000
+/**
+ * Tunneling packet type of Teredo, VXLAN (Virtual eXtensible Local Area
+ * Network) or GRE (Generic Routing Encapsulation).
+ * It is used for tunneling packet type, which is unknown but must be one of
+ * Teredo, VXLAN or GRE.
+ */
+#define RTE_PTYPE_TUNNEL_GRENAT 0x00006000
+/**
+ * Mask of tunneling packet types.
+ */
+#define RTE_PTYPE_TUNNEL_MASK 0x0000f000
+/**
+ * MAC (Media Access Control) packet type.
+ * It is used for inner packet type only.
+ */
+#define RTE_PTYPE_INNER_L2_MAC 0x00010000
+/**
+ * MAC (Media Access Control) packet type with VLAN (Virtual Local Area
+ * Network) tag.
+ */
+#define RTE_PTYPE_INNER_L2_MAC_VLAN 0x00020000
+/**
+ * Mask of inner layer 2 packet types.
+ */
+#define RTE_PTYPE_INNER_L2_MASK 0x000f0000
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for inner packet only, and does not contain any header option.
+ */
+#define RTE_PTYPE_INNER_L3_IPV4 0x00100000
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for inner packet only, and contains header options.
+ */
+#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x00200000
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for inner packet only, and does not contain any extension header.
+ */
+#define RTE_PTYPE_INNER_L3_IPV6 0x00300000
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for inner packet only, and may or may not contain header options.
+ */
+#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x00400000
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for inner packet only, and contains extension headers.
+ */
+#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x00500000
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for inner packet only, and may or may not contain extension
+ * headers.
+ */
+#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x00600000
+/**
+ * Mask of inner layer 3 packet types.
+ */
+#define RTE_PTYPE_INNER_L3_MASK 0x00f00000
+/**
+ * TCP (Transmission Control Protocol) packet type.
+ * It is used for inner packet only.
+ */
+#define RTE_PTYPE_INNER_L4_TCP 0x01000000
+/**
+ * UDP (User Datagram Protocol) packet type.
+ * It is used for inner packet only.
+ */
+#define RTE_PTYPE_INNER_L4_UDP 0x02000000
+/**
+ * Fragmented IP (Internet Protocol) packet type.
+ * It is used for inner packet only, and may or may not have a layer 4 packet.
+ */
+#define RTE_PTYPE_INNER_L4_FRAG 0x03000000
+/**
+ * SCTP (Stream Control Transmission Protocol) packet type.
+ * It is used for inner packet only.
+ */
+#define RTE_PTYPE_INNER_L4_SCTP 0x04000000
+/**
+ * ICMP (Internet Control Message Protocol) packet type.
+ * It is used for inner packet only.
+ */
+#define RTE_PTYPE_INNER_L4_ICMP 0x05000000
+/**
+ * Non-fragmented IP (Internet Protocol) packet type.
+ * It is used for inner packet only, and may or may not have other unknown layer
+ * 4 packet types.
+ */
+#define RTE_PTYPE_INNER_L4_NONFRAG 0x06000000
+/**
+ * Mask of inner layer 4 packet types.
+ */
+#define RTE_PTYPE_INNER_L4_MASK 0x0f000000
+
+/**
+ * Check if the (outer) L3 header is IPv4. To avoid comparing IPv4 types one by
+ * one, bit 4 is selected to be used for IPv4 only. Then checking bit 4 can
+ * determine if it is an IPv4 packet.
+ */
+#define RTE_ETH_IS_IPV4_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV4)
+
+/**
+ * Check if the (outer) L3 header is IPv6. To avoid comparing IPv6 types one by
+ * one, bit 6 is selected to be used for IPv6 only. Then checking bit 6 can
+ * determine if it is an IPv6 packet.
+ */
+#define RTE_ETH_IS_IPV6_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV6)
+
+/* Check if it is a tunneling packet */
+#define RTE_ETH_IS_TUNNEL_PKT(ptype) ((ptype) & RTE_PTYPE_TUNNEL_MASK)
+
/**
* Get the name of a RX offload flag
*
--
1.9.3
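A quick standalone self-check of the encoding trick behind
RTE_ETH_IS_IPV4_HDR and RTE_ETH_IS_IPV6_HDR in the patch above (a
sketch, not part of the series):

#include <assert.h>
#include <rte_mbuf.h>

int main(void)
{
	/* every IPv4 L3 value (0x10, 0x30, 0x90) has bit 4 set and no
	 * IPv6 value does; every IPv6 value (0x40, 0xc0, 0xe0) has bit 6
	 * set and no IPv4 value does, so one AND replaces a chain of
	 * equality tests */
	assert(RTE_ETH_IS_IPV4_HDR(RTE_PTYPE_L3_IPV4_EXT_UNKNOWN));
	assert(!RTE_ETH_IS_IPV4_HDR(RTE_PTYPE_L3_IPV6_EXT));
	assert(RTE_ETH_IS_IPV6_HDR(RTE_PTYPE_L3_IPV6_EXT_UNKNOWN));
	assert(!RTE_ETH_IS_IPV6_HDR(RTE_PTYPE_L3_IPV4_EXT));
	return 0;
}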
Helin Zhang
2015-02-27 13:11:27 UTC
Permalink
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.

Signed-off-by: Helin Zhang <***@intel.com>
---
lib/librte_pmd_fm10k/fm10k_rxtx.c | 30 +++++++++++++++++++++---------
1 file changed, 21 insertions(+), 9 deletions(-)

v4 changes:
* Added unified packet type support for fm10k.
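The change below follows a single pattern: one cache-aligned lookup
table maps every raw hardware ptype value to a combined RTE_PTYPE_*
value, so the hot path is a masked load with no branches, and reserved
indices fall through to 0 (RTE_PTYPE_UNKNOWN). A standalone sketch of
the pattern with made-up constants (HW_PTYPE_MASK/SHIFT and the table
entries are illustrative, not fm10k's):

#include <stdint.h>
#include <stdio.h>

#define HW_PTYPE_MASK	0x0e	/* pretend 3-bit ptype field */
#define HW_PTYPE_SHIFT	1

/* one entry per possible field value; uninitialized = unknown */
static const uint32_t ptype_tbl[(HW_PTYPE_MASK >> HW_PTYPE_SHIFT) + 1] = {
	[1] = 0x00000011,	/* e.g. L2 MAC | L3 IPv4 */
	[3] = 0x00000041,	/* e.g. L2 MAC | L3 IPv6 */
};

int main(void)
{
	uint16_t pkt_info = 0x06;	/* pretend descriptor word */

	printf("packet_type = 0x%08x\n",
		ptype_tbl[(pkt_info & HW_PTYPE_MASK) >> HW_PTYPE_SHIFT]);
	return 0;	/* prints 0x00000041: index 3 */
}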

diff --git a/lib/librte_pmd_fm10k/fm10k_rxtx.c b/lib/librte_pmd_fm10k/fm10k_rxtx.c
index 83bddfc..2a2e778 100644
--- a/lib/librte_pmd_fm10k/fm10k_rxtx.c
+++ b/lib/librte_pmd_fm10k/fm10k_rxtx.c
@@ -65,13 +65,29 @@ static inline void dump_rxd(union fm10k_rx_desc *rxd)
static inline void
rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d)
{
- uint16_t ptype;
- static const uint16_t pt_lut[] = { 0,
- PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT,
- PKT_RX_IPV6_HDR, PKT_RX_IPV6_HDR_EXT,
- 0, 0, 0
+ static const uint32_t
+ ptype_table[(FM10K_RXD_PKTTYPE_MASK >> FM10K_RXD_PKTTYPE_SHIFT) + 1]
+ __rte_cache_aligned = {
+ [FM10K_PKTTYPE_OTHER] = RTE_PTYPE_L2_MAC,
+ [FM10K_PKTTYPE_IPV4] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4,
+ [FM10K_PKTTYPE_IPV4_EX] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [FM10K_PKTTYPE_IPV6] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6,
+ [FM10K_PKTTYPE_IPV6_EX] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [FM10K_PKTTYPE_IPV4 | FM10K_PKTTYPE_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [FM10K_PKTTYPE_IPV6 | FM10K_PKTTYPE_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [FM10K_PKTTYPE_IPV4 | FM10K_PKTTYPE_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [FM10K_PKTTYPE_IPV6 | FM10K_PKTTYPE_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
};

+ m->packet_type = ptype_table[(d->w.pkt_info & FM10K_RXD_PKTTYPE_MASK)
+ >> FM10K_RXD_PKTTYPE_SHIFT];
+
if (d->w.pkt_info & FM10K_RXD_RSSTYPE_MASK)
m->ol_flags |= PKT_RX_RSS_HASH;

@@ -93,10 +109,6 @@ rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d)

if (unlikely(d->d.staterr & FM10K_RXD_STATUS_RXE))
m->ol_flags |= PKT_RX_RECIP_ERR;
-
- ptype = (d->d.data & FM10K_RXD_PKTTYPE_MASK_L3) >>
- FM10K_RXD_PKTTYPE_SHIFT;
- m->ol_flags |= pt_lut[(uint8_t)ptype];
}

uint16_t
--
1.9.3
Helin Zhang
2015-02-27 13:11:24 UTC
Permalink
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.

Signed-off-by: Helin Zhang <***@intel.com>
---
lib/librte_pmd_i40e/i40e_rxtx.c | 786 ++++++++++++++++++++++++++--------------
1 file changed, 512 insertions(+), 274 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
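Condensed, every RX path hunk below performs the same two steps;
rxd_to_packet_type() is a hypothetical helper written only to show the
pattern (the I40E_* names are the ones used in the diff):

static inline void
rxd_to_packet_type(struct rte_mbuf *mb, uint64_t qword1)
{
	/* the 8-bit hardware ptype lives in qword1 of the descriptor */
	uint8_t hw_ptype = (uint8_t)((qword1 & I40E_RXD_QW1_PTYPE_MASK) >>
				I40E_RXD_QW1_PTYPE_SHIFT);

	/* reserved indices map to 0, i.e. RTE_PTYPE_UNKNOWN */
	mb->packet_type = i40e_rxd_pkt_type_mapping(hw_ptype);
}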

diff --git a/lib/librte_pmd_i40e/i40e_rxtx.c b/lib/librte_pmd_i40e/i40e_rxtx.c
index 12c0831..6764978 100644
--- a/lib/librte_pmd_i40e/i40e_rxtx.c
+++ b/lib/librte_pmd_i40e/i40e_rxtx.c
@@ -151,272 +151,511 @@ i40e_rxd_error_to_pkt_flags(uint64_t qword)
return flags;
}

-/* Translate pkt types to pkt flags */
-static inline uint64_t
-i40e_rxd_ptype_to_pkt_flags(uint64_t qword)
+/* For the detailed meaning of each value, refer to the hardware datasheet */
+static inline uint32_t
+i40e_rxd_pkt_type_mapping(uint8_t ptype)
{
- uint8_t ptype = (uint8_t)((qword & I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
- static const uint64_t ip_ptype_map[I40E_MAX_PKT_TYPE] = {
- 0, /* PTYPE 0 */
- 0, /* PTYPE 1 */
- 0, /* PTYPE 2 */
- 0, /* PTYPE 3 */
- 0, /* PTYPE 4 */
- 0, /* PTYPE 5 */
- 0, /* PTYPE 6 */
- 0, /* PTYPE 7 */
- 0, /* PTYPE 8 */
- 0, /* PTYPE 9 */
- 0, /* PTYPE 10 */
- 0, /* PTYPE 11 */
- 0, /* PTYPE 12 */
- 0, /* PTYPE 13 */
- 0, /* PTYPE 14 */
- 0, /* PTYPE 15 */
- 0, /* PTYPE 16 */
- 0, /* PTYPE 17 */
- 0, /* PTYPE 18 */
- 0, /* PTYPE 19 */
- 0, /* PTYPE 20 */
- 0, /* PTYPE 21 */
- PKT_RX_IPV4_HDR, /* PTYPE 22 */
- PKT_RX_IPV4_HDR, /* PTYPE 23 */
- PKT_RX_IPV4_HDR, /* PTYPE 24 */
- 0, /* PTYPE 25 */
- PKT_RX_IPV4_HDR, /* PTYPE 26 */
- PKT_RX_IPV4_HDR, /* PTYPE 27 */
- PKT_RX_IPV4_HDR, /* PTYPE 28 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 29 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 30 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 31 */
- 0, /* PTYPE 32 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 33 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 34 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 35 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 36 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 37 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 38 */
- 0, /* PTYPE 39 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 40 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 41 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 42 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 43 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 44 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 45 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 46 */
- 0, /* PTYPE 47 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 48 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 49 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 50 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 51 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 52 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 53 */
- 0, /* PTYPE 54 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 55 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 56 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 57 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 58 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 59 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 60 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 61 */
- 0, /* PTYPE 62 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 63 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 64 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 65 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 66 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 67 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 68 */
- 0, /* PTYPE 69 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 70 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 71 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 72 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 73 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 74 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 75 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 76 */
- 0, /* PTYPE 77 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 78 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 79 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 80 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 81 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 82 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 83 */
- 0, /* PTYPE 84 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 85 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 86 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 87 */
- PKT_RX_IPV6_HDR, /* PTYPE 88 */
- PKT_RX_IPV6_HDR, /* PTYPE 89 */
- PKT_RX_IPV6_HDR, /* PTYPE 90 */
- 0, /* PTYPE 91 */
- PKT_RX_IPV6_HDR, /* PTYPE 92 */
- PKT_RX_IPV6_HDR, /* PTYPE 93 */
- PKT_RX_IPV6_HDR, /* PTYPE 94 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 95 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 96 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 97 */
- 0, /* PTYPE 98 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 99 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 100 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 101 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 102 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 103 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 104 */
- 0, /* PTYPE 105 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 106 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 107 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 108 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 109 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 110 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 111 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 112 */
- 0, /* PTYPE 113 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 114 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 115 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 116 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 117 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 118 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 119 */
- 0, /* PTYPE 120 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 121 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 122 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 123 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 124 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 125 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 126 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 127 */
- 0, /* PTYPE 128 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 129 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 130 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 131 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 132 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 133 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 134 */
- 0, /* PTYPE 135 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 136 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 137 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 138 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 139 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 140 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 141 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 142 */
- 0, /* PTYPE 143 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 144 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 145 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 146 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 147 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 148 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 149 */
- 0, /* PTYPE 150 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 151 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 152 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 153 */
- 0, /* PTYPE 154 */
- 0, /* PTYPE 155 */
- 0, /* PTYPE 156 */
- 0, /* PTYPE 157 */
- 0, /* PTYPE 158 */
- 0, /* PTYPE 159 */
- 0, /* PTYPE 160 */
- 0, /* PTYPE 161 */
- 0, /* PTYPE 162 */
- 0, /* PTYPE 163 */
- 0, /* PTYPE 164 */
- 0, /* PTYPE 165 */
- 0, /* PTYPE 166 */
- 0, /* PTYPE 167 */
- 0, /* PTYPE 168 */
- 0, /* PTYPE 169 */
- 0, /* PTYPE 170 */
- 0, /* PTYPE 171 */
- 0, /* PTYPE 172 */
- 0, /* PTYPE 173 */
- 0, /* PTYPE 174 */
- 0, /* PTYPE 175 */
- 0, /* PTYPE 176 */
- 0, /* PTYPE 177 */
- 0, /* PTYPE 178 */
- 0, /* PTYPE 179 */
- 0, /* PTYPE 180 */
- 0, /* PTYPE 181 */
- 0, /* PTYPE 182 */
- 0, /* PTYPE 183 */
- 0, /* PTYPE 184 */
- 0, /* PTYPE 185 */
- 0, /* PTYPE 186 */
- 0, /* PTYPE 187 */
- 0, /* PTYPE 188 */
- 0, /* PTYPE 189 */
- 0, /* PTYPE 190 */
- 0, /* PTYPE 191 */
- 0, /* PTYPE 192 */
- 0, /* PTYPE 193 */
- 0, /* PTYPE 194 */
- 0, /* PTYPE 195 */
- 0, /* PTYPE 196 */
- 0, /* PTYPE 197 */
- 0, /* PTYPE 198 */
- 0, /* PTYPE 199 */
- 0, /* PTYPE 200 */
- 0, /* PTYPE 201 */
- 0, /* PTYPE 202 */
- 0, /* PTYPE 203 */
- 0, /* PTYPE 204 */
- 0, /* PTYPE 205 */
- 0, /* PTYPE 206 */
- 0, /* PTYPE 207 */
- 0, /* PTYPE 208 */
- 0, /* PTYPE 209 */
- 0, /* PTYPE 210 */
- 0, /* PTYPE 211 */
- 0, /* PTYPE 212 */
- 0, /* PTYPE 213 */
- 0, /* PTYPE 214 */
- 0, /* PTYPE 215 */
- 0, /* PTYPE 216 */
- 0, /* PTYPE 217 */
- 0, /* PTYPE 218 */
- 0, /* PTYPE 219 */
- 0, /* PTYPE 220 */
- 0, /* PTYPE 221 */
- 0, /* PTYPE 222 */
- 0, /* PTYPE 223 */
- 0, /* PTYPE 224 */
- 0, /* PTYPE 225 */
- 0, /* PTYPE 226 */
- 0, /* PTYPE 227 */
- 0, /* PTYPE 228 */
- 0, /* PTYPE 229 */
- 0, /* PTYPE 230 */
- 0, /* PTYPE 231 */
- 0, /* PTYPE 232 */
- 0, /* PTYPE 233 */
- 0, /* PTYPE 234 */
- 0, /* PTYPE 235 */
- 0, /* PTYPE 236 */
- 0, /* PTYPE 237 */
- 0, /* PTYPE 238 */
- 0, /* PTYPE 239 */
- 0, /* PTYPE 240 */
- 0, /* PTYPE 241 */
- 0, /* PTYPE 242 */
- 0, /* PTYPE 243 */
- 0, /* PTYPE 244 */
- 0, /* PTYPE 245 */
- 0, /* PTYPE 246 */
- 0, /* PTYPE 247 */
- 0, /* PTYPE 248 */
- 0, /* PTYPE 249 */
- 0, /* PTYPE 250 */
- 0, /* PTYPE 251 */
- 0, /* PTYPE 252 */
- 0, /* PTYPE 253 */
- 0, /* PTYPE 254 */
- 0, /* PTYPE 255 */
+ static const uint32_t ptype_table[UINT8_MAX + 1] __rte_cache_aligned = {
+ /* L2 types */
+ /* [0] reserved */
+ [1] = RTE_PTYPE_L2_MAC,
+ [2] = RTE_PTYPE_L2_MAC_TIMESYNC,
+ /* [3] - [5] reserved */
+ [6] = RTE_PTYPE_L2_LLDP,
+ /* [7] - [10] reserved */
+ [11] = RTE_PTYPE_L2_ARP,
+ /* [12] - [21] reserved */
+
+ /* Non tunneled IPv4 */
+ [22] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [23] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [24] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [25] reserved */
+ [26] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [27] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [28] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv4 --> IPv4 */
+ [29] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [30] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [31] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [32] reserved */
+ [33] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [34] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [35] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> IPv6 */
+ [36] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [37] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [38] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [39] reserved */
+ [40] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [41] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [42] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN */
+ [43] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> IPv4 */
+ [44] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [45] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [46] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [47] reserved */
+ [48] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [49] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [50] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> IPv6 */
+ [51] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [52] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [53] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [54] reserved */
+ [55] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [56] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [57] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC */
+ [58] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+ [59] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [60] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [61] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [62] reserved */
+ [63] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [64] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [65] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+ [66] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [67] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [68] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [69] reserved */
+ [70] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [71] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [72] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+ [73] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+ [74] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [75] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [76] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [77] reserved */
+ [78] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [79] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [80] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+ [81] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [82] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [83] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [84] reserved */
+ [85] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [86] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [87] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* Non tunneled IPv6 */
+ [88] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [89] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [90] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [91] reserved */
+ [92] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [93] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [94] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv6 --> IPv4 */
+ [95] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [96] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [97] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [98] reserved */
+ [99] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [100] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [101] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> IPv6 */
+ [102] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [103] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [104] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [105] reserved */
+ [106] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [107] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [108] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN */
+ [109] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> IPv4 */
+ [110] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [111] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [112] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [113] reserved */
+ [114] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [115] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [116] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> IPv6 */
+ [117] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [118] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [119] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [120] reserved */
+ [121] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [122] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [123] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC */
+ [124] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+ [125] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [126] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [127] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [128] reserved */
+ [129] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [130] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [131] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+ [132] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [133] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [134] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [135] reserved */
+ [136] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [137] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [138] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+ [139] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+ [140] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [141] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [142] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [143] reserved */
+ [144] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [145] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [146] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+ [147] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [148] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [149] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [150] reserved */
+ [151] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [152] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [153] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* All others reserved */
};

- return ip_ptype_map[ptype];
+ return ptype_table[ptype];
}

#define I40E_RX_DESC_EXT_STATUS_FLEXBH_MASK 0x03
@@ -702,11 +941,11 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
rxdp[j].wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);

- mb->packet_type = (uint16_t)((qword1 &
- I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
+ mb->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >>
+ I40E_RXD_QW1_PTYPE_SHIFT));
if (pkt_flags & PKT_RX_RSS_HASH)
mb->hash.rss = rte_le_to_cpu_32(\
rxdp[j].wb.qword0.hi_dword.rss);
@@ -945,9 +1184,9 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
- rxm->packet_type = (uint16_t)((qword1 & I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
+ rxm->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
@@ -1104,10 +1343,9 @@ i40e_recv_scattered_pkts(void *rx_queue,
rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
- first_seg->packet_type = (uint16_t)((qword1 &
- I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
+ first_seg->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
--
1.9.3
Helin Zhang
2015-02-27 13:11:36 UTC
Permalink
Since the unified packet types are used instead, the old bit masks and
the relevant macros for packet type indication need to be removed.

Signed-off-by: Helin Zhang <***@intel.com>
---
lib/librte_mbuf/rte_mbuf.c | 6 ------
lib/librte_mbuf/rte_mbuf.h | 14 ++++----------
2 files changed, 4 insertions(+), 16 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
* Redefined the bit masks for packet RX offload flags.
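For applications the migration is mechanical; a sketch (handle_ipv4()
stands in for any application code keyed on the old flags):

/* before this patch: */
if (m->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV4_HDR_EXT))
	handle_ipv4(m);

/* after this patch: */
if (RTE_ETH_IS_IPV4_HDR(m->packet_type))
	handle_ipv4(m);

Note that RTE_ETH_IS_IPV4_HDR() also matches
RTE_PTYPE_L3_IPV4_EXT_UNKNOWN, which had no ol_flags equivalent.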

diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 4c940bd..9650099 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -213,14 +213,8 @@ const char *rte_get_rx_ol_flag_name(uint64_t mask)
/* case PKT_RX_HBUF_OVERFLOW: return "PKT_RX_HBUF_OVERFLOW"; */
/* case PKT_RX_RECIP_ERR: return "PKT_RX_RECIP_ERR"; */
/* case PKT_RX_MAC_ERR: return "PKT_RX_MAC_ERR"; */
- case PKT_RX_IPV4_HDR: return "PKT_RX_IPV4_HDR";
- case PKT_RX_IPV4_HDR_EXT: return "PKT_RX_IPV4_HDR_EXT";
- case PKT_RX_IPV6_HDR: return "PKT_RX_IPV6_HDR";
- case PKT_RX_IPV6_HDR_EXT: return "PKT_RX_IPV6_HDR_EXT";
case PKT_RX_IEEE1588_PTP: return "PKT_RX_IEEE1588_PTP";
case PKT_RX_IEEE1588_TMST: return "PKT_RX_IEEE1588_TMST";
- case PKT_RX_TUNNEL_IPV4_HDR: return "PKT_RX_TUNNEL_IPV4_HDR";
- case PKT_RX_TUNNEL_IPV6_HDR: return "PKT_RX_TUNNEL_IPV6_HDR";
default: return NULL;
}
}
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 8de57fd..fb30354 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -90,16 +90,10 @@ extern "C" {
#define PKT_RX_HBUF_OVERFLOW (0ULL << 0) /**< Header buffer overflow. */
#define PKT_RX_RECIP_ERR (0ULL << 0) /**< Hardware processing error. */
#define PKT_RX_MAC_ERR (0ULL << 0) /**< MAC error. */
-#define PKT_RX_IPV4_HDR (1ULL << 5) /**< RX packet with IPv4 header. */
-#define PKT_RX_IPV4_HDR_EXT (1ULL << 6) /**< RX packet with extended IPv4 header. */
-#define PKT_RX_IPV6_HDR (1ULL << 7) /**< RX packet with IPv6 header. */
-#define PKT_RX_IPV6_HDR_EXT (1ULL << 8) /**< RX packet with extended IPv6 header. */
-#define PKT_RX_IEEE1588_PTP (1ULL << 9) /**< RX IEEE1588 L2 Ethernet PT Packet. */
-#define PKT_RX_IEEE1588_TMST (1ULL << 10) /**< RX IEEE1588 L2/L4 timestamped packet.*/
-#define PKT_RX_TUNNEL_IPV4_HDR (1ULL << 11) /**< RX tunnel packet with IPv4 header.*/
-#define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet with IPv6 header. */
-#define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR match. */
-#define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
+#define PKT_RX_IEEE1588_PTP (1ULL << 5) /**< RX IEEE1588 L2 Ethernet PT Packet. */
+#define PKT_RX_IEEE1588_TMST (1ULL << 6) /**< RX IEEE1588 L2/L4 timestamped packet.*/
+#define PKT_RX_FDIR_ID (1ULL << 7) /**< FD id reported if FDIR match. */
+#define PKT_RX_FDIR_FLX (1ULL << 8) /**< Flexible bytes reported if FDIR match. */
/* add new RX flags here */

/* add new TX flags here */
--
1.9.3
Helin Zhang
2015-02-27 13:11:26 UTC
Permalink
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.

Signed-off-by: Helin Zhang <***@intel.com>
Acked-by: Yong Wang <***@vmware.com>
---
lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.

diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
index 4d8a010..831e676 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
@@ -650,9 +650,9 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
struct ipv4_hdr *ip = (struct ipv4_hdr *)(eth + 1);

if (((ip->version_ihl & 0xf) << 2) > (int)sizeof(struct ipv4_hdr))
- rxm->ol_flags |= PKT_RX_IPV4_HDR_EXT;
+ rxm->packet_type = RTE_PTYPE_L3_IPV4_EXT;
else
- rxm->ol_flags |= PKT_RX_IPV4_HDR;
+ rxm->packet_type = RTE_PTYPE_L3_IPV4;

if (!rcd->cnc) {
if (!rcd->ipc)
--
1.9.3
Helin Zhang
2015-02-27 13:11:28 UTC
Permalink
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.

Signed-off-by: Helin Zhang <***@intel.com>
---
app/test-pipeline/pipeline_hash.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)

v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.

diff --git a/app/test-pipeline/pipeline_hash.c b/app/test-pipeline/pipeline_hash.c
index 4598ad4..548615f 100644
--- a/app/test-pipeline/pipeline_hash.c
+++ b/app/test-pipeline/pipeline_hash.c
@@ -459,20 +459,21 @@ app_main_loop_rx_metadata(void) {
signature = RTE_MBUF_METADATA_UINT32_PTR(m, 0);
key = RTE_MBUF_METADATA_UINT8_PTR(m, 32);

- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
ip_hdr = (struct ipv4_hdr *)
&m_data[sizeof(struct ether_hdr)];
ip_dst = ip_hdr->dst_addr;

k32 = (uint32_t *) key;
k32[0] = ip_dst & 0xFFFFFF00;
- } else {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
ipv6_hdr = (struct ipv6_hdr *)
&m_data[sizeof(struct ether_hdr)];
ipv6_dst = ipv6_hdr->dst_addr;

memcpy(key, ipv6_dst, 16);
- }
+ } else
+ continue;

*signature = test_hash(key, 0, 0);
}
--
1.9.3