Discussion:
[dpdk-dev] [PATCH 0/5] net/qede: add enhancements and fixes
Rasesh Mody
2017-11-24 20:35:40 UTC
Hi,

This patch set adds enhancements and fixes for qede PMD.

Thanks!
Rasesh

Harish Patil (3):
net/qede: fix to enable LRO over tunnels
examples/kni: add optional parameter to enable LRO
net/qede: fix to reject config with no Rx queue

Shahed Shaikh (2):
app/testpmd: add configuration for udp port tunnel type
net/qede: add support for GENEVE tunneling offload

app/test-pmd/cmdline.c | 28 ++-
drivers/net/qede/qede_ethdev.c | 530 ++++++++++++++++++++++++++--------------
drivers/net/qede/qede_ethdev.h | 10 +-
drivers/net/qede/qede_rxtx.c | 4 +-
drivers/net/qede/qede_rxtx.h | 4 +-
examples/kni/main.c | 15 +-
6 files changed, 392 insertions(+), 199 deletions(-)
--
1.7.10.3
Rasesh Mody
2017-11-24 20:35:41 UTC
From: Harish Patil <***@cavium.com>

Enable LRO feature to work with tunnel encapsulation protocols.

Fixes: 29540be7efce ("net/qede: support LRO/TSO offloads")
Cc: ***@dpdk.org

Signed-off-by: Harish Patil <***@cavium.com>
---
drivers/net/qede/qede_ethdev.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 6f5ba2a..cc473d6 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -482,8 +482,8 @@ int qede_activate_vport(struct rte_eth_dev *eth_dev, bool flg)
/* Enable LRO in split mode */
sge_tpa_params->tpa_ipv4_en_flg = enable;
sge_tpa_params->tpa_ipv6_en_flg = enable;
- sge_tpa_params->tpa_ipv4_tunn_en_flg = false;
- sge_tpa_params->tpa_ipv6_tunn_en_flg = false;
+ sge_tpa_params->tpa_ipv4_tunn_en_flg = enable;
+ sge_tpa_params->tpa_ipv6_tunn_en_flg = enable;
/* set if tpa enable changes */
sge_tpa_params->update_tpa_en_flg = 1;
/* set if tpa parameters should be handled */
--
1.7.10.3
Rasesh Mody
2017-11-24 20:35:42 UTC
From: Harish Patil <***@cavium.com>

Add an optional cmdline parameter to enable LRO. This is useful for testing
the LRO feature, since Linux utilities such as iperf can then be run over the
KNI interface, which generates consistent packet aggregation.

Signed-off-by: Harish Patil <***@cavium.com>
---
examples/kni/main.c | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/examples/kni/main.c b/examples/kni/main.c
index 3f17385..1cedaff 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -143,6 +143,9 @@ struct kni_port_params {
/* Ports set in promiscuous mode off by default. */
static int promiscuous_on = 0;

+/* Enable LRO offload, off by default. */
+static int enable_lro = 0;
+
/* Structure type for recording kni interface specific stats */
struct kni_interface_stats {
/* number of pkts received from NIC, and sent to KNI */
@@ -360,11 +363,12 @@ struct kni_interface_stats {
static void
print_usage(const char *prgname)
{
- RTE_LOG(INFO, APP, "\nUsage: %s [EAL options] -- -p PORTMASK -P "
+ RTE_LOG(INFO, APP, "\nUsage: %s [EAL options] -- -p PORTMASK -P -l "
"[--config (port,lcore_rx,lcore_tx,lcore_kthread...)"
"[,(port,lcore_rx,lcore_tx,lcore_kthread...)]]\n"
" -p PORTMASK: hex bitmask of ports to use\n"
" -P : enable promiscuous mode\n"
+ " -l : enable LRO\n"
" --config (port,lcore_rx,lcore_tx,lcore_kthread...): "
"port and lcore configurations\n",
prgname);
@@ -545,7 +549,7 @@ struct kni_interface_stats {
opterr = 0;

/* Parse command line */
- while ((opt = getopt_long(argc, argv, "p:P", longopts,
+ while ((opt = getopt_long(argc, argv, "p:Pl", longopts,
&longindex)) != EOF) {
switch (opt) {
case 'p':
@@ -554,6 +558,9 @@ struct kni_interface_stats {
case 'P':
promiscuous_on = 1;
break;
+ case 'l':
+ enable_lro = 1;
+ break;
case 0:
if (!strncmp(longopts[longindex].name,
CMDLINE_OPT_CONFIG,
@@ -611,6 +618,10 @@ struct kni_interface_stats {
/* Initialise device and RX/TX queues */
RTE_LOG(INFO, APP, "Initialising port %u ...\n", (unsigned)port);
fflush(stdout);
+
+ if (enable_lro)
+ port_conf.rxmode.enable_lro = 1;
+
ret = rte_eth_dev_configure(port, 1, 1, &port_conf);
if (ret < 0)
rte_exit(EXIT_FAILURE, "Could not configure port%u (%d)\n",
--
1.7.10.3
Ferruh Yigit
2017-12-04 23:25:08 UTC
Post by Rasesh Mody
Add an optional cmdline parameter to enable LRO. This is useful to test
LRO feature by being able to run linux utils like iperf over KNI interface
which generates consistent packet aggregations.
Acked-by: Ferruh Yigit <***@intel.com>

I think this patch has no dependency on the rest of the patchset and can be
applied separately.
Shahaf Shuler
2017-12-05 06:05:49 UTC
Post by Ferruh Yigit
Post by Rasesh Mody
Add an optional cmdline parameter to enable LRO. This is useful to
test LRO feature by being able to run linux utils like iperf over KNI
interface which generates consistent packet aggregations.
I think this patch has no dependency for rest of the patchset and can be
separately applied.
Snipped from the patch [1]

Please use the new ethdev offloads API. There is already a series to convert the examples, including kni [2]; I think it is better to work on top of it.

[1]
@@ -611,6 +618,10 @@ struct kni_interface_stats {
/* Initialise device and RX/TX queues */
RTE_LOG(INFO, APP, "Initialising port %u ...\n", (unsigned)port);
fflush(stdout);
+
+ if (enable_lro)
+ port_conf.rxmode.enable_lro = 1;
+

[2]
Patil, Harish
2017-12-05 19:43:35 UTC
-----Original Message-----
From: Shahaf Shuler <***@mellanox.com>
Date: Monday, December 4, 2017 at 10:05 PM
To: Ferruh Yigit <***@intel.com>, "Mody, Rasesh"
<***@cavium.com>, "***@dpdk.org" <***@dpdk.org>
Cc: Harish Patil <***@cavium.com>, Dept-Eng DPDK Dev
<Dept-***@cavium.com>
Subject: RE: [dpdk-dev] [PATCH 2/5] examples/kni: add optional parameter
to enable LRO
Post by Shahaf Shuler
Post by Ferruh Yigit
Post by Rasesh Mody
Add an optional cmdline parameter to enable LRO. This is useful to
test LRO feature by being able to run linux utils like iperf over KNI
interface which generates consistent packet aggregations.
I think this patch has no dependency for rest of the patchset and can be
separately applied.
Snipped from the patch [1]
Please use the new ethdev offloads API. there is already a series to
convert the examples including kni[2], I think it is better to work on
top of it.
[1]
@@ -611,6 +618,10 @@ struct kni_interface_stats {
/* Initialise device and RX/TX queues */
RTE_LOG(INFO, APP, "Initialising port %u ...\n", (unsigned)port);
fflush(stdout);
+
+ if (enable_lro)
+ port_conf.rxmode.enable_lro = 1;
+
[2] http://dpdk.org/dev/patchwork/patch/31558/
Okay, I will
Patil, Harish
2017-12-05 19:42:32 UTC
Rasesh Mody
2017-11-24 20:35:43 UTC
From: Harish Patil <***@cavium.com>

The qede firmware expects a minimum of one RX queue to be created, otherwise
it results in a firmware exception. So a check is added to prevent that.

Fixes: ec94dbc57362 ("qede: add base driver")
Cc: ***@dpdk.org

Signed-off-by: Harish Patil <***@cavium.com>
---
drivers/net/qede/qede_ethdev.c | 8 ++++++++
1 file changed, 8 insertions(+)

diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index cc473d6..0128cec 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1233,6 +1233,14 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
}
}

+ /* We need to have min 1 RX queue. There is no min check in
+ * rte_eth_dev_configure(), so we are checking it here.
+ */
+ if (eth_dev->data->nb_rx_queues == 0) {
+ DP_ERR(edev, "Minimum one RX queue is required\n");
+ return -EINVAL;
+ }
+
/* Sanity checks and throw warnings */
if (rxmode->enable_scatter)
eth_dev->data->scattered_rx = 1;
--
1.7.10.3
Rasesh Mody
2017-11-24 20:35:44 UTC
From: Shahed Shaikh <***@cavium.com>

Replace the rx_vxlan_port command with rx_tunnel_udp_port to support both
VXLAN and GENEVE UDP ports.

Signed-off-by: Shahed Shaikh <***@cavium.com>
---
app/test-pmd/cmdline.c | 28 +++++++++++++++++++---------
1 file changed, 19 insertions(+), 9 deletions(-)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index f71d963..4b5a8cd 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -402,11 +402,11 @@ static void cmd_help_long_parsed(void *parsed_result,
"imac-tenid|imac|omac-imac-tenid|oip|iip) (tenant_id) (queue_id)\n"
" remove a tunnel filter of a port.\n\n"

- "rx_vxlan_port add (udp_port) (port_id)\n"
- " Add an UDP port for VXLAN packet filter on a port\n\n"
+ "rx_tunnel_udp_port add vxlan|geneve (udp_port) (port_id)\n"
+ " Add an UDP port for VXLAN/GENEVE packet filter on a port\n\n"

- "rx_vxlan_port rm (udp_port) (port_id)\n"
- " Remove an UDP port for VXLAN packet filter on a port\n\n"
+ "rx_tunnel_udp_port rm vxlan|geneve (udp_port) (port_id)\n"
+ " Remove an UDP port for VXLAN/GENEVE packet filter on a port\n\n"

"tx_vlan set (port_id) vlan_id[, vlan_id_outer]\n"
" Set hardware insertion of VLAN IDs (single or double VLAN "
@@ -7984,6 +7984,8 @@ struct cmd_tunnel_filter_result {
tunnel_filter_conf.tunnel_type = RTE_TUNNEL_TYPE_NVGRE;
else if (!strcmp(res->tunnel_type, "ipingre"))
tunnel_filter_conf.tunnel_type = RTE_TUNNEL_TYPE_IP_IN_GRE;
+ else if (!strcmp(res->tunnel_type, "geneve"))
+ tunnel_filter_conf.tunnel_type = RTE_TUNNEL_TYPE_GENEVE;
else {
printf("The tunnel type %s not supported.\n", res->tunnel_type);
return;
@@ -8029,7 +8031,7 @@ struct cmd_tunnel_filter_result {
ip_value);
cmdline_parse_token_string_t cmd_tunnel_filter_tunnel_type =
TOKEN_STRING_INITIALIZER(struct cmd_tunnel_filter_result,
- tunnel_type, "vxlan#nvgre#ipingre");
+ tunnel_type, "vxlan#nvgre#ipingre#geneve");

cmdline_parse_token_string_t cmd_tunnel_filter_filter_type =
TOKEN_STRING_INITIALIZER(struct cmd_tunnel_filter_result,
@@ -8046,7 +8048,7 @@ struct cmd_tunnel_filter_result {
.f = cmd_tunnel_filter_parsed,
.data = (void *)0,
.help_str = "tunnel_filter add|rm <port_id> <outer_mac> <inner_mac> "
- "<ip> <inner_vlan> vxlan|nvgre|ipingre oip|iip|imac-ivlan|"
+ "<ip> <inner_vlan> vxlan|nvgre|ipingre|geneve oip|iip|imac-ivlan|"
"imac-ivlan-tenid|imac-tenid|imac|omac-imac-tenid <tenant_id> "
"<queue_id>: Add/Rm tunnel filter of a port",
.tokens = {
@@ -8069,6 +8071,7 @@ struct cmd_tunnel_filter_result {
struct cmd_tunnel_udp_config {
cmdline_fixed_string_t cmd;
cmdline_fixed_string_t what;
+ cmdline_fixed_string_t tunnel_type;
uint16_t udp_port;
portid_t port_id;
};
@@ -8084,9 +8087,12 @@ struct cmd_tunnel_udp_config {

tunnel_udp.udp_port = res->udp_port;

- if (!strcmp(res->cmd, "rx_vxlan_port"))
+ if (!strcmp(res->tunnel_type, "vxlan"))
tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN;

+ if (!strcmp(res->tunnel_type, "geneve"))
+ tunnel_udp.prot_type = RTE_TUNNEL_TYPE_GENEVE;
+
if (!strcmp(res->what, "add"))
ret = rte_eth_dev_udp_tunnel_port_add(res->port_id,
&tunnel_udp);
@@ -8100,10 +8106,13 @@ struct cmd_tunnel_udp_config {

cmdline_parse_token_string_t cmd_tunnel_udp_config_cmd =
TOKEN_STRING_INITIALIZER(struct cmd_tunnel_udp_config,
- cmd, "rx_vxlan_port");
+ cmd, "rx_tunnel_udp_port");
cmdline_parse_token_string_t cmd_tunnel_udp_config_what =
TOKEN_STRING_INITIALIZER(struct cmd_tunnel_udp_config,
what, "add#rm");
+cmdline_parse_token_string_t cmd_tunnel_udp_config_tunnel_type =
+ TOKEN_STRING_INITIALIZER(struct cmd_tunnel_udp_config,
+ tunnel_type, "vxlan#geneve");
cmdline_parse_token_num_t cmd_tunnel_udp_config_udp_port =
TOKEN_NUM_INITIALIZER(struct cmd_tunnel_udp_config,
udp_port, UINT16);
@@ -8114,11 +8123,12 @@ struct cmd_tunnel_udp_config {
cmdline_parse_inst_t cmd_tunnel_udp_config = {
.f = cmd_tunnel_udp_config_parsed,
.data = (void *)0,
- .help_str = "rx_vxlan_port add|rm <udp_port> <port_id>: "
+ .help_str = "rx_tunnel_udp_port add|rm vxlan|geneve <udp_port> <port_id>: "
"Add/Remove a tunneling UDP port filter",
.tokens = {
(void *)&cmd_tunnel_udp_config_cmd,
(void *)&cmd_tunnel_udp_config_what,
+ (void *)&cmd_tunnel_udp_config_tunnel_type,
(void *)&cmd_tunnel_udp_config_udp_port,
(void *)&cmd_tunnel_udp_config_port_id,
NULL,
--
1.7.10.3
Ferruh Yigit
2017-12-07 00:38:34 UTC
Post by Rasesh Mody
Replace rx_vxlan_port command with rx_tunnel_udp_port to support both VXLAN
and GENEVE udp ports.
This also updates the tunnel_filter command to accept a "geneve" argument; can
you please separate that into another patch.

And to prevent these patches from holding up the PMD patches, you can send a new
version of the patchset, splitting the qede PMD patches into their own series.
Post by Rasesh Mody
---
app/test-pmd/cmdline.c | 28 +++++++++++++++++++---------
1 file changed, 19 insertions(+), 9 deletions(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index f71d963..4b5a8cd 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -402,11 +402,11 @@ static void cmd_help_long_parsed(void *parsed_result,
"imac-tenid|imac|omac-imac-tenid|oip|iip) (tenant_id) (queue_id)\n"
" remove a tunnel filter of a port.\n\n"
- "rx_vxlan_port add (udp_port) (port_id)\n"
- " Add an UDP port for VXLAN packet filter on a port\n\n"
+ "rx_tunnel_udp_port add vxlan|geneve (udp_port) (port_id)\n"
Not sure about "rx_tunnel_udp_port" command.

What do you think about something like:
"port config (port_id) udp_tunnel_port add|rm vxlan|geneve (udp_port)"

to expand the "port config (port_id) ..." command instead of introducing a new one?

<...>
Shaikh, Shahed
2017-12-07 18:18:29 UTC
Post by Patil, Harish
-----Original Message-----
Sent: Thursday, December 7, 2017 6:09 AM
Subject: Re: [dpdk-dev] [PATCH 4/5] app/testpmd: add configuration for udp
port tunnel type
Post by Rasesh Mody
Replace rx_vxlan_port command with rx_tunnel_udp_port to support both
VXLAN and GENEVE udp ports.
Also updates tunnel_filter command to accept "geneve" argument, can you
please separate to another patch.
And to prevent these patches hold PMD patches, you can send a new version of
the patchset splitting qede PMD patches into their own patchset.
Sure. We'll send a separate patchset for the qede PMD.
Post by Patil, Harish
Post by Rasesh Mody
---
app/test-pmd/cmdline.c | 28 +++++++++++++++++++---------
1 file changed, 19 insertions(+), 9 deletions(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index
f71d963..4b5a8cd 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -402,11 +402,11 @@ static void cmd_help_long_parsed(void
*parsed_result,
Post by Rasesh Mody
"imac-tenid|imac|omac-imac-tenid|oip|iip) (tenant_id)
(queue_id)\n"
Post by Rasesh Mody
" remove a tunnel filter of a port.\n\n"
- "rx_vxlan_port add (udp_port) (port_id)\n"
- " Add an UDP port for VXLAN packet filter on a
port\n\n"
Post by Rasesh Mody
+ "rx_tunnel_udp_port add vxlan|geneve (udp_port)
(port_id)\n"
Not sure about "rx_tunnel_udp_port" command.
"port config (port_id) udp_tunnel_port add|rm vxlan|geneve (udp_port)"
to expand ""port config (port_id) ..." command instead of introducing a new one?
Makes sense. I'll
Rasesh Mody
2017-11-24 20:35:45 UTC
From: Shahed Shaikh <***@qlogic.com>

This patch refactors the existing VXLAN tunneling offload code and enables
the following features for GENEVE:
- destination UDP port configuration
- checksum offloads
- filter configuration

Signed-off-by: Shahed Shaikh <***@qlogic.com>
---
drivers/net/qede/qede_ethdev.c | 518 ++++++++++++++++++++++++++--------------
drivers/net/qede/qede_ethdev.h | 10 +-
drivers/net/qede/qede_rxtx.c | 4 +-
drivers/net/qede/qede_rxtx.h | 4 +-
4 files changed, 350 insertions(+), 186 deletions(-)

diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 0128cec..68e99c5 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -15,7 +15,7 @@
static int64_t timer_period = 1;

/* VXLAN tunnel classification mapping */
-const struct _qede_vxlan_tunn_types {
+const struct _qede_udp_tunn_types {
uint16_t rte_filter_type;
enum ecore_filter_ucast_type qede_type;
enum ecore_tunn_clss qede_tunn_clss;
@@ -612,48 +612,118 @@ static void qede_set_ucast_cmn_params(struct ecore_filter_ucast *ucast)
}

static int
+qede_tunnel_update(struct qede_dev *qdev,
+ struct ecore_tunnel_info *tunn_info)
+{
+ struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+ enum _ecore_status_t rc = ECORE_INVAL;
+ struct ecore_hwfn *p_hwfn;
+ struct ecore_ptt *p_ptt;
+ int i;
+
+ for_each_hwfn(edev, i) {
+ p_hwfn = &edev->hwfns[i];
+ p_ptt = IS_PF(edev) ? ecore_ptt_acquire(p_hwfn) : NULL;
+ rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_ptt,
+ tunn_info, ECORE_SPQ_MODE_CB, NULL);
+ if (IS_PF(edev))
+ ecore_ptt_release(p_hwfn, p_ptt);
+
+ if (rc != ECORE_SUCCESS)
+ break;
+ }
+
+ return rc;
+}
+
+static int
qede_vxlan_enable(struct rte_eth_dev *eth_dev, uint8_t clss,
- bool enable, bool mask)
+ bool enable)
{
struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
enum _ecore_status_t rc = ECORE_INVAL;
- struct ecore_ptt *p_ptt;
struct ecore_tunnel_info tunn;
- struct ecore_hwfn *p_hwfn;
- int i;
+
+ if (qdev->vxlan.enable == enable)
+ return ECORE_SUCCESS;

memset(&tunn, 0, sizeof(struct ecore_tunnel_info));
- tunn.vxlan.b_update_mode = enable;
- tunn.vxlan.b_mode_enabled = mask;
+ tunn.vxlan.b_update_mode = true;
+ tunn.vxlan.b_mode_enabled = enable;
tunn.b_update_rx_cls = true;
tunn.b_update_tx_cls = true;
tunn.vxlan.tun_cls = clss;

- for_each_hwfn(edev, i) {
- p_hwfn = &edev->hwfns[i];
- if (IS_PF(edev)) {
- p_ptt = ecore_ptt_acquire(p_hwfn);
- if (!p_ptt)
- return -EAGAIN;
- } else {
- p_ptt = NULL;
- }
- rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_ptt,
- &tunn, ECORE_SPQ_MODE_CB, NULL);
- if (rc != ECORE_SUCCESS) {
- DP_ERR(edev, "Failed to update tunn_clss %u\n",
- tunn.vxlan.tun_cls);
- if (IS_PF(edev))
- ecore_ptt_release(p_hwfn, p_ptt);
- break;
- }
- }
+ tunn.vxlan_port.b_update_port = true;
+ tunn.vxlan_port.port = enable ? QEDE_VXLAN_DEF_PORT : 0;

+ rc = qede_tunnel_update(qdev, &tunn);
if (rc == ECORE_SUCCESS) {
qdev->vxlan.enable = enable;
qdev->vxlan.udp_port = (enable) ? QEDE_VXLAN_DEF_PORT : 0;
- DP_INFO(edev, "vxlan is %s\n", enable ? "enabled" : "disabled");
+ DP_INFO(edev, "vxlan is %s, UDP port = %d\n",
+ enable ? "enabled" : "disabled", qdev->vxlan.udp_port);
+ } else {
+ DP_ERR(edev, "Failed to update tunn_clss %u\n",
+ tunn.vxlan.tun_cls);
+ }
+
+ return rc;
+}
+
+static int
+qede_geneve_enable(struct rte_eth_dev *eth_dev, uint8_t clss,
+ bool enable)
+{
+ struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+ struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+ enum _ecore_status_t rc = ECORE_INVAL;
+ struct ecore_tunnel_info tunn;
+
+ memset(&tunn, 0, sizeof(struct ecore_tunnel_info));
+ tunn.l2_geneve.b_update_mode = true;
+ tunn.l2_geneve.b_mode_enabled = enable;
+ tunn.ip_geneve.b_update_mode = true;
+ tunn.ip_geneve.b_mode_enabled = enable;
+ tunn.l2_geneve.tun_cls = clss;
+ tunn.ip_geneve.tun_cls = clss;
+ tunn.b_update_rx_cls = true;
+ tunn.b_update_tx_cls = true;
+
+ tunn.geneve_port.b_update_port = true;
+ tunn.geneve_port.port = enable ? QEDE_GENEVE_DEF_PORT : 0;
+
+ rc = qede_tunnel_update(qdev, &tunn);
+ if (rc == ECORE_SUCCESS) {
+ qdev->geneve.enable = enable;
+ qdev->geneve.udp_port = (enable) ? QEDE_GENEVE_DEF_PORT : 0;
+ DP_INFO(edev, "GENEVE is %s, UDP port = %d\n",
+ enable ? "enabled" : "disabled", qdev->geneve.udp_port);
+ } else {
+ DP_ERR(edev, "Failed to update tunn_clss %u\n",
+ clss);
+ }
+
+ return rc;
+}
+
+static int
+qede_tunn_enable(struct rte_eth_dev *eth_dev, uint8_t clss,
+ enum rte_eth_tunnel_type tunn_type, bool enable)
+{
+ int rc = -EINVAL;
+
+ switch (tunn_type) {
+ case RTE_TUNNEL_TYPE_VXLAN:
+ rc = qede_vxlan_enable(eth_dev, clss, enable);
+ break;
+ case RTE_TUNNEL_TYPE_GENEVE:
+ rc = qede_geneve_enable(eth_dev, clss, enable);
+ break;
+ default:
+ rc = -EINVAL;
+ break;
}

return rc;
@@ -1367,7 +1437,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
DEV_TX_OFFLOAD_TCP_CKSUM |
DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO);
+ DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
+ DEV_TX_OFFLOAD_GENEVE_TNL_TSO);

memset(&link, 0, sizeof(struct qed_link_output));
qdev->ops->common->get_link(edev, &link);
@@ -1873,6 +1944,7 @@ static int qede_flow_ctrl_get(struct rte_eth_dev *eth_dev,
RTE_PTYPE_L4_UDP,
RTE_PTYPE_TUNNEL_VXLAN,
RTE_PTYPE_L4_FRAG,
+ RTE_PTYPE_TUNNEL_GENEVE,
/* Inner */
RTE_PTYPE_INNER_L2_ETHER,
RTE_PTYPE_INNER_L2_ETHER_VLAN,
@@ -2221,74 +2293,36 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
}

static int
-qede_conf_udp_dst_port(struct rte_eth_dev *eth_dev,
- struct rte_eth_udp_tunnel *tunnel_udp,
- bool add)
+qede_udp_dst_port_del(struct rte_eth_dev *eth_dev,
+ struct rte_eth_udp_tunnel *tunnel_udp)
{
struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
struct ecore_tunnel_info tunn; /* @DPDK */
- struct ecore_hwfn *p_hwfn;
- struct ecore_ptt *p_ptt;
uint16_t udp_port;
- int rc, i;
+ int rc;

PMD_INIT_FUNC_TRACE(edev);

memset(&tunn, 0, sizeof(tunn));
- if (tunnel_udp->prot_type == RTE_TUNNEL_TYPE_VXLAN) {
- /* Enable VxLAN tunnel if needed before UDP port update using
- * default MAC/VLAN classification.
- */
- if (add) {
- if (qdev->vxlan.udp_port == tunnel_udp->udp_port) {
- DP_INFO(edev,
- "UDP port %u was already configured\n",
- tunnel_udp->udp_port);
- return ECORE_SUCCESS;
- }
- /* Enable VXLAN if it was not enabled while adding
- * VXLAN filter.
- */
- if (!qdev->vxlan.enable) {
- rc = qede_vxlan_enable(eth_dev,
- ECORE_TUNN_CLSS_MAC_VLAN, true, true);
- if (rc != ECORE_SUCCESS) {
- DP_ERR(edev, "Failed to enable VXLAN "
- "prior to updating UDP port\n");
- return rc;
- }
- }
- udp_port = tunnel_udp->udp_port;
- } else {
- if (qdev->vxlan.udp_port != tunnel_udp->udp_port) {
- DP_ERR(edev, "UDP port %u doesn't exist\n",
- tunnel_udp->udp_port);
- return ECORE_INVAL;
- }
- udp_port = 0;
+
+ switch (tunnel_udp->prot_type) {
+ case RTE_TUNNEL_TYPE_VXLAN:
+ if (qdev->vxlan.udp_port != tunnel_udp->udp_port) {
+ DP_ERR(edev, "UDP port %u doesn't exist\n",
+ tunnel_udp->udp_port);
+ return ECORE_INVAL;
}
+ udp_port = 0;

tunn.vxlan_port.b_update_port = true;
tunn.vxlan_port.port = udp_port;
- for_each_hwfn(edev, i) {
- p_hwfn = &edev->hwfns[i];
- if (IS_PF(edev)) {
- p_ptt = ecore_ptt_acquire(p_hwfn);
- if (!p_ptt)
- return -EAGAIN;
- } else {
- p_ptt = NULL;
- }
- rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_ptt, &tunn,
- ECORE_SPQ_MODE_CB, NULL);
- if (rc != ECORE_SUCCESS) {
- DP_ERR(edev, "Unable to config UDP port %u\n",
- tunn.vxlan_port.port);
- if (IS_PF(edev))
- ecore_ptt_release(p_hwfn, p_ptt);
- return rc;
- }
+
+ rc = qede_tunnel_update(qdev, &tunn);
+ if (rc != ECORE_SUCCESS) {
+ DP_ERR(edev, "Unable to config UDP port %u\n",
+ tunn.vxlan_port.port);
+ return rc;
}

qdev->vxlan.udp_port = udp_port;
@@ -2296,26 +2330,145 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
* VXLAN filters have reached 0 then VxLAN offload can be
* disabled.
*/
- if (!add && qdev->vxlan.enable && qdev->vxlan.num_filters == 0)
+ if (qdev->vxlan.enable && qdev->vxlan.num_filters == 0)
return qede_vxlan_enable(eth_dev,
- ECORE_TUNN_CLSS_MAC_VLAN, false, true);
+ ECORE_TUNN_CLSS_MAC_VLAN, false);
+
+ break;
+
+ case RTE_TUNNEL_TYPE_GENEVE:
+ if (qdev->geneve.udp_port != tunnel_udp->udp_port) {
+ DP_ERR(edev, "UDP port %u doesn't exist\n",
+ tunnel_udp->udp_port);
+ return ECORE_INVAL;
+ }
+
+ udp_port = 0;
+
+ tunn.geneve_port.b_update_port = true;
+ tunn.geneve_port.port = udp_port;
+
+ rc = qede_tunnel_update(qdev, &tunn);
+ if (rc != ECORE_SUCCESS) {
+ DP_ERR(edev, "Unable to config UDP port %u\n",
+ tunn.geneve_port.port);
+ return rc;
+ }
+
+ qdev->geneve.udp_port = udp_port;
+ /* If the request is to delete UDP port and if the number of
+ * GENEVE filters have reached 0 then GENEVE offload can be
+ * disabled.
+ */
+ if (qdev->geneve.enable && qdev->geneve.num_filters == 0)
+ return qede_geneve_enable(eth_dev,
+ ECORE_TUNN_CLSS_MAC_VLAN, false);
+
+ break;
+
+ default:
+ return ECORE_INVAL;
}

return 0;
-}

-static int
-qede_udp_dst_port_del(struct rte_eth_dev *eth_dev,
- struct rte_eth_udp_tunnel *tunnel_udp)
-{
- return qede_conf_udp_dst_port(eth_dev, tunnel_udp, false);
}
-
static int
qede_udp_dst_port_add(struct rte_eth_dev *eth_dev,
struct rte_eth_udp_tunnel *tunnel_udp)
{
- return qede_conf_udp_dst_port(eth_dev, tunnel_udp, true);
+ struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+ struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+ struct ecore_tunnel_info tunn; /* @DPDK */
+ uint16_t udp_port;
+ int rc;
+
+ PMD_INIT_FUNC_TRACE(edev);
+
+ memset(&tunn, 0, sizeof(tunn));
+
+ switch (tunnel_udp->prot_type) {
+ case RTE_TUNNEL_TYPE_VXLAN:
+ if (qdev->vxlan.udp_port == tunnel_udp->udp_port) {
+ DP_INFO(edev,
+ "UDP port %u for VXLAN was already configured\n",
+ tunnel_udp->udp_port);
+ return ECORE_SUCCESS;
+ }
+
+ /* Enable VxLAN tunnel with default MAC/VLAN classification if
+ * it was not enabled while adding VXLAN filter before UDP port
+ * update.
+ */
+ if (!qdev->vxlan.enable) {
+ rc = qede_vxlan_enable(eth_dev,
+ ECORE_TUNN_CLSS_MAC_VLAN, true);
+ if (rc != ECORE_SUCCESS) {
+ DP_ERR(edev, "Failed to enable VXLAN "
+ "prior to updating UDP port\n");
+ return rc;
+ }
+ }
+ udp_port = tunnel_udp->udp_port;
+
+ tunn.vxlan_port.b_update_port = true;
+ tunn.vxlan_port.port = udp_port;
+
+ rc = qede_tunnel_update(qdev, &tunn);
+ if (rc != ECORE_SUCCESS) {
+ DP_ERR(edev, "Unable to config UDP port %u for VXLAN\n",
+ udp_port);
+ return rc;
+ }
+
+ DP_INFO(edev, "Updated UDP port %u for VXLAN\n", udp_port);
+
+ qdev->vxlan.udp_port = udp_port;
+ break;
+
+ case RTE_TUNNEL_TYPE_GENEVE:
+ if (qdev->geneve.udp_port == tunnel_udp->udp_port) {
+ DP_INFO(edev,
+ "UDP port %u for GENEVE was already configured\n",
+ tunnel_udp->udp_port);
+ return ECORE_SUCCESS;
+ }
+
+ /* Enable GENEVE tunnel with default MAC/VLAN classification if
+ * it was not enabled while adding GENEVE filter before UDP port
+ * update.
+ */
+ if (!qdev->geneve.enable) {
+ rc = qede_geneve_enable(eth_dev,
+ ECORE_TUNN_CLSS_MAC_VLAN, true);
+ if (rc != ECORE_SUCCESS) {
+ DP_ERR(edev, "Failed to enable GENEVE "
+ "prior to updating UDP port\n");
+ return rc;
+ }
+ }
+ udp_port = tunnel_udp->udp_port;
+
+ tunn.geneve_port.b_update_port = true;
+ tunn.geneve_port.port = udp_port;
+
+ rc = qede_tunnel_update(qdev, &tunn);
+ if (rc != ECORE_SUCCESS) {
+ DP_ERR(edev, "Unable to config UDP port %u for GENEVE\n",
+ udp_port);
+ return rc;
+ }
+
+ DP_INFO(edev, "Updated UDP port %u for GENEVE\n", udp_port);
+
+ qdev->geneve.udp_port = udp_port;
+ break;
+
+ default:
+ return ECORE_INVAL;
+ }
+
+ return 0;
}

static void qede_get_ecore_tunn_params(uint32_t filter, uint32_t *type,
@@ -2382,113 +2535,116 @@ static void qede_get_ecore_tunn_params(uint32_t filter, uint32_t *type,
return ECORE_SUCCESS;
}

-static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
- enum rte_filter_op filter_op,
- const struct rte_eth_tunnel_filter_conf *conf)
+static int
+_qede_tunn_filter_config(struct rte_eth_dev *eth_dev,
+ const struct rte_eth_tunnel_filter_conf *conf,
+ __attribute__((unused)) enum rte_filter_op filter_op,
+ enum ecore_tunn_clss *clss,
+ bool add)
{
struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
- enum ecore_filter_ucast_type type;
- enum ecore_tunn_clss clss = MAX_ECORE_TUNN_CLSS;
struct ecore_filter_ucast ucast = {0};
- char str[80];
+ enum ecore_filter_ucast_type type;
uint16_t filter_type = 0;
+ char str[80];
int rc;

- PMD_INIT_FUNC_TRACE(edev);
+ filter_type = conf->filter_type;
+ /* Determine if the given filter classification is supported */
+ qede_get_ecore_tunn_params(filter_type, &type, clss, str);
+ if (*clss == MAX_ECORE_TUNN_CLSS) {
+ DP_ERR(edev, "Unsupported filter type\n");
+ return -EINVAL;
+ }
+ /* Init tunnel ucast params */
+ rc = qede_set_ucast_tunn_cmn_param(&ucast, conf, type);
+ if (rc != ECORE_SUCCESS) {
+ DP_ERR(edev, "Unsupported Tunnel filter type 0x%x\n",
+ conf->filter_type);
+ return rc;
+ }
+ DP_INFO(edev, "Rule: \"%s\", op %d, type 0x%x\n",
+ str, filter_op, ucast.type);

- switch (filter_op) {
- case RTE_ETH_FILTER_ADD:
- if (IS_VF(edev))
- return qede_vxlan_enable(eth_dev,
- ECORE_TUNN_CLSS_MAC_VLAN, true, true);
+ ucast.opcode = add ? ECORE_FILTER_ADD : ECORE_FILTER_REMOVE;

- filter_type = conf->filter_type;
- /* Determine if the given filter classification is supported */
- qede_get_ecore_tunn_params(filter_type, &type, &clss, str);
- if (clss == MAX_ECORE_TUNN_CLSS) {
- DP_ERR(edev, "Unsupported filter type\n");
- return -EINVAL;
- }
- /* Init tunnel ucast params */
- rc = qede_set_ucast_tunn_cmn_param(&ucast, conf, type);
- if (rc != ECORE_SUCCESS) {
- DP_ERR(edev, "Unsupported VxLAN filter type 0x%x\n",
- conf->filter_type);
- return rc;
+ /* Skip MAC/VLAN if filter is based on VNI */
+ if (!(filter_type & ETH_TUNNEL_FILTER_TENID)) {
+ rc = qede_mac_int_ops(eth_dev, &ucast, add);
+ if ((rc == 0) && add) {
+ /* Enable accept anyvlan */
+ qede_config_accept_any_vlan(qdev, true);
}
- DP_INFO(edev, "Rule: \"%s\", op %d, type 0x%x\n",
- str, filter_op, ucast.type);
-
- ucast.opcode = ECORE_FILTER_ADD;
+ } else {
+ rc = qede_ucast_filter(eth_dev, &ucast, add);
+ if (rc == 0)
+ rc = ecore_filter_ucast_cmd(edev, &ucast,
+ ECORE_SPQ_MODE_CB, NULL);
+ }

- /* Skip MAC/VLAN if filter is based on VNI */
- if (!(filter_type & ETH_TUNNEL_FILTER_TENID)) {
- rc = qede_mac_int_ops(eth_dev, &ucast, 1);
- if (rc == 0) {
- /* Enable accept anyvlan */
- qede_config_accept_any_vlan(qdev, true);
- }
- } else {
- rc = qede_ucast_filter(eth_dev, &ucast, 1);
- if (rc == 0)
- rc = ecore_filter_ucast_cmd(edev, &ucast,
- ECORE_SPQ_MODE_CB, NULL);
- }
+ return rc;
+}

- if (rc != ECORE_SUCCESS)
- return rc;
+static int
+qede_tunn_filter_config(struct rte_eth_dev *eth_dev,
+ enum rte_filter_op filter_op,
+ const struct rte_eth_tunnel_filter_conf *conf)
+{
+ struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+ struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+ enum ecore_tunn_clss clss = MAX_ECORE_TUNN_CLSS;
+ bool add;
+ int rc;

- qdev->vxlan.num_filters++;
- qdev->vxlan.filter_type = filter_type;
- if (!qdev->vxlan.enable)
- return qede_vxlan_enable(eth_dev, clss, true, true);
+ PMD_INIT_FUNC_TRACE(edev);

- break;
+ switch (filter_op) {
+ case RTE_ETH_FILTER_ADD:
+ add = true;
+ break;
case RTE_ETH_FILTER_DELETE:
- if (IS_VF(edev))
- return qede_vxlan_enable(eth_dev,
- ECORE_TUNN_CLSS_MAC_VLAN, false, true);
+ add = false;
+ break;
+ default:
+ DP_ERR(edev, "Unsupported operation %d\n", filter_op);
+ return -EINVAL;
+ }

- filter_type = conf->filter_type;
- /* Determine if the given filter classification is supported */
- qede_get_ecore_tunn_params(filter_type, &type, &clss, str);
- if (clss == MAX_ECORE_TUNN_CLSS) {
- DP_ERR(edev, "Unsupported filter type\n");
- return -EINVAL;
- }
- /* Init tunnel ucast params */
- rc = qede_set_ucast_tunn_cmn_param(&ucast, conf, type);
- if (rc != ECORE_SUCCESS) {
- DP_ERR(edev, "Unsupported VxLAN filter type 0x%x\n",
- conf->filter_type);
- return rc;
- }
- DP_INFO(edev, "Rule: \"%s\", op %d, type 0x%x\n",
- str, filter_op, ucast.type);
+ if (IS_VF(edev))
+ return qede_tunn_enable(eth_dev,
+ ECORE_TUNN_CLSS_MAC_VLAN,
+ conf->tunnel_type, add);

- ucast.opcode = ECORE_FILTER_REMOVE;
+ rc = _qede_tunn_filter_config(eth_dev, conf, filter_op, &clss, add);
+ if (rc != ECORE_SUCCESS)
+ return rc;

- if (!(filter_type & ETH_TUNNEL_FILTER_TENID)) {
- rc = qede_mac_int_ops(eth_dev, &ucast, 0);
- } else {
- rc = qede_ucast_filter(eth_dev, &ucast, 0);
- if (rc == 0)
- rc = ecore_filter_ucast_cmd(edev, &ucast,
- ECORE_SPQ_MODE_CB, NULL);
+ if (add) {
+ if (conf->tunnel_type == RTE_TUNNEL_TYPE_VXLAN) {
+ qdev->vxlan.num_filters++;
+ qdev->vxlan.filter_type = conf->filter_type;
+ } else { /* GENEVE */
+ qdev->geneve.num_filters++;
+ qdev->geneve.filter_type = conf->filter_type;
}
- if (rc != ECORE_SUCCESS)
- return rc;

- qdev->vxlan.num_filters--;
+ if (!qdev->vxlan.enable || !qdev->geneve.enable)
+ return qede_tunn_enable(eth_dev, clss,
+ conf->tunnel_type,
+ true);
+ } else {
+ if (conf->tunnel_type == RTE_TUNNEL_TYPE_VXLAN)
+ qdev->vxlan.num_filters--;
+ else /*GENEVE*/
+ qdev->geneve.num_filters--;

/* Disable VXLAN if VXLAN filters become 0 */
- if (qdev->vxlan.num_filters == 0)
- return qede_vxlan_enable(eth_dev, clss, false, true);
- break;
- default:
- DP_ERR(edev, "Unsupported operation %d\n", filter_op);
- return -EINVAL;
+ if ((qdev->vxlan.num_filters == 0) ||
+ (qdev->geneve.num_filters == 0))
+ return qede_tunn_enable(eth_dev, clss,
+ conf->tunnel_type,
+ false);
}

return 0;
@@ -2508,13 +2664,13 @@ int qede_dev_filter_ctrl(struct rte_eth_dev *eth_dev,
case RTE_ETH_FILTER_TUNNEL:
switch (filter_conf->tunnel_type) {
case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_TUNNEL_TYPE_GENEVE:
DP_INFO(edev,
"Packet steering to the specified Rx queue"
- " is not supported with VXLAN tunneling");
- return(qede_vxlan_tunn_config(eth_dev, filter_op,
+ " is not supported with UDP tunneling");
+ return(qede_tunn_filter_config(eth_dev, filter_op,
filter_conf));
/* Place holders for future tunneling support */
- case RTE_TUNNEL_TYPE_GENEVE:
case RTE_TUNNEL_TYPE_TEREDO:
case RTE_TUNNEL_TYPE_NVGRE:
case RTE_TUNNEL_TYPE_IP_IN_GRE:
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index 021de5c..7e55baf 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -166,11 +166,14 @@ struct qede_fdir_info {
SLIST_HEAD(fdir_list_head, qede_fdir_entry)fdir_list_head;
};

-struct qede_vxlan_tunn {
+/* IANA assigned default UDP ports for encapsulation protocols */
+#define QEDE_VXLAN_DEF_PORT (4789)
+#define QEDE_GENEVE_DEF_PORT (6081)
+
+struct qede_udp_tunn {
bool enable;
uint16_t num_filters;
uint16_t filter_type;
-#define QEDE_VXLAN_DEF_PORT (4789)
uint16_t udp_port;
};

@@ -202,7 +205,8 @@ struct qede_dev {
SLIST_HEAD(uc_list_head, qede_ucast_entry) uc_list_head;
uint16_t num_uc_addr;
bool handle_hw_err;
- struct qede_vxlan_tunn vxlan;
+ struct qede_udp_tunn vxlan;
+ struct qede_udp_tunn geneve;
struct qede_fdir_info fdir_info;
bool vlan_strip_flg;
char drv_ver[QEDE_PMD_DRV_VER_STR_SIZE];
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 01a24e5..184f0e1 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -1792,7 +1792,9 @@ static inline uint32_t qede_rx_cqe_to_tunn_pkt_type(uint16_t flags)
if (((tx_ol_flags & PKT_TX_TUNNEL_MASK) ==
PKT_TX_TUNNEL_VXLAN) ||
((tx_ol_flags & PKT_TX_TUNNEL_MASK) ==
- PKT_TX_TUNNEL_MPLSINUDP)) {
+ PKT_TX_TUNNEL_MPLSINUDP) ||
+ ((tx_ol_flags & PKT_TX_TUNNEL_MASK) ==
+ PKT_TX_TUNNEL_GENEVE)) {
/* Check against max which is Tunnel IPv6 + ext */
if (unlikely(txq->nb_tx_avail <
ETH_TX_MIN_BDS_PER_TUNN_IPV6_WITH_EXT_PKT))
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index acf9e47..6214c97 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -73,7 +73,8 @@
ETH_RSS_IPV6 |\
ETH_RSS_NONFRAG_IPV6_TCP |\
ETH_RSS_NONFRAG_IPV6_UDP |\
- ETH_RSS_VXLAN)
+ ETH_RSS_VXLAN |\
+ ETH_RSS_GENEVE)

#define QEDE_TXQ_FLAGS ((uint32_t)ETH_TXQ_FLAGS_NOMULTSEGS)

@@ -151,6 +152,7 @@
PKT_TX_QINQ_PKT | \
PKT_TX_VLAN_PKT | \
PKT_TX_TUNNEL_VXLAN | \
+ PKT_TX_TUNNEL_GENEVE | \
PKT_TX_TUNNEL_MPLSINUDP)

#define QEDE_TX_OFFLOAD_NOTSUP_MASK \
--
1.7.10.3
Rasesh Mody
2017-12-14 06:36:00 UTC
Hi Ferruh,

This patch set adds enhancements and fixes for qede PMD.

v1..v2 - separate out QEDE PMD changes from rest.

Thanks!
Rasesh

Harish Patil (2):
net/qede: fix to enable LRO over tunnels
net/qede: fix to reject config with no Rx queue

Shahed Shaikh (1):
net/qede: add support for GENEVE tunneling offload

drivers/net/qede/qede_ethdev.c | 530 ++++++++++++++++++++++++++--------------
drivers/net/qede/qede_ethdev.h | 10 +-
drivers/net/qede/qede_rxtx.c | 4 +-
drivers/net/qede/qede_rxtx.h | 4 +-
4 files changed, 360 insertions(+), 188 deletions(-)
--
1.7.10.3
Ferruh Yigit
2017-12-15 21:05:10 UTC
Post by Rasesh Mody
Hi Ferruh,
This patch set adds enhancements and fixes for qede PMD.
v1..v2 - separate out QEDE PMD changes from rest.
Thanks!
Rasesh
net/qede: fix to enable LRO over tunnels
net/qede: fix to reject config with no Rx queue
net/qede: add support for GENEVE tunneling offload
Series applied to dpdk-next-net/master, thanks.
Rasesh Mody
2017-12-14 06:36:01 UTC
From: Harish Patil <***@cavium.com>

Enable LRO feature to work with tunnel encapsulation protocols.

Fixes: 29540be7efce ("net/qede: support LRO/TSO offloads")
Cc: ***@dpdk.org

Signed-off-by: Harish Patil <***@cavium.com>
---
drivers/net/qede/qede_ethdev.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 6f5ba2a..cc473d6 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -482,8 +482,8 @@ int qede_activate_vport(struct rte_eth_dev *eth_dev, bool flg)
/* Enable LRO in split mode */
sge_tpa_params->tpa_ipv4_en_flg = enable;
sge_tpa_params->tpa_ipv6_en_flg = enable;
- sge_tpa_params->tpa_ipv4_tunn_en_flg = false;
- sge_tpa_params->tpa_ipv6_tunn_en_flg = false;
+ sge_tpa_params->tpa_ipv4_tunn_en_flg = enable;
+ sge_tpa_params->tpa_ipv6_tunn_en_flg = enable;
/* set if tpa enable changes */
sge_tpa_params->update_tpa_en_flg = 1;
/* set if tpa parameters should be handled */
--
1.7.10.3
Rasesh Mody
2017-12-14 06:36:02 UTC
From: Harish Patil <***@cavium.com>

The qede firmware expects at least one Rx queue to be created; otherwise
it results in a firmware exception. A check is added to prevent that.

Fixes: ec94dbc57362 ("qede: add base driver")
Cc: ***@dpdk.org

Signed-off-by: Harish Patil <***@cavium.com>
---
drivers/net/qede/qede_ethdev.c | 8 ++++++++
1 file changed, 8 insertions(+)

diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index cc473d6..0128cec 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1233,6 +1233,14 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
}
}

+ /* We need to have min 1 RX queue. There is no min check in
+ * rte_eth_dev_configure(), so we are checking it here.
+ */
+ if (eth_dev->data->nb_rx_queues == 0) {
+ DP_ERR(edev, "Minimum one RX queue is required\n");
+ return -EINVAL;
+ }
+
/* Sanity checks and throw warnings */
if (rxmode->enable_scatter)
eth_dev->data->scattered_rx = 1;
--
1.7.10.3
Rasesh Mody
2017-12-14 06:36:03 UTC
From: Shahed Shaikh <***@qlogic.com>

This patch refactors existing VXLAN tunneling offload code and enables
following features for GENEVE:
- destination UDP port configuration
- checksum offloads
- filter configuration

Signed-off-by: Shahed Shaikh <***@qlogic.com>
---
drivers/net/qede/qede_ethdev.c | 518 ++++++++++++++++++++++++++--------------
drivers/net/qede/qede_ethdev.h | 10 +-
drivers/net/qede/qede_rxtx.c | 4 +-
drivers/net/qede/qede_rxtx.h | 4 +-
4 files changed, 350 insertions(+), 186 deletions(-)

diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 0128cec..68e99c5 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -15,7 +15,7 @@
static int64_t timer_period = 1;

/* VXLAN tunnel classification mapping */
-const struct _qede_vxlan_tunn_types {
+const struct _qede_udp_tunn_types {
uint16_t rte_filter_type;
enum ecore_filter_ucast_type qede_type;
enum ecore_tunn_clss qede_tunn_clss;
@@ -612,48 +612,118 @@ static void qede_set_ucast_cmn_params(struct ecore_filter_ucast *ucast)
}

static int
+qede_tunnel_update(struct qede_dev *qdev,
+ struct ecore_tunnel_info *tunn_info)
+{
+ struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+ enum _ecore_status_t rc = ECORE_INVAL;
+ struct ecore_hwfn *p_hwfn;
+ struct ecore_ptt *p_ptt;
+ int i;
+
+ for_each_hwfn(edev, i) {
+ p_hwfn = &edev->hwfns[i];
+ p_ptt = IS_PF(edev) ? ecore_ptt_acquire(p_hwfn) : NULL;
+ rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_ptt,
+ tunn_info, ECORE_SPQ_MODE_CB, NULL);
+ if (IS_PF(edev))
+ ecore_ptt_release(p_hwfn, p_ptt);
+
+ if (rc != ECORE_SUCCESS)
+ break;
+ }
+
+ return rc;
+}
+
+static int
qede_vxlan_enable(struct rte_eth_dev *eth_dev, uint8_t clss,
- bool enable, bool mask)
+ bool enable)
{
struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
enum _ecore_status_t rc = ECORE_INVAL;
- struct ecore_ptt *p_ptt;
struct ecore_tunnel_info tunn;
- struct ecore_hwfn *p_hwfn;
- int i;
+
+ if (qdev->vxlan.enable == enable)
+ return ECORE_SUCCESS;

memset(&tunn, 0, sizeof(struct ecore_tunnel_info));
- tunn.vxlan.b_update_mode = enable;
- tunn.vxlan.b_mode_enabled = mask;
+ tunn.vxlan.b_update_mode = true;
+ tunn.vxlan.b_mode_enabled = enable;
tunn.b_update_rx_cls = true;
tunn.b_update_tx_cls = true;
tunn.vxlan.tun_cls = clss;

- for_each_hwfn(edev, i) {
- p_hwfn = &edev->hwfns[i];
- if (IS_PF(edev)) {
- p_ptt = ecore_ptt_acquire(p_hwfn);
- if (!p_ptt)
- return -EAGAIN;
- } else {
- p_ptt = NULL;
- }
- rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_ptt,
- &tunn, ECORE_SPQ_MODE_CB, NULL);
- if (rc != ECORE_SUCCESS) {
- DP_ERR(edev, "Failed to update tunn_clss %u\n",
- tunn.vxlan.tun_cls);
- if (IS_PF(edev))
- ecore_ptt_release(p_hwfn, p_ptt);
- break;
- }
- }
+ tunn.vxlan_port.b_update_port = true;
+ tunn.vxlan_port.port = enable ? QEDE_VXLAN_DEF_PORT : 0;

+ rc = qede_tunnel_update(qdev, &tunn);
if (rc == ECORE_SUCCESS) {
qdev->vxlan.enable = enable;
qdev->vxlan.udp_port = (enable) ? QEDE_VXLAN_DEF_PORT : 0;
- DP_INFO(edev, "vxlan is %s\n", enable ? "enabled" : "disabled");
+ DP_INFO(edev, "vxlan is %s, UDP port = %d\n",
+ enable ? "enabled" : "disabled", qdev->vxlan.udp_port);
+ } else {
+ DP_ERR(edev, "Failed to update tunn_clss %u\n",
+ tunn.vxlan.tun_cls);
+ }
+
+ return rc;
+}
+
+static int
+qede_geneve_enable(struct rte_eth_dev *eth_dev, uint8_t clss,
+ bool enable)
+{
+ struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+ struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+ enum _ecore_status_t rc = ECORE_INVAL;
+ struct ecore_tunnel_info tunn;
+
+ memset(&tunn, 0, sizeof(struct ecore_tunnel_info));
+ tunn.l2_geneve.b_update_mode = true;
+ tunn.l2_geneve.b_mode_enabled = enable;
+ tunn.ip_geneve.b_update_mode = true;
+ tunn.ip_geneve.b_mode_enabled = enable;
+ tunn.l2_geneve.tun_cls = clss;
+ tunn.ip_geneve.tun_cls = clss;
+ tunn.b_update_rx_cls = true;
+ tunn.b_update_tx_cls = true;
+
+ tunn.geneve_port.b_update_port = true;
+ tunn.geneve_port.port = enable ? QEDE_GENEVE_DEF_PORT : 0;
+
+ rc = qede_tunnel_update(qdev, &tunn);
+ if (rc == ECORE_SUCCESS) {
+ qdev->geneve.enable = enable;
+ qdev->geneve.udp_port = (enable) ? QEDE_GENEVE_DEF_PORT : 0;
+ DP_INFO(edev, "GENEVE is %s, UDP port = %d\n",
+ enable ? "enabled" : "disabled", qdev->geneve.udp_port);
+ } else {
+ DP_ERR(edev, "Failed to update tunn_clss %u\n",
+ clss);
+ }
+
+ return rc;
+}
+
+static int
+qede_tunn_enable(struct rte_eth_dev *eth_dev, uint8_t clss,
+ enum rte_eth_tunnel_type tunn_type, bool enable)
+{
+ int rc = -EINVAL;
+
+ switch (tunn_type) {
+ case RTE_TUNNEL_TYPE_VXLAN:
+ rc = qede_vxlan_enable(eth_dev, clss, enable);
+ break;
+ case RTE_TUNNEL_TYPE_GENEVE:
+ rc = qede_geneve_enable(eth_dev, clss, enable);
+ break;
+ default:
+ rc = -EINVAL;
+ break;
}

return rc;
@@ -1367,7 +1437,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
DEV_TX_OFFLOAD_TCP_CKSUM |
DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO);
+ DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
+ DEV_TX_OFFLOAD_GENEVE_TNL_TSO);

memset(&link, 0, sizeof(struct qed_link_output));
qdev->ops->common->get_link(edev, &link);
@@ -1873,6 +1944,7 @@ static int qede_flow_ctrl_get(struct rte_eth_dev *eth_dev,
RTE_PTYPE_L4_UDP,
RTE_PTYPE_TUNNEL_VXLAN,
RTE_PTYPE_L4_FRAG,
+ RTE_PTYPE_TUNNEL_GENEVE,
/* Inner */
RTE_PTYPE_INNER_L2_ETHER,
RTE_PTYPE_INNER_L2_ETHER_VLAN,
@@ -2221,74 +2293,36 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
}

static int
-qede_conf_udp_dst_port(struct rte_eth_dev *eth_dev,
- struct rte_eth_udp_tunnel *tunnel_udp,
- bool add)
+qede_udp_dst_port_del(struct rte_eth_dev *eth_dev,
+ struct rte_eth_udp_tunnel *tunnel_udp)
{
struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
struct ecore_tunnel_info tunn; /* @DPDK */
- struct ecore_hwfn *p_hwfn;
- struct ecore_ptt *p_ptt;
uint16_t udp_port;
- int rc, i;
+ int rc;

PMD_INIT_FUNC_TRACE(edev);

memset(&tunn, 0, sizeof(tunn));
- if (tunnel_udp->prot_type == RTE_TUNNEL_TYPE_VXLAN) {
- /* Enable VxLAN tunnel if needed before UDP port update using
- * default MAC/VLAN classification.
- */
- if (add) {
- if (qdev->vxlan.udp_port == tunnel_udp->udp_port) {
- DP_INFO(edev,
- "UDP port %u was already configured\n",
- tunnel_udp->udp_port);
- return ECORE_SUCCESS;
- }
- /* Enable VXLAN if it was not enabled while adding
- * VXLAN filter.
- */
- if (!qdev->vxlan.enable) {
- rc = qede_vxlan_enable(eth_dev,
- ECORE_TUNN_CLSS_MAC_VLAN, true, true);
- if (rc != ECORE_SUCCESS) {
- DP_ERR(edev, "Failed to enable VXLAN "
- "prior to updating UDP port\n");
- return rc;
- }
- }
- udp_port = tunnel_udp->udp_port;
- } else {
- if (qdev->vxlan.udp_port != tunnel_udp->udp_port) {
- DP_ERR(edev, "UDP port %u doesn't exist\n",
- tunnel_udp->udp_port);
- return ECORE_INVAL;
- }
- udp_port = 0;
+
+ switch (tunnel_udp->prot_type) {
+ case RTE_TUNNEL_TYPE_VXLAN:
+ if (qdev->vxlan.udp_port != tunnel_udp->udp_port) {
+ DP_ERR(edev, "UDP port %u doesn't exist\n",
+ tunnel_udp->udp_port);
+ return ECORE_INVAL;
}
+ udp_port = 0;

tunn.vxlan_port.b_update_port = true;
tunn.vxlan_port.port = udp_port;
- for_each_hwfn(edev, i) {
- p_hwfn = &edev->hwfns[i];
- if (IS_PF(edev)) {
- p_ptt = ecore_ptt_acquire(p_hwfn);
- if (!p_ptt)
- return -EAGAIN;
- } else {
- p_ptt = NULL;
- }
- rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_ptt, &tunn,
- ECORE_SPQ_MODE_CB, NULL);
- if (rc != ECORE_SUCCESS) {
- DP_ERR(edev, "Unable to config UDP port %u\n",
- tunn.vxlan_port.port);
- if (IS_PF(edev))
- ecore_ptt_release(p_hwfn, p_ptt);
- return rc;
- }
+
+ rc = qede_tunnel_update(qdev, &tunn);
+ if (rc != ECORE_SUCCESS) {
+ DP_ERR(edev, "Unable to config UDP port %u\n",
+ tunn.vxlan_port.port);
+ return rc;
}

qdev->vxlan.udp_port = udp_port;
@@ -2296,26 +2330,145 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
* VXLAN filters have reached 0 then VxLAN offload can be
* disabled.
*/
- if (!add && qdev->vxlan.enable && qdev->vxlan.num_filters == 0)
+ if (qdev->vxlan.enable && qdev->vxlan.num_filters == 0)
return qede_vxlan_enable(eth_dev,
- ECORE_TUNN_CLSS_MAC_VLAN, false, true);
+ ECORE_TUNN_CLSS_MAC_VLAN, false);
+
+ break;
+
+ case RTE_TUNNEL_TYPE_GENEVE:
+ if (qdev->geneve.udp_port != tunnel_udp->udp_port) {
+ DP_ERR(edev, "UDP port %u doesn't exist\n",
+ tunnel_udp->udp_port);
+ return ECORE_INVAL;
+ }
+
+ udp_port = 0;
+
+ tunn.geneve_port.b_update_port = true;
+ tunn.geneve_port.port = udp_port;
+
+ rc = qede_tunnel_update(qdev, &tunn);
+ if (rc != ECORE_SUCCESS) {
+ DP_ERR(edev, "Unable to config UDP port %u\n",
+ tunn.geneve_port.port);
+ return rc;
+ }
+
+ qdev->geneve.udp_port = udp_port;
+ /* If the request is to delete UDP port and if the number of
+ * GENEVE filters have reached 0 then GENEVE offload can be
+ * disabled.
+ */
+ if (qdev->geneve.enable && qdev->geneve.num_filters == 0)
+ return qede_geneve_enable(eth_dev,
+ ECORE_TUNN_CLSS_MAC_VLAN, false);
+
+ break;
+
+ default:
+ return ECORE_INVAL;
}

return 0;
-}

-static int
-qede_udp_dst_port_del(struct rte_eth_dev *eth_dev,
- struct rte_eth_udp_tunnel *tunnel_udp)
-{
- return qede_conf_udp_dst_port(eth_dev, tunnel_udp, false);
}
-
static int
qede_udp_dst_port_add(struct rte_eth_dev *eth_dev,
struct rte_eth_udp_tunnel *tunnel_udp)
{
- return qede_conf_udp_dst_port(eth_dev, tunnel_udp, true);
+ struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+ struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+ struct ecore_tunnel_info tunn; /* @DPDK */
+ uint16_t udp_port;
+ int rc;
+
+ PMD_INIT_FUNC_TRACE(edev);
+
+ memset(&tunn, 0, sizeof(tunn));
+
+ switch (tunnel_udp->prot_type) {
+ case RTE_TUNNEL_TYPE_VXLAN:
+ if (qdev->vxlan.udp_port == tunnel_udp->udp_port) {
+ DP_INFO(edev,
+ "UDP port %u for VXLAN was already configured\n",
+ tunnel_udp->udp_port);
+ return ECORE_SUCCESS;
+ }
+
+ /* Enable VxLAN tunnel with default MAC/VLAN classification if
+ * it was not enabled while adding VXLAN filter before UDP port
+ * update.
+ */
+ if (!qdev->vxlan.enable) {
+ rc = qede_vxlan_enable(eth_dev,
+ ECORE_TUNN_CLSS_MAC_VLAN, true);
+ if (rc != ECORE_SUCCESS) {
+ DP_ERR(edev, "Failed to enable VXLAN "
+ "prior to updating UDP port\n");
+ return rc;
+ }
+ }
+ udp_port = tunnel_udp->udp_port;
+
+ tunn.vxlan_port.b_update_port = true;
+ tunn.vxlan_port.port = udp_port;
+
+ rc = qede_tunnel_update(qdev, &tunn);
+ if (rc != ECORE_SUCCESS) {
+ DP_ERR(edev, "Unable to config UDP port %u for VXLAN\n",
+ udp_port);
+ return rc;
+ }
+
+ DP_INFO(edev, "Updated UDP port %u for VXLAN\n", udp_port);
+
+ qdev->vxlan.udp_port = udp_port;
+ break;
+
+ case RTE_TUNNEL_TYPE_GENEVE:
+ if (qdev->geneve.udp_port == tunnel_udp->udp_port) {
+ DP_INFO(edev,
+ "UDP port %u for GENEVE was already configured\n",
+ tunnel_udp->udp_port);
+ return ECORE_SUCCESS;
+ }
+
+ /* Enable GENEVE tunnel with default MAC/VLAN classification if
+ * it was not enabled while adding GENEVE filter before UDP port
+ * update.
+ */
+ if (!qdev->geneve.enable) {
+ rc = qede_geneve_enable(eth_dev,
+ ECORE_TUNN_CLSS_MAC_VLAN, true);
+ if (rc != ECORE_SUCCESS) {
+ DP_ERR(edev, "Failed to enable GENEVE "
+ "prior to updating UDP port\n");
+ return rc;
+ }
+ }
+ udp_port = tunnel_udp->udp_port;
+
+ tunn.geneve_port.b_update_port = true;
+ tunn.geneve_port.port = udp_port;
+
+ rc = qede_tunnel_update(qdev, &tunn);
+ if (rc != ECORE_SUCCESS) {
+ DP_ERR(edev, "Unable to config UDP port %u for GENEVE\n",
+ udp_port);
+ return rc;
+ }
+
+ DP_INFO(edev, "Updated UDP port %u for GENEVE\n", udp_port);
+
+ qdev->geneve.udp_port = udp_port;
+ break;
+
+ default:
+ return ECORE_INVAL;
+ }
+
+ return 0;
}

static void qede_get_ecore_tunn_params(uint32_t filter, uint32_t *type,
@@ -2382,113 +2535,116 @@ static void qede_get_ecore_tunn_params(uint32_t filter, uint32_t *type,
return ECORE_SUCCESS;
}

-static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
- enum rte_filter_op filter_op,
- const struct rte_eth_tunnel_filter_conf *conf)
+static int
+_qede_tunn_filter_config(struct rte_eth_dev *eth_dev,
+ const struct rte_eth_tunnel_filter_conf *conf,
+ __attribute__((unused)) enum rte_filter_op filter_op,
+ enum ecore_tunn_clss *clss,
+ bool add)
{
struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
- enum ecore_filter_ucast_type type;
- enum ecore_tunn_clss clss = MAX_ECORE_TUNN_CLSS;
struct ecore_filter_ucast ucast = {0};
- char str[80];
+ enum ecore_filter_ucast_type type;
uint16_t filter_type = 0;
+ char str[80];
int rc;

- PMD_INIT_FUNC_TRACE(edev);
+ filter_type = conf->filter_type;
+ /* Determine if the given filter classification is supported */
+ qede_get_ecore_tunn_params(filter_type, &type, clss, str);
+ if (*clss == MAX_ECORE_TUNN_CLSS) {
+ DP_ERR(edev, "Unsupported filter type\n");
+ return -EINVAL;
+ }
+ /* Init tunnel ucast params */
+ rc = qede_set_ucast_tunn_cmn_param(&ucast, conf, type);
+ if (rc != ECORE_SUCCESS) {
+ DP_ERR(edev, "Unsupported Tunnel filter type 0x%x\n",
+ conf->filter_type);
+ return rc;
+ }
+ DP_INFO(edev, "Rule: \"%s\", op %d, type 0x%x\n",
+ str, filter_op, ucast.type);

- switch (filter_op) {
- case RTE_ETH_FILTER_ADD:
- if (IS_VF(edev))
- return qede_vxlan_enable(eth_dev,
- ECORE_TUNN_CLSS_MAC_VLAN, true, true);
+ ucast.opcode = add ? ECORE_FILTER_ADD : ECORE_FILTER_REMOVE;

- filter_type = conf->filter_type;
- /* Determine if the given filter classification is supported */
- qede_get_ecore_tunn_params(filter_type, &type, &clss, str);
- if (clss == MAX_ECORE_TUNN_CLSS) {
- DP_ERR(edev, "Unsupported filter type\n");
- return -EINVAL;
- }
- /* Init tunnel ucast params */
- rc = qede_set_ucast_tunn_cmn_param(&ucast, conf, type);
- if (rc != ECORE_SUCCESS) {
- DP_ERR(edev, "Unsupported VxLAN filter type 0x%x\n",
- conf->filter_type);
- return rc;
+ /* Skip MAC/VLAN if filter is based on VNI */
+ if (!(filter_type & ETH_TUNNEL_FILTER_TENID)) {
+ rc = qede_mac_int_ops(eth_dev, &ucast, add);
+ if ((rc == 0) && add) {
+ /* Enable accept anyvlan */
+ qede_config_accept_any_vlan(qdev, true);
}
- DP_INFO(edev, "Rule: \"%s\", op %d, type 0x%x\n",
- str, filter_op, ucast.type);
-
- ucast.opcode = ECORE_FILTER_ADD;
+ } else {
+ rc = qede_ucast_filter(eth_dev, &ucast, add);
+ if (rc == 0)
+ rc = ecore_filter_ucast_cmd(edev, &ucast,
+ ECORE_SPQ_MODE_CB, NULL);
+ }

- /* Skip MAC/VLAN if filter is based on VNI */
- if (!(filter_type & ETH_TUNNEL_FILTER_TENID)) {
- rc = qede_mac_int_ops(eth_dev, &ucast, 1);
- if (rc == 0) {
- /* Enable accept anyvlan */
- qede_config_accept_any_vlan(qdev, true);
- }
- } else {
- rc = qede_ucast_filter(eth_dev, &ucast, 1);
- if (rc == 0)
- rc = ecore_filter_ucast_cmd(edev, &ucast,
- ECORE_SPQ_MODE_CB, NULL);
- }
+ return rc;
+}

- if (rc != ECORE_SUCCESS)
- return rc;
+static int
+qede_tunn_filter_config(struct rte_eth_dev *eth_dev,
+ enum rte_filter_op filter_op,
+ const struct rte_eth_tunnel_filter_conf *conf)
+{
+ struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+ struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+ enum ecore_tunn_clss clss = MAX_ECORE_TUNN_CLSS;
+ bool add;
+ int rc;

- qdev->vxlan.num_filters++;
- qdev->vxlan.filter_type = filter_type;
- if (!qdev->vxlan.enable)
- return qede_vxlan_enable(eth_dev, clss, true, true);
+ PMD_INIT_FUNC_TRACE(edev);

- break;
+ switch (filter_op) {
+ case RTE_ETH_FILTER_ADD:
+ add = true;
+ break;
case RTE_ETH_FILTER_DELETE:
- if (IS_VF(edev))
- return qede_vxlan_enable(eth_dev,
- ECORE_TUNN_CLSS_MAC_VLAN, false, true);
+ add = false;
+ break;
+ default:
+ DP_ERR(edev, "Unsupported operation %d\n", filter_op);
+ return -EINVAL;
+ }

- filter_type = conf->filter_type;
- /* Determine if the given filter classification is supported */
- qede_get_ecore_tunn_params(filter_type, &type, &clss, str);
- if (clss == MAX_ECORE_TUNN_CLSS) {
- DP_ERR(edev, "Unsupported filter type\n");
- return -EINVAL;
- }
- /* Init tunnel ucast params */
- rc = qede_set_ucast_tunn_cmn_param(&ucast, conf, type);
- if (rc != ECORE_SUCCESS) {
- DP_ERR(edev, "Unsupported VxLAN filter type 0x%x\n",
- conf->filter_type);
- return rc;
- }
- DP_INFO(edev, "Rule: \"%s\", op %d, type 0x%x\n",
- str, filter_op, ucast.type);
+ if (IS_VF(edev))
+ return qede_tunn_enable(eth_dev,
+ ECORE_TUNN_CLSS_MAC_VLAN,
+ conf->tunnel_type, add);

- ucast.opcode = ECORE_FILTER_REMOVE;
+ rc = _qede_tunn_filter_config(eth_dev, conf, filter_op, &clss, add);
+ if (rc != ECORE_SUCCESS)
+ return rc;

- if (!(filter_type & ETH_TUNNEL_FILTER_TENID)) {
- rc = qede_mac_int_ops(eth_dev, &ucast, 0);
- } else {
- rc = qede_ucast_filter(eth_dev, &ucast, 0);
- if (rc == 0)
- rc = ecore_filter_ucast_cmd(edev, &ucast,
- ECORE_SPQ_MODE_CB, NULL);
+ if (add) {
+ if (conf->tunnel_type == RTE_TUNNEL_TYPE_VXLAN) {
+ qdev->vxlan.num_filters++;
+ qdev->vxlan.filter_type = conf->filter_type;
+ } else { /* GENEVE */
+ qdev->geneve.num_filters++;
+ qdev->geneve.filter_type = conf->filter_type;
}
- if (rc != ECORE_SUCCESS)
- return rc;

- qdev->vxlan.num_filters--;
+ if (!qdev->vxlan.enable || !qdev->geneve.enable)
+ return qede_tunn_enable(eth_dev, clss,
+ conf->tunnel_type,
+ true);
+ } else {
+ if (conf->tunnel_type == RTE_TUNNEL_TYPE_VXLAN)
+ qdev->vxlan.num_filters--;
+ else /*GENEVE*/
+ qdev->geneve.num_filters--;

/* Disable VXLAN if VXLAN filters become 0 */
- if (qdev->vxlan.num_filters == 0)
- return qede_vxlan_enable(eth_dev, clss, false, true);
- break;
- default:
- DP_ERR(edev, "Unsupported operation %d\n", filter_op);
- return -EINVAL;
+ if ((qdev->vxlan.num_filters == 0) ||
+ (qdev->geneve.num_filters == 0))
+ return qede_tunn_enable(eth_dev, clss,
+ conf->tunnel_type,
+ false);
}

return 0;
@@ -2508,13 +2664,13 @@ int qede_dev_filter_ctrl(struct rte_eth_dev *eth_dev,
case RTE_ETH_FILTER_TUNNEL:
switch (filter_conf->tunnel_type) {
case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_TUNNEL_TYPE_GENEVE:
DP_INFO(edev,
"Packet steering to the specified Rx queue"
- " is not supported with VXLAN tunneling");
- return(qede_vxlan_tunn_config(eth_dev, filter_op,
+ " is not supported with UDP tunneling");
+ return(qede_tunn_filter_config(eth_dev, filter_op,
filter_conf));
/* Place holders for future tunneling support */
- case RTE_TUNNEL_TYPE_GENEVE:
case RTE_TUNNEL_TYPE_TEREDO:
case RTE_TUNNEL_TYPE_NVGRE:
case RTE_TUNNEL_TYPE_IP_IN_GRE:
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index 021de5c..7e55baf 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -166,11 +166,14 @@ struct qede_fdir_info {
SLIST_HEAD(fdir_list_head, qede_fdir_entry)fdir_list_head;
};

-struct qede_vxlan_tunn {
+/* IANA assigned default UDP ports for encapsulation protocols */
+#define QEDE_VXLAN_DEF_PORT (4789)
+#define QEDE_GENEVE_DEF_PORT (6081)
+
+struct qede_udp_tunn {
bool enable;
uint16_t num_filters;
uint16_t filter_type;
-#define QEDE_VXLAN_DEF_PORT (4789)
uint16_t udp_port;
};

@@ -202,7 +205,8 @@ struct qede_dev {
SLIST_HEAD(uc_list_head, qede_ucast_entry) uc_list_head;
uint16_t num_uc_addr;
bool handle_hw_err;
- struct qede_vxlan_tunn vxlan;
+ struct qede_udp_tunn vxlan;
+ struct qede_udp_tunn geneve;
struct qede_fdir_info fdir_info;
bool vlan_strip_flg;
char drv_ver[QEDE_PMD_DRV_VER_STR_SIZE];
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 01a24e5..184f0e1 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -1792,7 +1792,9 @@ static inline uint32_t qede_rx_cqe_to_tunn_pkt_type(uint16_t flags)
if (((tx_ol_flags & PKT_TX_TUNNEL_MASK) ==
PKT_TX_TUNNEL_VXLAN) ||
((tx_ol_flags & PKT_TX_TUNNEL_MASK) ==
- PKT_TX_TUNNEL_MPLSINUDP)) {
+ PKT_TX_TUNNEL_MPLSINUDP) ||
+ ((tx_ol_flags & PKT_TX_TUNNEL_MASK) ==
+ PKT_TX_TUNNEL_GENEVE)) {
/* Check against max which is Tunnel IPv6 + ext */
if (unlikely(txq->nb_tx_avail <
ETH_TX_MIN_BDS_PER_TUNN_IPV6_WITH_EXT_PKT))
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index acf9e47..6214c97 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -73,7 +73,8 @@
ETH_RSS_IPV6 |\
ETH_RSS_NONFRAG_IPV6_TCP |\
ETH_RSS_NONFRAG_IPV6_UDP |\
- ETH_RSS_VXLAN)
+ ETH_RSS_VXLAN |\
+ ETH_RSS_GENEVE)

#define QEDE_TXQ_FLAGS ((uint32_t)ETH_TXQ_FLAGS_NOMULTSEGS)

@@ -151,6 +152,7 @@
PKT_TX_QINQ_PKT | \
PKT_TX_VLAN_PKT | \
PKT_TX_TUNNEL_VXLAN | \
+ PKT_TX_TUNNEL_GENEVE | \
PKT_TX_TUNNEL_MPLSINUDP)

#define QEDE_TX_OFFLOAD_NOTSUP_MASK \
--
1.7.10.3
Ferruh Yigit
2017-12-15 21:01:36 UTC
Post by Rasesh Mody
This patch refactors existing VXLAN tunneling offload code and enables
- destination UDP port configuration
- checksum offloads
- filter configuration
Acked-by: Rasesh Mody <***@cavium.com>


Hi Rasesh,

I am adding an explicit ack; if you disagree, please shout.
And for further patches, can you please add acks for the patches that are not
sent by the maintainer.
Mody, Rasesh
2017-12-15 21:12:26 UTC
Hi Ferruh,
Sent: Friday, December 15, 2017 1:02 PM
Post by Rasesh Mody
This patch refactors existing VXLAN tunneling offload code and enables
- destination UDP port configuration
- checksum offloads
- filter configuration
Hi Rasesh,
I am adding an explicit ack; if you disagree, please shout.
And for further patches, can you please add acks for the patches that are
not sent by the maintainer.
Please note in this patch we accidentally used Signed-off-by with qlogic.com domain for the QEDE PMD maintainer. The correct one should have been "Shahed Shaikh <***@c
Ferruh Yigit
2017-12-16 00:19:22 UTC
Post by Rasesh Mody
Hi Ferruh,
Sent: Friday, December 15, 2017 1:02 PM
Post by Rasesh Mody
This patch refactors existing VXLAN tunneling offload code and enables
- destination UDP port configuration
- checksum offloads
- filter configuration
Hi Rasesh,
I am adding an explicit ack; if you disagree, please shout.
And for further patches, can you please add acks for the patches that are
not sent by the maintainer.
email address fixed in next-net, thanks.
Post by Rasesh Mody
Thanks for adding ack.
Mody, Rasesh
2017-12-16 00:42:50 UTC
Sent: Friday, December 15, 2017 4:19 PM
Post by Rasesh Mody
Hi Ferruh,
Sent: Friday, December 15, 2017 1:02 PM
Post by Rasesh Mody
This patch refactors existing VXLAN tunneling offload code and
- destination UDP port configuration
- checksum offloads
- filter configuration
Hi Rasesh,
I am adding an explicit ack; if you disagree, please shout.
And for further patches, can you please add acks for the patches
that are not sent by the maintainer.
Please note in this patch we accidentally used Signed-off-by with
qlogic.com domain for the QEDE PMD maintainer. The correct one should
email address fixed in next-