Discussion:
[dpdk-dev] Mellanox Flow Steering
Raghav Sethi
2015-04-12 05:10:01 UTC
Hi folks,

I'm trying to use the flow steering features of the Mellanox card to
effectively use a multicore server for a benchmark.

The system has a single-port Mellanox ConnectX-3 EN, and I want to use 4 of
the 32 cores present and 4 of the 16 RX queues supported by the hardware
(i.e. one RX queue per core).

I assign an RX queue to each of the cores, but obviously without flow
steering (all the packets have the same IP and UDP headers, but different
destination MACs in the Ethernet headers) all of the packets land on a single
core. I've set up the client so that it sends packets with a different
destination MAC for each RX queue (e.g. RX queue 1 should get 10:00:00:00:00:00,
RX queue 2 should get 10:00:00:00:00:01, and so on).

I try to accomplish this by using ethtool to set flow steering rules (e.g.
ethtool -U p7p1 flow-type ether dst 10:00:00:00:00:00 action 1 loc 1,
ethtool -U p7p1 flow-type ether dst 10:00:00:00:00:01 action 2 loc 2..).
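(For reference, the rules the kernel driver accepted can be listed back with
ethtool's show-ntuple option, e.g.

  ethtool -u p7p1

which is how I confirmed they were installed.)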

As soon as I set up these rules, though, packets matching them just stop
hitting my application. All other packets go through, and removing the
rules also makes the matching packets go through again. I'm fairly sure my
application is polling all the queues, but I also tried changing the rules so
that there is a rule for every single destination RX queue (0-16), and that
doesn't work either.

If it helps, my code is based on the l2fwd sample application, and is here:
https://gist.github.com/raghavsethi/416fb77d74ccf81bd93e
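
In case it helps frame the question, the receive path follows the usual l2fwd
pattern of each lcore polling its own RX queue, roughly like this (a minimal
sketch with the port and queue numbering assumed, not the exact code in the
gist):

  #include <rte_ethdev.h>
  #include <rte_mbuf.h>

  #define BURST_SIZE 32

  /* Each worker lcore polls exactly one RX queue of port 0. */
  static int
  lcore_rx_loop(void *arg)
  {
          uint16_t queue_id = *(uint16_t *)arg;
          struct rte_mbuf *bufs[BURST_SIZE];

          for (;;) {
                  uint16_t nb_rx = rte_eth_rx_burst(0 /* port */, queue_id,
                                                    bufs, BURST_SIZE);
                  for (uint16_t i = 0; i < nb_rx; i++) {
                          /* ... process the packet ... */
                          rte_pktmbuf_free(bufs[i]);
                  }
          }
          return 0;
  }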

Also, I added the following to my /etc/init.d: options mlx4_core
log_num_mgm_entry_size=-1, and restarted the driver before any of these
tests.

Any ideas what might be causing my packets to drop? In case this is a
Mellanox issue, should I be talking to their customer support?

Best,
Raghav Sethi
Zhou, Danny
2015-04-12 11:47:42 UTC
Currently, the DPDK PMD and the NIC kernel driver cannot drive the same NIC device simultaneously. When you
use ethtool to set up flow director filters, the rules are written to the NIC via the ethtool support in the kernel driver. But when
the DPDK PMD is loaded to drive the same device, the rules previously written via ethtool/the kernel driver become invalid, so
you may have to use DPDK APIs to rewrite your rules to the NIC.
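
For illustration only, and assuming a PMD that actually exposes flow steering
(the Mellanox PMD does not), a rule equivalent to your ethtool one looks
roughly like this with the generic rte_flow API that DPDK added in later
releases; field names vary a little across versions:

  #include <string.h>
  #include <rte_ethdev.h>
  #include <rte_flow.h>

  /* Steer packets with destination MAC 10:00:00:00:00:00 to RX queue 1. */
  static struct rte_flow *
  steer_dst_mac_to_queue(uint16_t port_id)
  {
          static const uint8_t dst_mac[6] = { 0x10, 0, 0, 0, 0, 0 };
          struct rte_flow_attr attr = { .ingress = 1 };
          struct rte_flow_item_eth eth_spec, eth_mask;
          struct rte_flow_action_queue queue = { .index = 1 };
          struct rte_flow_error err;

          memset(&eth_spec, 0, sizeof(eth_spec));
          memset(&eth_mask, 0, sizeof(eth_mask));
          /* Match on the full destination MAC; newer releases also expose
           * this field as .hdr.dst_addr. */
          memcpy(eth_spec.dst.addr_bytes, dst_mac, 6);
          memset(eth_mask.dst.addr_bytes, 0xff, 6);

          struct rte_flow_item pattern[] = {
                  { .type = RTE_FLOW_ITEM_TYPE_ETH,
                    .spec = &eth_spec, .mask = &eth_mask },
                  { .type = RTE_FLOW_ITEM_TYPE_END },
          };
          struct rte_flow_action actions[] = {
                  { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                  { .type = RTE_FLOW_ACTION_TYPE_END },
          };

          return rte_flow_create(port_id, &attr, pattern, actions, &err);
  }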

The bifurcated driver was designed to provide a solution for scenarios where the kernel driver and DPDK coexist, but
it raised security concerns, so the netdev maintainers rejected it.

It should not be a Mellanox hardware problem; if you try this on an Intel NIC the result will be the same.
Raghav Sethi
2015-04-12 16:17:52 UTC
Hi Danny,

Thanks, that's helpful. However, Mellanox cards don't support Intel Flow
Director, so how would one go about installing these rules in the NIC? The
only technique the Mellanox User Manual (
http://www.mellanox.com/related-docs/prod_software/Mellanox_EN_for_Linux_User_Manual_v2_0-3_0_0.pdf)
lists for Flow Steering is the ethtool-based method.

Additionally, the mlx4_core driver is used both by the DPDK PMD and by the
kernel stack (unlike the igb_uio driver, which needs to be loaded to use a
PMD), and it seems odd that only the packets matched by the rules fail to
reach the DPDK application. That suggests to me that the NIC is acting on the
rules somehow even though a DPDK application is running.

Best,
Raghav
Zhou, Danny
2015-04-12 23:29:35 UTC
Thanks for the clarification, Olga. I assume that once the PMD is upgraded to support flow director, the rules should be set
only by the PMD while the DPDK application is running, right? Also, when the DPDK application exits, the rules previously written by
the PMD become invalid, and the user then needs to reset the rules with ethtool via the mlx4_en driver.

I think it does not make sense to allow two drivers, one in the kernel and another in user space, to control the same
NIC device simultaneously; otherwise, a control-plane synchronization mechanism is needed between the two drivers.
A single master driver solely responsible for NIC control is what I would expect.
-----Original Message-----
Sent: Monday, April 13, 2015 4:39 AM
Subject: RE: [dpdk-dev] Mellanox Flow Steering
Hi Raghav,
You are right in your observations: the Mellanox PMD and mlx4_en (the kernel driver) coexist.
When a DPDK application runs, all traffic is redirected to the DPDK application. When the DPDK application exits, the traffic is
received by the mlx4_en driver again.
The ethtool configuration you did influences only the mlx4_en driver; it does not influence the Mellanox PMD queues.
The Mellanox PMD doesn't support Flow Director, as you mention, and we are working to add it.
Currently the only way to spread traffic between different PMD queues is to use RSS (a configuration sketch follows below).
Best Regards,
Olga
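
On the application side, enabling RSS across the four queues looks roughly
like the following (a minimal sketch using the pre-rte_flow ethdev macro
names, which differ slightly between DPDK versions):

  #include <rte_ethdev.h>

  /* Spread incoming packets across 4 RX queues with RSS on the IP/UDP
   * tuple. Newer DPDK releases prefix these macros with RTE_. */
  static int
  configure_rss(uint16_t port_id)
  {
          struct rte_eth_conf port_conf = {
                  .rxmode = { .mq_mode = ETH_MQ_RX_RSS },
                  .rx_adv_conf = {
                          .rss_conf = {
                                  .rss_key = NULL, /* keep the PMD default key */
                                  .rss_hf  = ETH_RSS_IP | ETH_RSS_UDP,
                          },
                  },
          };

          return rte_eth_dev_configure(port_id, 4 /* RX queues */,
                                       4 /* TX queues */, &port_conf);
  }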
Olga Shern
2015-04-13 16:59:48 UTC
Hi Danny,

Please see below

Best Regards,
Olga

-----Original Message-----
From: Zhou, Danny [mailto:***@intel.com]
Sent: Monday, April 13, 2015 2:30 AM
To: Olga Shern; Raghav Sethi; ***@dpdk.org
Subject: RE: [dpdk-dev] Mellanox Flow Steering

Thanks for the clarification, Olga. I assume that once the PMD is upgraded to support flow director, the rules should be set only by the PMD while the DPDK application is running, right?
[Olga] Right
Also, when the DPDK application exits, the rules previously written by the PMD become invalid, and the user then needs to reset the rules with ethtool via the mlx4_en driver.
[Olga] Right

I think it does not make sense to allow two drivers, one in the kernel and another in user space, to control the same NIC device simultaneously; otherwise, a control-plane synchronization mechanism is needed between the two drivers.
[Olga] Agree :) We are looking for a solution

A single master driver solely responsible for NIC control is what I would expect.
[Olga] Or there should be a synchronization mechanism, as you mentioned before
Raghav Sethi
2015-04-13 18:01:15 UTC
Hi Olga,

Thanks for clarifying. It appears that the mlx4 driver does not allow me to
modify the RSS options: the lib/librte_pmd_mlx4/mlx4.c file states that the
RSS hash key and options cannot be modified. However, for my application I
would need the hash function to be an identity/mask function, and the key to
be the destination MAC.

Would it be correct to conclude that I cannot route packets to cores based
on destination MAC using the Mellanox card?

If so, given that I have complete control over the packet headers, is there
any other way to ensure a deterministic and evenly balanced partitioning of
the 5-tuple space across cores using the Mellanox card? My application uses
UDP, so I'm not really concerned about flows. I'm sure the default RSS
function attempts to do just this, but some links to documentation/code for
the DPDK+mlx4 default RSS behaviour would be great.
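
One workaround I'm considering, assuming the hardware uses the standard
Toeplitz RSS hash and that its 40-byte key and redirection table can be read
back (something I would still need to confirm for the mlx4 PMD), is to
precompute the hash for candidate UDP tuples on the sender and pick source
ports so that each flow lands on the intended queue. A rough sketch:

  #include <stdint.h>
  #include <string.h>
  #include <arpa/inet.h>

  /* Standard Toeplitz hash over a byte string, as specified for RSS:
   * for every set bit of the input (MSB first), XOR in the 32-bit window
   * of the key starting at that bit position. */
  static uint32_t
  toeplitz_hash(const uint8_t key[40], const uint8_t *data, size_t len)
  {
          uint32_t hash = 0;
          uint64_t window = 0;
          size_t key_pos;

          for (key_pos = 0; key_pos < 8; key_pos++)
                  window = (window << 8) | key[key_pos];

          for (size_t i = 0; i < len; i++) {
                  for (int bit = 7; bit >= 0; bit--) {
                          if (data[i] & (1u << bit))
                                  hash ^= (uint32_t)(window >> 32);
                          window <<= 1;
                  }
                  if (key_pos < 40)
                          window |= key[key_pos++]; /* slide in next key byte */
          }
          return hash;
  }

  /* Predict the RX queue for an IPv4/UDP flow: hash the
   * (src IP, dst IP, src port, dst port) tuple in network byte order, then
   * assume a default round-robin redirection table so that
   * queue = hash % nb_queues. The real key and redirection table would have
   * to be read from the NIC/PMD for this prediction to hold. */
  static unsigned int
  predict_queue(const uint8_t key[40], uint32_t src_ip, uint32_t dst_ip,
                uint16_t src_port, uint16_t dst_port, unsigned int nb_queues)
  {
          uint8_t tuple[12];
          uint32_t sip = htonl(src_ip), dip = htonl(dst_ip);
          uint16_t spt = htons(src_port), dpt = htons(dst_port);

          memcpy(tuple, &sip, 4);
          memcpy(tuple + 4, &dip, 4);
          memcpy(tuple + 8, &spt, 2);
          memcpy(tuple + 10, &dpt, 2);

          return toeplitz_hash(key, tuple, sizeof(tuple)) % nb_queues;
  }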

Best,
Raghav