Discussion:
[dpdk-dev] DPDK 2.2 roadmap
Thomas Monjalon
2015-09-09 08:44:39 UTC
Hello,

The new features for release 2.2 must first be submitted before 2nd October.
They should be integrated before 23rd October.

In order to ease cooperation and integration, it would be nice to see
announcements or technical details of planned features for 2.2 or 2.3.
Then the roadmap page will be filled accordingly:
http://dpdk.org/dev/roadmap
Generally speaking, it helps to know what you are working on and what its
status is.

Thanks
O'Driscoll, Tim
2015-09-09 12:56:24 UTC
-----Original Message-----
Sent: Wednesday, September 9, 2015 9:45 AM
Subject: [dpdk-dev] DPDK 2.2 roadmap
Hello,
The new features for release 2.2 must first be submitted before 2nd October.
They should be integrated before 23rd October.
In order to ease cooperation and integration, it would be nice to see
announcements or technical details of planned features for 2.2 or 2.3.
http://dpdk.org/dev/roadmap
Generally speaking, it helps to know what you are working on and what its
status is.
I think it's a great idea to create an overall roadmap for the project for 2.2 and beyond. To kick this off, below are the features that we're hoping to submit for the 2.2 release. You should be seeing RFCs and v1 patch sets for these over the next few weeks.

Userspace Ethtool Sample App: Further enhancements to the userspace ethtool implementation that was submitted in 2.1, including: implementing an rte_ethtool shim layer based on the rte_ethdev API; providing a sample application that demonstrates/allows testing of the rte_ethtool API; implementing rte_ethtool_get_ringparam/rte_ethtool_set_ringparam; and reworking rte_eth_dev_get_reg_info() so that HW registers can be get/set in a convenient way.
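
As a rough illustration of the shim-layer idea, here is a minimal sketch of how one rte_ethtool-style call could be implemented on top of the existing rte_ethdev API (the struct layout and function shown here are illustrative assumptions, not the proposed API):

  #include <stdio.h>
  #include <rte_ethdev.h>

  /* Hypothetical ethtool-style driver info record (illustration only). */
  struct rte_ethtool_drvinfo {
      char driver[64];
  };

  /* Shim: answer a "get driver info" request via the rte_ethdev API. */
  static int
  rte_ethtool_get_drvinfo(uint8_t port_id, struct rte_ethtool_drvinfo *info)
  {
      struct rte_eth_dev_info dev_info;

      if (info == NULL)
          return -1;
      rte_eth_dev_info_get(port_id, &dev_info);
      snprintf(info->driver, sizeof(info->driver), "%s",
               dev_info.driver_name);
      return 0;
  }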

Vector PMD for fm10k: A vector PMD, similar to the one that currently exists for the Intel 10Gbps NICs, will be implemented for fm10k.

DCB for i40e & X550: DCB support will be extended to the i40e and X550 NICs.

IEEE1588 on X550 & fm10k: IEEE1588 support will be extended to the X550 and fm10k.

IEEE1588 Sample App: A sample application for an IEEE1588 client.

Cryptodev Library: This implements a cryptodev API and PMDs for the Intel Quick Assist Technology DH895xxC hardware accelerator and the AES-NI multi-buffer software implementation. See http://dpdk.org/ml/archives/dev/2015-August/022930.html for further details.

IPsec Sample App: A sample application will be provided to show how the cryptodev library can be used to implement IPsec. This will be based on the NetBSD IPsec implementation.

Interrupt Mode for i40e, fm10k & e1000: Interrupt mode support, which was added in the 2.1 release, will be extended to the i40e, fm10k and e1000.

Completion of PCI Hot Plug: PCI Hot Plug support, which was added in the 2.0 and 2.1 releases, will be extended to the Xenvirt and Vmxnet3 PMDs.

Increase Number of Next Hops for LPM (IPv4 and IPv6): The current DPDK implementation of LPM for IPv4 and IPv6 limits the number of next hops to 256, as the next hop ID is an 8-bit field. The size of the field will be increased to allow a greater number of next hops.
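
For context, a minimal sketch of the lookup path this change affects, assuming the current rte_lpm API where the next hop is returned through a uint8_t (the field this work item widens):

  #include <rte_lpm.h>

  /* Today the next hop ID is an 8-bit value, so at most 256 next hops. */
  static int
  route_lookup(struct rte_lpm *lpm, uint32_t dst_ip)
  {
      uint8_t next_hop;   /* the 8-bit field this work item widens */

      if (rte_lpm_lookup(lpm, dst_ip, &next_hop) != 0)
          return -1;      /* no matching route */
      return next_hop;
  }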

Performance Thread Sample App: A sample application will be provided showing how different threading models can be used in DPDK. It will be possible to configure the application for, and contrast forwarding performance of, different threading models including lightweight threads. See http://dpdk.org/ml/archives/dev/2015-September/023186.html for further details.

DPDK Keep-Alive: The purpose is to detect packet processing core failures (e.g. an infinite loop) and ensure that such failures are detectable by a management entity.
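
As a rough sketch of the concept (a generic heartbeat scheme, not the proposed DPDK API), each worker core could stamp a per-core counter from its packet loop while a monitor flags cores that stop making progress:

  #include <stdio.h>
  #include <stdint.h>
  #include <rte_lcore.h>
  #include <rte_cycles.h>

  /* One heartbeat timestamp per lcore, written from the packet loop.
   * Workers should stamp once at startup before monitoring begins. */
  static volatile uint64_t heartbeat[RTE_MAX_LCORE];

  /* Workers call this regularly from their processing loop. */
  static inline void
  keepalive_mark_alive(void)
  {
      heartbeat[rte_lcore_id()] = rte_get_timer_cycles();
  }

  /* A monitor (or management entity) calls this to detect stalls. */
  static void
  keepalive_check(uint64_t timeout_cycles)
  {
      unsigned lcore;
      uint64_t now = rte_get_timer_cycles();

      RTE_LCORE_FOREACH_SLAVE(lcore) {
          if (now - heartbeat[lcore] > timeout_cycles)
              printf("lcore %u appears stalled\n", lcore);
      }
  }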

Vhost Offload Feature Support: This feature will implement virtio TSO offload to help improve performance.

Common Port Statistics: This feature will extend the exposed NIC statistics, improving how they are presented so that their purpose is obvious. This functionality is based on the rte_eth_xstats_* extended stats API implemented in DPDK 2.1.
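
For reference, a short sketch of how the 2.1-era rte_eth_xstats_* API is consumed, assuming its documented shape where rte_eth_xstats_get() fills an array of name/value pairs and returns the number of entries:

  #include <stdio.h>
  #include <inttypes.h>
  #include <rte_ethdev.h>

  #define MAX_XSTATS 256

  /* Print all extended statistics exposed by a port. */
  static void
  dump_xstats(uint8_t port_id)
  {
      struct rte_eth_xstats xstats[MAX_XSTATS];
      int i, n;

      n = rte_eth_xstats_get(port_id, xstats, MAX_XSTATS);
      if (n < 0 || n > MAX_XSTATS)
          return;  /* error, or more stats than we made room for */
      for (i = 0; i < n; i++)
          printf("%s: %" PRIu64 "\n", xstats[i].name, xstats[i].value);
  }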

NFV Use-cases using Packet Framework (Edge Router): Enhancements will be made to the IP_Pipeline application and the Packet Framework libraries so that they can be used to support an Edge Router NFV use case.

Refactor EAL for Non-PCI Devices: This has been discussed extensively on the mailing list. See the RFCs for Refactor eal driver registration code (http://dpdk.org/ml/archives/dev/2015-September/023257.html) and Remove pci driver from vdevs (http://dpdk.org/ml/archives/dev/2015-August/023016.html).

Vhost Multi-Queue Support: The vhost-user library will be updated to provide multi-queue support in the host, similar to the RSS model, so that the guest driver may allocate multiple rx/tx queues, which can then be used to load balance between cores. See http://dpdk.org/ml/archives/dev/2015-August/022758.html for more details.
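
On the guest side, this looks like ordinary multi-queue port setup through the standard ethdev calls; a minimal sketch (queue and descriptor counts are arbitrary placeholders):

  #include <string.h>
  #include <rte_ethdev.h>

  /* Configure a (virtio) port with several rx/tx queue pairs. */
  static int
  setup_multiqueue(uint8_t port_id, uint16_t nb_queues,
                   struct rte_mempool *mbuf_pool)
  {
      struct rte_eth_conf conf;
      uint16_t q;

      memset(&conf, 0, sizeof(conf));
      if (rte_eth_dev_configure(port_id, nb_queues, nb_queues, &conf) != 0)
          return -1;
      for (q = 0; q < nb_queues; q++) {
          if (rte_eth_rx_queue_setup(port_id, q, 128,
                  rte_eth_dev_socket_id(port_id), NULL, mbuf_pool) != 0)
              return -1;
          if (rte_eth_tx_queue_setup(port_id, q, 128,
                  rte_eth_dev_socket_id(port_id), NULL) != 0)
              return -1;
      }
      return rte_eth_dev_start(port_id);
  }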

Virtio Vhost Optimization: Virtio and vhost performance will be optimized to allow efficient high performance packet movement between guest and host.

Config Granularity of RSS for i40e: All RSS hash and filter configurations for IPv4/v6, TCP/UDP, GRE, etc. will be implemented for i40e. This includes support for QinQ and tunneled packets.

i40e 32-bit GRE Keys: Both 24-bit and 32-bit keys for GRE will be supported for i40e.

X550 Enhancements: Support for VF TSO, more RSS table entries and per-VF RSS, and network overlay support with flow director for X550.

TSO support for fm10k, igb, ixgbe VF: Enable TCP Segmentation Offload for fm10k, igb and ixgbe VF.

ixgbe, i40e and fm10k base code updates.
Thomas Monjalon
2015-09-10 12:43:59 UTC
Post by O'Driscoll, Tim
DCB for i40e & X550: DCB support will be extended to the i40e and X550 NICs.
A patch for DCB on X550 is already applied but the release notes part was forgotten.
Post by O'Driscoll, Tim
IPsec Sample App: A sample application will be provided to show how the
cryptodev library can be used to implement IPsec. This will be based on
the NetBSD IPsec implementation.
For each API, it is really interesting to have some unit tests and some
examples to show how it should be used. However, adding an IPsec stack seems
to be beyond the needs of an API example. It may be big and difficult to
maintain when updating DPDK.
Why not spawn a new project here: http://dpdk.org/browse/ ?
Post by O'Driscoll, Tim
Refactor EAL for Non-PCI Devices: This has been discussed extensively on the mailing list. See the RFCs for Refactor eal driver registration code (http://dpdk.org/ml/archives/dev/2015-September/023257.html) and Remove pci driver from vdevs (http://dpdk.org/ml/archives/dev/2015-August/023016.html).
I have the feeling we should think of it as unification work to reduce the
differences between physical and virtual device handling.
It will probably have to be continued during the 2.3 cycle.
Post by O'Driscoll, Tim
Vhost Multi-Queue Support: The vhost-user library will be updated to provide multi-queue support in the host, similar to the RSS model, so that the guest driver may allocate multiple rx/tx queues, which can then be used to load balance between cores. See http://dpdk.org/ml/archives/dev/2015-August/022758.html for more details.
Does it require a patch on Qemu?
Post by O'Driscoll, Tim
Config Granularity of RSS for i40e: All RSS hash and filter configurations for IPv4/v6, TCP/UDP, GRE, etc. will be implemented for i40e. This includes support for QinQ and tunneled packets.
What is missing in i40e RSS/filtering?
Post by O'Driscoll, Tim
i40e 32-bit GRE Keys: Both 24-bit and 32-bit keys for GRE will be supported for i40e.
Could you please give more details?
Post by O'Driscoll, Tim
X550 Enhancements: Support for VF TSO, more RSS table entries and per-VF RSS, and network overlay support with flow director for X550.
Could you please give more details on "network overlay support with flow director"?

Thanks for sharing your plan
Thomas F Herbert
2015-09-10 13:26:53 UTC
Post by Thomas Monjalon
Post by O'Driscoll, Tim
DCB for i40e & X550: DCB support will be extended to the i40e and X550 NICs.
A patch for DCB on X550 is already applied but the release notes part was forgotten.
Post by O'Driscoll, Tim
IPsec Sample App: A sample application will be provided to show how the
cryptodev library can be used to implement IPsec. This will be based on
the NetBSD IPsec implementation.
For each API, it is really interesting to have some unit tests and some
examples to show how it should be used. However, adding an IPsec stack seems
to be beyond the needs of an API example. It may be big and difficult to
maintain when updating DPDK.
Why not spawn a new project here: http://dpdk.org/browse/ ?
Post by O'Driscoll, Tim
Refactor EAL for Non-PCI Devices: This has been discussed extensively on the mailing list. See the RFCs for Refactor eal driver registration code (http://dpdk.org/ml/archives/dev/2015-September/023257.html) and Remove pci driver from vdevs (http://dpdk.org/ml/archives/dev/2015-August/023016.html).
I have the feeling we should think of it as unification work to reduce the
differences between physical and virtual device handling.
It will probably have to be continued during the 2.3 cycle.
Post by O'Driscoll, Tim
Vhost Multi-Queue Support: The vhost-user library will be updated to provide multi-queue support in the host, similar to the RSS model, so that the guest driver may allocate multiple rx/tx queues, which can then be used to load balance between cores. See http://dpdk.org/ml/archives/dev/2015-August/022758.html for more details.
Does it require a patch on Qemu?
I believe it requires V6 of the Qemu patch. Changchun or fbl can confirm
this. Where to acquire the Qemu patch is documented in the multiqueue
patch here:
http://dpdk.org/ml/archives/dev/2015-August/022758.html
Post by Thomas Monjalon
Post by O'Driscoll, Tim
Config Granularity of RSS for i40e: All RSS hash and filter configurations for IPv4/v6, TCP/UDP, GRE, etc. will be implemented for i40e. This includes support for QinQ and tunneled packets.
What is missing in i40e RSS/filtering?
Post by O'Driscoll, Tim
i40e 32-bit GRE Keys: Both 24-bit and 32-bit keys for GRE will be supported for i40e.
Could you please give more details?
Post by O'Driscoll, Tim
X550 Enhancements: Support for VF TSO, more RSS table entries and per-VF RSS, and network overlay support with flow director for X550.
Could you please give more details on "network overlay support with flow director"?
Thanks for sharing your plan
--
Thomas F Herbert Red Hat
Flavio Leitner
2015-09-10 19:27:59 UTC
Post by Thomas Monjalon
Post by O'Driscoll, Tim
Vhost Multi-Queue Support: The vhost-user library will be updated to provide multi-queue support in the host, similar to the RSS model, so that the guest driver may allocate multiple rx/tx queues, which can then be used to load balance between cores. See http://dpdk.org/ml/archives/dev/2015-August/022758.html for more details.
Does it require a patch on Qemu?
Yes, it does. Last patchset seems to be v7.
fbl
Thomas Monjalon
2015-09-10 19:55:18 UTC
Post by Flavio Leitner
Post by Thomas Monjalon
Post by O'Driscoll, Tim
Vhost Multi-Queue Support: The vhost-user library will be updated to provide multi-queue support in the host, similar to the RSS model, so that the guest driver may allocate multiple rx/tx queues, which can then be used to load balance between cores. See http://dpdk.org/ml/archives/dev/2015-August/022758.html for more details.
Does it require a patch on Qemu?
Yes, it does. Last patchset seems to be v7.
It's better to wait until the feature is approved and integrated in Qemu first.
Yuanhan Liu
2015-09-11 01:23:07 UTC
Post by Thomas Monjalon
Post by Flavio Leitner
Post by Thomas Monjalon
Post by O'Driscoll, Tim
Vhost Multi-Queue Support: The vhost-user library will be updated to provide multi-queue support in the host, similar to the RSS model, so that the guest driver may allocate multiple rx/tx queues, which can then be used to load balance between cores. See http://dpdk.org/ml/archives/dev/2015-August/022758.html for more details.
Does it require a patch on Qemu?
Yes, it does. Last patchset seems to be v7.
It's better to wait until the feature is approved and integrated in Qemu first.
Agreed, and I'm working on it.

--yliu
O'Driscoll, Tim
2015-09-11 08:08:34 UTC
-----Original Message-----
Sent: Thursday, September 10, 2015 1:44 PM
To: O'Driscoll, Tim
Subject: Re: [dpdk-dev] DPDK 2.2 roadmap
Post by O'Driscoll, Tim
DCB for i40e & X550: DCB support will be extended to the i40e and X550
NICs.
A patch for DCB on X550 is already applied but the release notes part was forgotten.
Post by O'Driscoll, Tim
IPsec Sample App: A sample application will be provided to show how the
cryptodev library can be used to implement IPsec. This will be based on
the NetBSD IPsec implementation.
For each API, it is really interesting to have some unit tests and some
examples to show how it should be used. However, adding an IPsec stack seems
to be beyond the needs of an API example. It may be big and difficult to
maintain when updating DPDK.
Why not spawn a new project here: http://dpdk.org/browse/ ?
What we're planning is a sample application that shows how hardware and software crypto accelerators can be used to accelerate a typical workload, which happens to be IPsec. We plan to provide an RFC next week which will give further details on what's being proposed. We don't expect maintenance to be a problem, but we can discuss that further when the scope has been clarified.
Zhang, Helin
2015-09-14 08:40:20 UTC
-----Original Message-----
Sent: Thursday, September 10, 2015 8:44 PM
To: O'Driscoll, Tim
Subject: Re: [dpdk-dev] DPDK 2.2 roadmap
Post by O'Driscoll, Tim
DCB for i40e & X550: DCB support will be extended to the i40e and X550 NICs.
Yes, i40e DCB will be enabled in R2.2.
A patch for DCB on X550 is already applied but the release notes part was forgotten.
Post by O'Driscoll, Tim
IPsec Sample App: A sample application will be provided to show how
the cryptodev library can be used to implement IPsec. This will be
based on the NetBSD IPsec implementation.
For each API, it is really interesting to have some unit tests and some examples to
show how it should be used. However, adding an IPsec stack seems to be beyond
the needs of an API example. It may be big and difficult to maintain when
updating DPDK.
Why not spawn a new project here: http://dpdk.org/browse/ ?
Post by O'Driscoll, Tim
Refactor EAL for Non-PCI Devices: This has been discussed extensively on the
mailing list. See the RFCs for Refactor eal driver registration code
(http://dpdk.org/ml/archives/dev/2015-September/023257.html) and Remove
pci driver from vdevs
(http://dpdk.org/ml/archives/dev/2015-August/023016.html).
I have the feeling we should think of it as unification work to reduce the
differences between physical and virtual device handling.
It will probably have to be continued during the 2.3 cycle.
Post by O'Driscoll, Tim
Vhost Multi-Queue Support: The vhost-user library will be updated to provide
multi-queue support in the host, similar to the RSS model, so that the guest
driver may allocate multiple rx/tx queues, which can then be used to load
balance between cores. See
http://dpdk.org/ml/archives/dev/2015-August/022758.html for more details.
Does it require a patch on Qemu?
Post by O'Driscoll, Tim
Config Granularity of RSS for i40e: All RSS hash and filter configurations for
IPv4/v6, TCP/UDP, GRE, etc. will be implemented for i40e. This includes support
for QinQ and tunneled packets.
What is missing in i40e RSS/filtering?
The fields used for hashing are configured by firmware; some users want a bit
more flexibility in selecting the packet fields used for hashing.
Post by O'Driscoll, Tim
i40e 32-bit GRE Keys: Both 24-bit and 32-bit keys for GRE will be supported
for i40e.
Could you please give more details?
Currently only 24 bits of the GRE key are used for hash/filter calculation,
while 32 bits is wanted by some users. So either 24 or 32 bits can be selected
by the user from R2.2.
Post by O'Driscoll, Tim
X550 Enhancements: Support for VF TSO, more RSS table entries and per-VF RSS,
and network overlay support with flow director for X550.
Could you please give more details on "network overlay support with flow director"?
X550 hardware supports flow director on tunneled packets (VXLAN/NVGRE); this
will be enabled from R2.2.

Regards,
Helin
Thanks for sharing your plan
Thomas Monjalon
2015-09-14 08:57:26 UTC
Thanks for the details.
The roadmap is updated:
http://dpdk.org/dev/roadmap

Maybe the keep-alive feature needs some design discussion.

Anyone else want to share their plans?
Olga Shern
2015-09-15 20:11:28 UTC
Hi,

Mellanox will submit a new PMD for the ConnectX-4 and ConnectX-4 LX cards.

Best Regards,
Olga


-----Original Message-----
From: dev [mailto:dev-***@dpdk.org] On Behalf Of Thomas Monjalon
Sent: Monday, September 14, 2015 11:57 AM
To: Zhang, Helin; O'Driscoll, Tim
Cc: ***@dpdk.org
Subject: Re: [dpdk-dev] DPDK 2.2 roadmap

Thanks for the details.
The roadmap is updated:
http://dpdk.org/dev/roadmap

Maybe the keep-alive feature needs some design discussion.

Anyone else want to share their plans?

Patel, Rashmin N
2015-09-09 17:48:44 UTC
There were two line items on the 2.2 roadmap: Xen Driver and Hyper-V Driver. Can you provide some more details?

Thanks,
Rashmin

-----Original Message-----
From: dev [mailto:dev-***@dpdk.org] On Behalf Of Thomas Monjalon
Sent: Wednesday, September 09, 2015 1:45 AM
To: ***@dpdk.org
Subject: [dpdk-dev] DPDK 2.2 roadmap

Hello,

The new features for release 2.2 must first be submitted before 2nd October.
They should be integrated before 23rd October.

In order to ease cooperation and integration, it would be nice to see announcements or technical details of planned features for 2.2 or 2.3.
Then the roadmap page will be filled accordingly:
http://dpdk.org/dev/roadmap
Generally speaking, it helps to know what you are working on and what its status is.

Thanks
Stephen Hemminger
2015-09-09 18:00:18 UTC
On Wed, 9 Sep 2015 17:48:44 +0000
Post by Patel, Rashmin N
There were two line items on the 2.2 roadmap: Xen Driver and Hyper-V Driver. Can you provide some more details?
Brocade will be resubmitting the Xen netfront and Hyper-V virtual drivers.
I had been holding off because of some issues found during QA testing, and until DPDK 2.1 was released.
Now that 2.1 is out and the QA tests are looking good, I will rebase in a couple of weeks.
Matej Vido
2015-09-14 13:44:27 UTC
Hello Thomas,

CESNET would like to submit a new virtual poll mode driver for the COMBO-80G
and COMBO-100G cards. This PMD requires the libsze2 library and kernel modules
(combov3, szedata2_cv3) to be installed.

We have already sent a patch series in June
(http://dpdk.org/ml/archives/dev/2015-June/019736.html), but missed the
previous merge window. Now, we are working on a new patchset version with
scattered packets support.

Meanwhile, we have improved the firmware and would like to share the
results of our benchmark:
https://www.liberouter.org/wp-content/uploads/2015/09/pmd_szedata2_dpdk_measurement-2015-09-11.pdf

Regards,
Matej
Post by Thomas Monjalon
Thanks for the details.
http://dpdk.org/dev/roadmap
Maybe the keep-alive feature needs some design discussion.
Anyone else want to share their plans?
--
Matej Vido
CESNET, a. l. e.
Thomas Monjalon
2015-09-14 14:02:32 UTC
Hi,
Post by Matej Vido
CESNET would like to submit new virtual poll mode driver for COMBO-80G and
COMBO-100G cards. This PMD requires libsze2 library and kernel modules
(combov3, szedata2_cv3) to be installed.
The name of the driver was szedata2, right?
What does it mean? Why not have liberouter in its name?
Post by Matej Vido
We have already sent a patch serie in June
(http://dpdk.org/ml/archives/dev/2015-June/019736.html ), but missed
previous merge window. Now, we are working on a new patchset version with
scattered packets support.
Please try to split the patches in order to ease the review.
One feature per patch is a good rule.
Post by Matej Vido
Meanwhile, we have improved the firmware and would like to share the
results of our benchmark:https://www.liberouter.org/wp-content/uploads/2015/09/pmd_szedata2_dpdk_measurement-2015-09-11.pdf
Nice results!

Thanks
Matej Vido
2015-09-14 15:37:06 UTC
Hi,
Post by Thomas Monjalon
Hi,
Post by Matej Vido
CESNET would like to submit a new virtual poll mode driver for the COMBO-80G
and COMBO-100G cards. This PMD requires the libsze2 library and kernel modules
(combov3, szedata2_cv3) to be installed.
The name of the driver was szedata2, right?
What does it mean? Why not have liberouter in its name?
SZE is our Straight ZEro copy technology for fast data transfers between
hardware and RAM, 2 is its second version, and _cv3 denotes compatibility with
our third generation of FPGA cards. Liberouter is the name of our team, which
also does other, non-hardware work.

In the future we want to rewrite this PMD to be independent of the sze library
and kernel modules - that could also further improve performance a little.
Post by Thomas Monjalon
Post by Matej Vido
We have already sent a patch series in June
(http://dpdk.org/ml/archives/dev/2015-June/019736.html), but missed the
previous merge window. Now, we are working on a new patchset version with
scattered packets support.
Please try to split the patches in order to ease the review.
One feature per patch is a good rule.
Post by Matej Vido
Meanwhile, we have improved the firmware and would like to share the
results of our benchmark:
https://www.liberouter.org/wp-content/uploads/2015/09/pmd_szedata2_dpdk_measurement-2015-09-11.pdf
Nice results!
Thanks
Regards,
Matej

--
Matej Vido,
CESNET, a. l. e.
David Marchand
2015-09-15 09:16:17 UTC
Hello all,

My turn.

As far as 2.2 is concerned, I have some fixes/changes waiting to go
upstream:
- allow default MAC removal (to be discussed)
- kvargs API updates / cleanup (no change on ABI, I would say)
- VLAN filtering API fixes and associated ixgbevf/igbvf fixes (might have
an impact on ABI)
- ethdev fixes wrt the hotplug framework
- minor fixes in testpmd

After this, depending on the schedule (so this will most likely be for 2.3 or
later), I have some ideas on:
- cleanup for hotplug and maybe discussions on PCI bind/unbind operations
- providing a little tool to report information/capabilities on drivers (à la
modinfo)
- continuing work on hotplug


By the way, I have some questions for the community:

- I noticed that with hotplug support, testpmd has become *really* hungry
for mbufs and memory.
The problem comes from the "basic" assumption that we must have enough
memory/mbufs for the maximum number of ports that might be available, even
though most common test setups use far fewer.
One solution might be to rework the way mbufs are reserved:
* either we let testpmd start with a limited mbuf count, the way it worked
before edab33b1 ("app/testpmd: support port hotplug"); then, when trying to
start a port, the operation can fail if not enough mbufs are available for it
* or we can try to create one mempool per port, populated at port init /
close (?), as sketched below.
Any volunteers to rework this?
Other ideas?
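
For the second option, a minimal sketch of what a per-port pool could look
like at port attach time (the naming scheme and sizing numbers below are
placeholders):

  #include <stdio.h>
  #include <rte_ethdev.h>
  #include <rte_mbuf.h>

  /* Create a dedicated mbuf pool when a port is attached. */
  static struct rte_mempool *
  port_pool_create(uint8_t port_id, unsigned nb_mbufs)
  {
      char name[RTE_MEMPOOL_NAMESIZE];

      snprintf(name, sizeof(name), "mbuf_pool_p%u", (unsigned)port_id);
      return rte_pktmbuf_pool_create(name, nb_mbufs,
              250,  /* per-core cache size */
              0,    /* application private area size */
              RTE_MBUF_DEFAULT_BUF_SIZE,
              rte_eth_dev_socket_id(port_id));
  }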


- looking at a patch from Chao
(http://dpdk.org/ml/archives/dev/2015-August/022819.html), I think we need
to rework the way NUMA nodes are handled in DPDK.
The problem is that we rely on static arrays for some per-socket resources.
I suppose this was designed with the idea that "physical" socket indexes
are contiguous, but this is not true on systems running power8 bare metal
(where NUMA indexes can be 0, 1, 16, 17 on quad-node servers).
I suppose we can go with a mapping array (populated at the same time CPUs
are discovered), then use this mapping array and preserve all APIs, but
this might not be that trivial.
Volunteers?
Ideas?
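
One possible shape for such a mapping array (a generic sketch, not a
worked-out proposal): translate sparse physical node ids to dense internal
indexes once, at discovery time:

  #include <stdint.h>

  #define MAX_NODE_ID 64  /* highest physical node id we expect */

  /* Physical node id -> dense internal index; -1 means "not seen yet".
   * Uses a GCC designated-range initializer, as DPDK builds with GCC. */
  static int node_to_idx[MAX_NODE_ID + 1] = { [0 ... MAX_NODE_ID] = -1 };
  static unsigned nb_nodes;

  /* Called while discovering cpus: maps e.g. nodes 0,1,16,17 to 0,1,2,3
   * so per-socket resources can stay in small contiguous arrays. */
  static int
  node_index(unsigned phys_id)
  {
      if (phys_id > MAX_NODE_ID)
          return -1;
      if (node_to_idx[phys_id] < 0)
          node_to_idx[phys_id] = (int)nb_nodes++;
      return node_to_idx[phys_id];
  }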


- finally, looking at the EAL, there are still some cleanups to do.
More specifically, are there any users of the ivshmem feature in DPDK?
I can see little value in keeping the ivshmem feature in the EAL (well,
maybe because I don't use it) as it relies on hacks.
So I can see two options:
* either someone still wants it to work, in which case we need a good rework
to get rid of those hacks under #ifdef in the EAL, and the special
configuration files can disappear
* or, if nobody complains, we can schedule its deprecation and then removal.


Thanks.
--
David Marchand