Discussion:
[dpdk-dev] Is the rte_mempool library multi-thread safe?
ankit kumar
2013-12-19 10:57:59 UTC
Permalink
Hi,

I was testing rte_ring in DPDK, as it provides a multi-consumer &
multi-producer queue (lock-free, but not wait-free).
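Roughly, my test does something like this (a simplified sketch; the
names and sizes are just illustrative):

#include <rte_ring.h>
#include <rte_lcore.h>

/* shared ring, created once with the default flags (flags = 0),
 * i.e. multi-producer / multi-consumer:
 *   ring = rte_ring_create("test_ring", 1024, rte_socket_id(), 0);
 */
static struct rte_ring *ring;

static int
producer_thread(void *obj)
{
        /* multi-producer safe enqueue: 0 on success, negative on error */
        return rte_ring_mp_enqueue(ring, obj);
}

static int
consumer_thread(void *arg)
{
        void *obj;
        (void)arg;
        /* multi-consumer safe dequeue: 0 on success, negative if empty */
        return rte_ring_mc_dequeue(ring, &obj);
}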

It was working fine, but I am not sure whether memory allocation &
deallocation in the rte_mempool library is multi-thread safe.
When I allocate from multiple threads, rte_pktmbuf_alloc() fails and
returns a NULL pointer, and rte_pktmbuf_free() generates a
segmentation fault; the core file I traced indicates that the problem
is in rte_pktmbuf_free().

Any help !!!
Thomas Monjalon
2013-12-19 17:04:31 UTC
Permalink
Hello,
Post by ankit kumar
It was working fine, but I am not sure whether memory allocation &
deallocation in the rte_mempool library is multi-thread safe.
From http://dpdk.org/doc/api/rte__mempool_8h.html:
"the mempool implementation is not preemptable"
There are more explanations in this email:
http://dpdk.org/ml/archives/dev/2013-August/000402.html
--
Thomas
Peter Chen
2013-12-19 21:30:42 UTC
Permalink
Does that mean that, on the same core, we can't do rte_eth_rx_burst()
in one thread (I assume this function allocates from the mempool to
store mbufs every time it receives a packet), while another thread
calls rte_pktmbuf_alloc() from the same mempool?


On Thu, Dec 19, 2013 at 9:04 AM, Thomas Monjalon
Post by Thomas Monjalon
Hello,
Post by ankit kumar
It was working fine, but I am not sure whether memory allocation &
deallocation in the rte_mempool library is multi-thread safe.
Olivier MATZ
2013-12-20 21:28:02 UTC
Permalink
Hi Peter,
Post by Peter Chen
Does that mean that, on the same core, we can't do rte_eth_rx_burst()
in one thread (I assume this function allocates from the mempool to
store mbufs every time it receives a packet), while another thread
calls rte_pktmbuf_alloc() from the same mempool?
That's correct. In the rte_mempool code, there is a per-lcore cache:
see the local_cache field of struct rte_mempool.
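To illustrate, the "get" fast path looks roughly like this (a
simplified sketch, not the real implementation; the field names come
from my reading of rte_mempool.h and may differ between versions).
Note that there is no synchronization on the cache:

#include <rte_lcore.h>
#include <rte_mempool.h>
#include <rte_ring.h>

static inline void *
sketch_mempool_get(struct rte_mempool *mp)
{
        void *obj;
        /* both pthreads get the same value here if they share an lcore_id */
        unsigned lcore = rte_lcore_id();
        struct rte_mempool_cache *cache = &mp->local_cache[lcore];

        if (cache->len > 0) {
                /* no lock and no atomics: two threads using this cache
                 * concurrently can corrupt cache->len or return the
                 * same object twice */
                obj = cache->objs[--cache->len];
                return obj;
        }

        /* cache empty: fall back to the internal ring (MC-safe dequeue) */
        if (rte_ring_mc_dequeue(mp->ring, &obj) < 0)
                return NULL;
        return obj;
}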

If you are running several pthreads per lcore, they will share the
same cache if they have the same lcore_id, and the mempool is not
designed for that, so it can return wrong results.
The cache can be disabled (at run-time or compile-time), but you will
lose a lot of performance.
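For example, with the usual pktmbuf pool creation (the sizes below are
only illustrative, in the style of the example applications), passing
0 as the cache_size argument of rte_mempool_create() disables the
per-lcore cache at run-time:

#include <rte_mempool.h>
#include <rte_mbuf.h>
#include <rte_lcore.h>

#define NB_MBUF   8192
#define MBUF_SIZE (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)

static struct rte_mempool *
create_pool_without_cache(void)
{
        /* cache_size = 0: every alloc/free goes through the internal
         * MP/MC ring instead of a per-lcore cache, so nothing is
         * shared between pthreads running with the same lcore_id
         * (at the cost of performance) */
        return rte_mempool_create("mbuf_pool_nocache",
                        NB_MBUF,                                 /* n */
                        MBUF_SIZE,                               /* elt_size */
                        0,                                       /* cache_size */
                        sizeof(struct rte_pktmbuf_pool_private), /* priv size */
                        rte_pktmbuf_pool_init, NULL,             /* pool ctor */
                        rte_pktmbuf_init, NULL,                  /* obj ctor */
                        rte_socket_id(), 0);                     /* socket, flags */
}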

Even if you solve the problem of the cache, the mempool uses a ring
internally, so you would still experience performance issues (see the
links in Thomas' previous email).

By the way, why would you need to have several pthreads on one lcore?

Regards,
Olivier
