Discussion:
[dpdk-dev] vhost-user technical issues
Xie, Huawei
2014-11-11 21:37:47 UTC
Hi Tetsuya:
There are two major technical issues in my mind for the vhost-user implementation.

1) memory region map
Vhost-user passes us a file fd and an offset for each memory region. Unfortunately the mmap offset it sends is badly wrong. I discovered this issue a long time ago, and also found
that I couldn't mmap the huge page file even with the correct offset (need to double check).
Just now I found that people reported this issue on Nov 3:
[Qemu-devel] [PULL 27/29] vhost-user: fix mmap offset calculation
Anyway, I turned to the same idea used in our DPDK vhost-cuse code: only use the fd for region(0) to map the whole file.
I think we should use this approach temporarily to support qemu-2.1, as it has that bug.
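Roughly what I have in mind (an untested sketch; the struct and field names below are illustrative, not the final librte_vhost API):
---------------------------------------------
/* Untested sketch: map all guest memory once through the fd of region 0,
 * instead of trusting the per-region mmap offsets from qemu-2.1. */
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

struct region {
    uint64_t guest_phys_addr;
    uint64_t memory_size;
    uint64_t userspace_addr;
    uint64_t mmap_offset;
};

static void *map_guest_mem(struct region *regions, int nregions, int *fds)
{
    uint64_t total = 0;
    void *base;
    int i;

    /* total size to map = highest guest physical address covered */
    for (i = 0; i < nregions; i++) {
        uint64_t end = regions[i].guest_phys_addr + regions[i].memory_size;
        if (end > total)
            total = end;
    }

    /* one mapping through the fd of region 0, offset 0 */
    base = mmap(NULL, total, PROT_READ | PROT_WRITE, MAP_SHARED, fds[0], 0);
    if (base == MAP_FAILED) {
        perror("mmap");
        return NULL;
    }
    /* each region's mmap_offset is then an offset into this single mapping */
    return base;
}
---------------------------------------------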

2) Which message is the indicator for vhost start/release?
Previously, vhost-cuse had the SET_BACKEND message.
What should we do for vhost-user?
SET_VRING_KICK for start?
What about release?
Unlike the kernel virtio driver, the DPDK virtio driver in the guest can be restarted.

Thoughts?

-huawei
Tetsuya Mukawa
2014-11-12 04:12:41 UTC
Hi Xie,
Post by Xie, Huawei
There are two major technical issues in my mind for vhost-user implementation.
1) memory region map
Vhost-user passes us file fd and offset for each memory region. Unfortunately the mmap offset is "very" wrong. I discovered this issue long time ago, and also found
that I couldn't mmap the huge page file even with correct offset(need double check).
Just now I find that people reported this issue on Nov 3.
[Qemu-devel] [PULL 27/29] vhost-user: fix mmap offset calculation
Anyway, I turned to the same idea used in our DPDK vhost-cuse: only use the fd for region(0) to map the whole file.
I think we should use this way temporarily to support qemu-2.1 as it has that bug.
I agree with you.
Also, we may have an issue with un-mapping a file on Linux hugetlbfs.
When I checked munmap(), it seems 'size' needs to be aligned to the hugepage size.
(I guess it may be a kernel bug. It might be fixed already.)
Please add return value checking code for munmap(); munmap() might still fail.
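Something like this is what I mean (an untested sketch; the 1GB hugepage size is just an assumption for my setup):
---------------------------------------------
/* Untested sketch: check the munmap() return value and, on failure, retry
 * with the length rounded up to the hugepage size (1GB assumed here). */
#include <stddef.h>
#include <stdio.h>
#include <sys/mman.h>

static int unmap_region(void *addr, size_t size)
{
    const size_t hugepagesize = 1024UL * 1024 * 1024;  /* assumed 1GB */

    if (munmap(addr, size) == 0)
        return 0;

    perror("munmap");
    /* hugetlbfs seems to require a hugepage-aligned length */
    size = (size + hugepagesize - 1) & ~(hugepagesize - 1);
    return munmap(addr, size);
}
---------------------------------------------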
Post by Xie, Huawei
2) what message is the indicator for vhost start/release?
Previously for vhost-cuse, it has SET_BACKEND message.
What we should do for vhost-user?
SET_VRING_KICK for start?
I think so.
Post by Xie, Huawei
What about for release?
Unlike the kernel virtio, the DPDK virtio in guest could be restarted.
Thoughts?
I guess we need to consider two types of restarting:
one is the virtio-net driver restarting, the other is the vhost-user backend
restarting.
For now, though, it makes sense to think about virtio-net driver
restarting first.

Probably we need to implement a way to let the vhost-user backend know that
the virtio-net driver has restarted.
I am not sure what a good way to let the backend know is,
but how about the following RFC?

- When the unix domain socket is closed, the vhost-user backend should treat it
as "release" (see the sketch after this list).
This is useful when QEMU itself goes away suddenly.

- Also, implement a new command like VHOST_RESET_BACKEND.
This command should be sent from the virtio-net device of QEMU when
the VIRTIO_CONFIG_STATUS_RESET register of the virtio-net device is set by
the virtio-net driver.
(Usually this register is set when the virtio-net driver is initialized or
stopped.)
It means we need to change QEMU. ;)
It seems the virtio-net PMD already sets this register when the PMD is
initialized or stopped.
So this framework should work, and can let the vhost-user backend know about
driver resets.
(And I guess we can say the same thing for the virtio-net kernel driver.)
It might be enough to close the unix domain socket instead of
implementing a new command,
but in that case we may need an auto-reconnection mechanism.

- We also need to consider the case where the DPDK application goes away suddenly without
setting the reset register.
In that case the vhost-user backend cannot know about it. Only the user (or some kind
of watchdog application on the guest) knows.
Because of this, the user (or app) has the responsibility to resolve the
situation.
To be more precise, the user should let the vhost-user backend know that the device
was released.
If the user starts another DPDK application without resolving the issue, the
new DPDK application may
access memory that the vhost-user backend is also accessing.
I guess the user can resolve the issue using "dpdk_nic_bind.py".
The script can move the virtio-net device to the kernel virtio-net driver and
return it to igb_uio.
During those steps the virtio-net device is initialized by the virtio-net
kernel driver,
so the vhost-user backend can detect the device release.
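For the socket-close case, the backend-side handling I have in mind is roughly like this (sketch only; vhost_destroy_device() is a placeholder for whatever release hook we end up with):
---------------------------------------------
/* Sketch only: treat a closed unix domain socket as "release".
 * vhost_destroy_device() stands in for the real release hook. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>

static void vhost_destroy_device(int fd) { (void)fd; /* placeholder */ }

static void handle_vhost_user_fd(int fd)
{
    char hdr[12];
    ssize_t n = recv(fd, hdr, sizeof(hdr), MSG_PEEK);

    if (n == 0) {                 /* peer closed the socket: QEMU is gone */
        vhost_destroy_device(fd); /* treat it as "release" */
        close(fd);
        return;
    }
    if (n < 0) {
        perror("recv");
        return;
    }
    /* otherwise read and dispatch the full vhost-user message as usual */
}
---------------------------------------------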

Tetsuya
Post by Xie, Huawei
-huawei
Linhaifeng
2014-11-13 06:30:31 UTC
Post by Tetsuya Mukawa
Hi Xie,
Post by Xie, Huawei
There are two major technical issues in my mind for vhost-user implementation.
1) memory region map
Vhost-user passes us file fd and offset for each memory region. Unfortunately the mmap offset is "very" wrong. I discovered this issue long time ago, and also found
that I couldn't mmap the huge page file even with correct offset(need double check).
Just now I find that people reported this issue on Nov 3.
[Qemu-devel] [PULL 27/29] vhost-user: fix mmap offset calculation
Anyway, I turned to the same idea used in our DPDK vhost-cuse: only use the fd for region(0) to map the whole file.
I think we should use this way temporarily to support qemu-2.1 as it has that bug.
I agree with you.
Also we may have an issue about un-mapping file on hugetlbfs of linux.
When I check munmap(), it seems 'size' need to be aligned by hugepage size.
(I guess it may be a kernel bug. Might be fixed already.)
Please add return value checking code for munmap().
Still munmap() might be failed.
Are you trying to munmap region 0? Region 0 does not need to be mmap'd, so it does not need to be munmap'd either.

I can munmap the other regions successfully.
Post by Tetsuya Mukawa
Post by Xie, Huawei
2) what message is the indicator for vhost start/release?
Previously for vhost-cuse, it has SET_BACKEND message.
What we should do for vhost-user?
SET_VRING_KICK for start?
I think so.
Post by Xie, Huawei
What about for release?
Unlike the kernel virtio, the DPDK virtio in guest could be restarted.
Thoughts?
I guess we need to consider 2 types of restarting.
One is virtio-net driver restarting, the other is vhost-user backend
restarting.
But, so far, it's nice to start to think about virtio-net driver
restarting first.
Probably we need to implement a way to let vhost-user backend know
virtio-net driver is restarted.
I am not sure what is good way to let vhost-user backend know it.
But how about followings RFC?
- When unix domain socket is closed, vhost-user backend should treat it
as "release".
It is useful when QEMU itself is gone suddenly.
- Also, implementing new ioctl command like VHOST_RESET_BACKEND.
This command should be sent from virtio-net device of QEMU when
VIRTIO_CONFIG_STATUS_RESET register of virtio-net device is set by
vrtio-net driver.
(Usually this register is set when virtio-net driver is initialized or
stopped.)
It means we need to change QEMU. ;)
It seems virtio-net PMD already sets this register when PMD is
initialized or stopped.
So this framework should work, and can let vhost-user backend know
driver resetting.
(And I guess we can say same things for virtio-net kernel driver.)
It might be enough to close an unix domain socket, instead of
implementing new command.
But in the case, we may need auto reconnection mechanism.
- We also need to consider DPDK application is gone suddenly without
setting reset register.
In the case, vhost-user backend cannot know it. Only user (or some kind
of watchdog
applications on guest) knows it.
Because of this, user(or app.) should have responsibility to solve this
situation.
To be more precise, user should let vhost-user backend know device
releasing.
If user starts an other DPDK application without solving the issue, the
new DPDK application may
access memory that vhost-user backend is also accessing.
I guess user can solve the issue using "dpdk_nic_bind.py".
The script can move virtio-net device to kernel virtio-net driver, and
return it to igb_uio.
While those steps, virtio-net device is initialized by virtio-net
kernel driver.
So vhost-user backend can know device releasing.
Tetsuya
Post by Xie, Huawei
-huawei
--
Regards,
Haifeng
Tetsuya Mukawa
2014-11-14 02:30:40 UTC
Hi Lin,
Post by Linhaifeng
Post by Tetsuya Mukawa
Hi Xie,
Post by Xie, Huawei
There are two major technical issues in my mind for vhost-user implementation.
1) memory region map
Vhost-user passes us file fd and offset for each memory region. Unfortunately the mmap offset is "very" wrong. I discovered this issue long time ago, and also found
that I couldn't mmap the huge page file even with correct offset(need double check).
Just now I find that people reported this issue on Nov 3.
[Qemu-devel] [PULL 27/29] vhost-user: fix mmap offset calculation
Anyway, I turned to the same idea used in our DPDK vhost-cuse: only use the fd for region(0) to map the whole file.
I think we should use this way temporarily to support qemu-2.1 as it has that bug.
I agree with you.
Also we may have an issue about un-mapping file on hugetlbfs of linux.
When I check munmap(), it seems 'size' need to be aligned by hugepage size.
(I guess it may be a kernel bug. Might be fixed already.)
Please add return value checking code for munmap().
Still munmap() might be failed.
are you munmmap the region 0? region 0 is not need to mmap so not need to munmap too.
I can munmap success with the other regions.
Could you please let me know what size you specify when you
munmap region 1?

I still fail to munmap region 1.
Here is a patch to the vhost-user test of QEMU. Could you please check it?

----------------------------------
diff --git a/tests/vhost-user-test.c b/tests/vhost-user-test.c
index 75fedf0..4e17910 100644
--- a/tests/vhost-user-test.c
+++ b/tests/vhost-user-test.c
@@ -37,7 +37,7 @@
#endif

#define QEMU_CMD_ACCEL " -machine accel=tcg"
-#define QEMU_CMD_MEM " -m 512 -object
memory-backend-file,id=mem,size=512M,"\
+#define QEMU_CMD_MEM " -m 6000 -object
memory-backend-file,id=mem,size=6000M,"\
"mem-path=%s,share=on -numa node,memdev=mem"
#define QEMU_CMD_CHR " -chardev socket,id=chr0,path=%s"
#define QEMU_CMD_NETDEV " -netdev
vhost-user,id=net0,chardev=chr0,vhostforce"
@@ -221,14 +221,16 @@ static void read_guest_mem(void)

/* check for sanity */
g_assert_cmpint(fds_num, >, 0);
- g_assert_cmpint(fds_num, ==, memory.nregions);
+ //g_assert_cmpint(fds_num, ==, memory.nregions);

+ fprintf(stderr, "%s(%d)\n", __func__, __LINE__);
/* iterate all regions */
for (i = 0; i < fds_num; i++) {
+ int ret = 0;

/* We'll check only the region statring at 0x0*/
if (memory.regions[i].guest_phys_addr != 0x0) {
- continue;
+ //continue;
}

g_assert_cmpint(memory.regions[i].memory_size, >, 1024);
@@ -237,6 +239,13 @@ static void read_guest_mem(void)

guest_mem = mmap(0, size, PROT_READ | PROT_WRITE,
MAP_SHARED, fds[i], 0);
+ fprintf(stderr, "guest_phys_addr=%lu, memory_size=%lu, "
+ "userspace_addr=%lu, mmap_offset=%lu\n",
+ memory.regions[i].guest_phys_addr,
+ memory.regions[i].memory_size,
+ memory.regions[i].userspace_addr,
+ memory.regions[i].mmap_offset);
+ fprintf(stderr, "mmap=%p, size=%lu\n", guest_mem, size);

g_assert(guest_mem != MAP_FAILED);
guest_mem += (memory.regions[i].mmap_offset / sizeof(*guest_mem));
@@ -248,7 +257,20 @@ static void read_guest_mem(void)
g_assert_cmpint(a, ==, b);
}

- munmap(guest_mem, memory.regions[i].memory_size);
+ ret = munmap(guest_mem, memory.regions[i].memory_size);
+ fprintf(stderr, "munmap=%p, size=%lu, ret=%d\n",
+ guest_mem, memory.regions[i].memory_size, ret);
+ {
+ size_t hugepagesize;
+
+ size = memory.regions[i].memory_size;
+ /* assume hugepage size is 1GB, try again */
+ hugepagesize = 1024 * 1024 * 1024;
+ size = (size + hugepagesize - 1) / hugepagesize * hugepagesize;
+ }
+ ret = munmap(guest_mem, size);
+ fprintf(stderr, "munmap=%p, size=%lu, ret=%d\n",
+ guest_mem, size, ret);
}

g_assert_cmpint(1, ==, 1);
----------------------------------
$ sudo QTEST_HUGETLBFS_PATH=/mnt/huge make check
region=0, mmap=0x2aaac0000000, size=6291456000
region=0, munmap=0x2aab80000000, size=3070230528, ret=-1 << failed
region=0, munmap=0x2aab80000000, size=3221225472, ret=0
region=1, mmap=0x2aab80000000, size=655360
region=1, munmap=0x2aab80000000, size=655360, ret=-1 << failed
region=1, munmap=0x2aab80000000, size=1073741824, ret=0


Thanks,
Tetsuya
Linhaifeng
2014-11-14 03:13:41 UTC
Post by Tetsuya Mukawa
Hi Lin,
Post by Linhaifeng
Post by Tetsuya Mukawa
Hi Xie,
Post by Xie, Huawei
There are two major technical issues in my mind for vhost-user implementation.
1) memory region map
Vhost-user passes us file fd and offset for each memory region. Unfortunately the mmap offset is "very" wrong. I discovered this issue long time ago, and also found
that I couldn't mmap the huge page file even with correct offset(need double check).
Just now I find that people reported this issue on Nov 3.
[Qemu-devel] [PULL 27/29] vhost-user: fix mmap offset calculation
Anyway, I turned to the same idea used in our DPDK vhost-cuse: only use the fd for region(0) to map the whole file.
I think we should use this way temporarily to support qemu-2.1 as it has that bug.
I agree with you.
Also we may have an issue about un-mapping file on hugetlbfs of linux.
When I check munmap(), it seems 'size' need to be aligned by hugepage size.
(I guess it may be a kernel bug. Might be fixed already.)
Please add return value checking code for munmap().
Still munmap() might be failed.
are you munmmap the region 0? region 0 is not need to mmap so not need to munmap too.
I can munmap success with the other regions.
Could you please let me know how many size do you specify when you
munmap region1?
2G (region->memory_size + region.memory->offset)
Post by Tetsuya Mukawa
I still fail to munmap region1.
Here is a patch to vhost-user test of QEMU. Could you please check it?
----------------------------------
diff --git a/tests/vhost-user-test.c b/tests/vhost-user-test.c
index 75fedf0..4e17910 100644
--- a/tests/vhost-user-test.c
+++ b/tests/vhost-user-test.c
@@ -37,7 +37,7 @@
#endif
#define QEMU_CMD_ACCEL " -machine accel=tcg"
-#define QEMU_CMD_MEM " -m 512 -object
memory-backend-file,id=mem,size=512M,"\
+#define QEMU_CMD_MEM " -m 6000 -object
memory-backend-file,id=mem,size=6000M,"\
"mem-path=%s,share=on -numa node,memdev=mem"
#define QEMU_CMD_CHR " -chardev socket,id=chr0,path=%s"
#define QEMU_CMD_NETDEV " -netdev
vhost-user,id=net0,chardev=chr0,vhostforce"
@@ -221,14 +221,16 @@ static void read_guest_mem(void)
/* check for sanity */
g_assert_cmpint(fds_num, >, 0);
- g_assert_cmpint(fds_num, ==, memory.nregions);
+ //g_assert_cmpint(fds_num, ==, memory.nregions);
+ fprintf(stderr, "%s(%d)\n", __func__, __LINE__);
/* iterate all regions */
for (i = 0; i < fds_num; i++) {
+ int ret = 0;
/* We'll check only the region statring at 0x0*/
if (memory.regions[i].guest_phys_addr != 0x0) {
- continue;
+ //continue;
}
if (memory.regions[i].guest_phys_addr == 0x0) {
close(fds[i]);
continue;
}
Post by Tetsuya Mukawa
g_assert_cmpint(memory.regions[i].memory_size, >, 1024);
@@ -237,6 +239,13 @@ static void read_guest_mem(void)
guest_mem = mmap(0, size, PROT_READ | PROT_WRITE,
MAP_SHARED, fds[i], 0);
+ fprintf(stderr, "guest_phys_addr=%lu, memory_size=%lu, "
+ "userspace_addr=%lu, mmap_offset=%lu\n",
+ memory.regions[i].guest_phys_addr,
+ memory.regions[i].memory_size,
+ memory.regions[i].userspace_addr,
+ memory.regions[i].mmap_offset);
+ fprintf(stderr, "mmap=%p, size=%lu\n", guest_mem, size);
g_assert(guest_mem != MAP_FAILED);
guest_mem += (memory.regions[i].mmap_offset / sizeof(*guest_mem));
@@ -248,7 +257,20 @@ static void read_guest_mem(void)
g_assert_cmpint(a, ==, b);
}
- munmap(guest_mem, memory.regions[i].memory_size);
+ ret = munmap(guest_mem, memory.regions[i].memory_size);
+ fprintf(stderr, "munmap=%p, size=%lu, ret=%d\n",
+ guest_mem, memory.regions[i].memory_size, ret);
+ {
+ size_t hugepagesize;
+
+ size = memory.regions[i].memory_size;
+ /* assume hugepage size is 1GB, try again */
+ hugepagesize = 1024 * 1024 * 1024;
+ size = (size + hugepagesize - 1) / hugepagesize * hugepagesize;
+ }
The size should be the same as the one used for mmap, and you also need:
guest_mem -= (memory.regions[i].mmap_offset / sizeof(*guest_mem));
Post by Tetsuya Mukawa
+ ret = munmap(guest_mem, size);
+ fprintf(stderr, "munmap=%p, size=%lu, ret=%d\n",
+ guest_mem, size, ret);
}
g_assert_cmpint(1, ==, 1);
----------------------------------
$ sudo QTEST_HUGETLBFS_PATH=/mnt/huge make check
region=0, mmap=0x2aaac0000000, size=6291456000
region=0, munmap=0x2aab80000000, size=3070230528, ret=-1 << failed
region=0, munmap=0x2aab80000000, size=3221225472, ret=0
region=1, mmap=0x2aab80000000, size=655360
region=1, munmap=0x2aab80000000, size=655360, ret=-1 << failed
region=1, munmap=0x2aab80000000, size=1073741824, ret=0
Thanks,
Tetsuya
.
--
Regards,
Haifeng
Tetsuya Mukawa
2014-11-14 03:40:19 UTC
Hi Lin,
Post by Linhaifeng
size should be same as mmap and
guest_mem -= (memory.regions[i].mmap_offset / sizeof(*guest_mem));
Thanks, it should be.
How about the following patch?

-------------------------------------------------------
diff --git a/tests/vhost-user-test.c b/tests/vhost-user-test.c
index 75fedf0..be4b171 100644
--- a/tests/vhost-user-test.c
+++ b/tests/vhost-user-test.c
@@ -37,7 +37,7 @@
#endif

#define QEMU_CMD_ACCEL " -machine accel=tcg"
-#define QEMU_CMD_MEM " -m 512 -object
memory-backend-file,id=mem,size=512M,"\
+#define QEMU_CMD_MEM " -m 6000 -object
memory-backend-file,id=mem,size=6000M,"\
"mem-path=%s,share=on -numa node,memdev=mem"
#define QEMU_CMD_CHR " -chardev socket,id=chr0,path=%s"
#define QEMU_CMD_NETDEV " -netdev
vhost-user,id=net0,chardev=chr0,vhostforce"
@@ -221,13 +221,16 @@ static void read_guest_mem(void)

/* check for sanity */
g_assert_cmpint(fds_num, >, 0);
- g_assert_cmpint(fds_num, ==, memory.nregions);
+ //g_assert_cmpint(fds_num, ==, memory.nregions);

+ fprintf(stderr, "%s(%d)\n", __func__, __LINE__);
/* iterate all regions */
for (i = 0; i < fds_num; i++) {
+ int ret = 0;

/* We'll check only the region statring at 0x0*/
- if (memory.regions[i].guest_phys_addr != 0x0) {
+ if (memory.regions[i].guest_phys_addr == 0x0) {
+ close(fds[i]);
continue;
}

@@ -237,6 +240,7 @@ static void read_guest_mem(void)

guest_mem = mmap(0, size, PROT_READ | PROT_WRITE,
MAP_SHARED, fds[i], 0);
+ fprintf(stderr, "region=%d, mmap=%p, size=%lu\n", i, guest_mem, size);

g_assert(guest_mem != MAP_FAILED);
guest_mem += (memory.regions[i].mmap_offset / sizeof(*guest_mem));
@@ -247,8 +251,10 @@ static void read_guest_mem(void)

g_assert_cmpint(a, ==, b);
}
-
- munmap(guest_mem, memory.regions[i].memory_size);
+ guest_mem -= (memory.regions[i].mmap_offset / sizeof(*guest_mem));
+ ret = munmap(guest_mem, memory.regions[i].memory_size);
+ fprintf(stderr, "region=%d, munmap=%p, size=%lu, ret=%d\n",
+ i, guest_mem, size, ret);
}

g_assert_cmpint(1, ==, 1);
-------------------------------------------------------
I am using 1GB hugepage size.

$ sudo QTEST_HUGETLBFS_PATH=/mnt/huge make check
region=0, mmap=0x2aaac0000000, size=6291456000
region=0, munmap=0x2aaac0000000, size=6291456000, ret=-1 << failed

6291456000 is not aligned to 1GB.
When I specify 4096MB as the guest memory size, munmap() doesn't return an
error, as shown below.

$ sudo QTEST_HUGETLBFS_PATH=/mnt/huge make check
region=0, mmap=0x2aaac0000000, size=4294967296
region=0, munmap=0x2aaac0000000, size=4294967296, ret=0

Thanks,
Tetsuya
Tetsuya Mukawa
2014-11-14 04:05:06 UTC
Post by Tetsuya Mukawa
I am using 1GB hugepage size.
$ sudo QTEST_HUGETLBFS_PATH=/mnt/huge make check
region=0, mmap=0x2aaac0000000, size=6291456000
region=0, munmap=0x2aaac0000000, size=6291456000, ret=-1 << failed
6291456000 is not aligned by 1GB.
When I specify 4096MB as guest memory size, munmap() doesn't return
error like following.
$ sudo QTEST_HUGETLBFS_PATH=/mnt/huge make check
region=0, mmap=0x2aaac0000000, size=4294967296
region=0, munmap=0x2aaac0000000, size=4294967296, ret=0
Also, I've checked the mmap2 and munmap implementations of the current Linux kernel.
When a file on hugetlbfs is mapped, 'size' is rounded up to the hugepage
size in some cases,
but when munmap is called, 'size' is only rounded up to PAGE_SIZE.
It means we cannot use the same 'size' value for mmap and munmap in some cases.
I guess this implementation (or specification) causes the munmap issue.
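If we do have to align, one option I can think of is to ask the backing file for its hugepage size (untested sketch; I believe fstat() on a hugetlbfs file reports the hugepage size in st_blksize, but that should be double-checked):
---------------------------------------------
/* Untested sketch: round the munmap length up to the hugepage size of the
 * backing file, taken from st_blksize (assumption: hugetlbfs reports the
 * hugepage size there). */
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>

static int munmap_hugefile(int fd, void *addr, size_t size)
{
    struct stat st;
    size_t align;

    if (fstat(fd, &st) < 0) {
        perror("fstat");
        return -1;
    }
    align = st.st_blksize;                       /* hugepage size (assumed) */
    size = (size + align - 1) / align * align;   /* round up */
    return munmap(addr, size);
}
---------------------------------------------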

Thanks,
Tetsuya
Linhaifeng
2014-11-14 04:42:59 UTC
Post by Tetsuya Mukawa
Hi Lin,
Post by Linhaifeng
size should be same as mmap and
guest_mem -= (memory.regions[i].mmap_offset / sizeof(*guest_mem));
Thanks. It should be.
How about following patch?
-------------------------------------------------------
diff --git a/tests/vhost-user-test.c b/tests/vhost-user-test.c
index 75fedf0..be4b171 100644
--- a/tests/vhost-user-test.c
+++ b/tests/vhost-user-test.c
@@ -37,7 +37,7 @@
#endif
#define QEMU_CMD_ACCEL " -machine accel=tcg"
-#define QEMU_CMD_MEM " -m 512 -object
memory-backend-file,id=mem,size=512M,"\
+#define QEMU_CMD_MEM " -m 6000 -object
memory-backend-file,id=mem,size=6000M,"\
"mem-path=%s,share=on -numa node,memdev=mem"
#define QEMU_CMD_CHR " -chardev socket,id=chr0,path=%s"
#define QEMU_CMD_NETDEV " -netdev
vhost-user,id=net0,chardev=chr0,vhostforce"
@@ -221,13 +221,16 @@ static void read_guest_mem(void)
/* check for sanity */
g_assert_cmpint(fds_num, >, 0);
- g_assert_cmpint(fds_num, ==, memory.nregions);
+ //g_assert_cmpint(fds_num, ==, memory.nregions);
+ fprintf(stderr, "%s(%d)\n", __func__, __LINE__);
/* iterate all regions */
for (i = 0; i < fds_num; i++) {
+ int ret = 0;
/* We'll check only the region statring at 0x0*/
- if (memory.regions[i].guest_phys_addr != 0x0) {
+ if (memory.regions[i].guest_phys_addr == 0x0) {
+ close(fds[i]);
continue;
}
@@ -237,6 +240,7 @@ static void read_guest_mem(void)
guest_mem = mmap(0, size, PROT_READ | PROT_WRITE,
What is 'size' here? mmap_size + mmap_offset?
Post by Tetsuya Mukawa
MAP_SHARED, fds[i], 0);
+ fprintf(stderr, "region=%d, mmap=%p, size=%lu\n", i, guest_mem, size);
g_assert(guest_mem != MAP_FAILED);
guest_mem += (memory.regions[i].mmap_offset / sizeof(*guest_mem));
@@ -247,8 +251,10 @@ static void read_guest_mem(void)
g_assert_cmpint(a, ==, b);
}
-
- munmap(guest_mem, memory.regions[i].memory_size);
+ guest_mem -= (memory.regions[i].mmap_offset / sizeof(*guest_mem));
+ ret = munmap(guest_mem, memory.regions[i].memory_size);
memory.regions[i].memory_size --> memory.regions[i].memory_size + memory.regions[i].mmap_offset

Check that you have applied QEMU's patch: [PATCH] vhost-user: fix mmap offset calculation
Post by Tetsuya Mukawa
+ fprintf(stderr, "region=%d, munmap=%p, size=%lu, ret=%d\n",
+ i, guest_mem, size, ret);
}
g_assert_cmpint(1, ==, 1);
-------------------------------------------------------
I am using 1GB hugepage size.
$ sudo QTEST_HUGETLBFS_PATH=/mnt/huge make check
region=0, mmap=0x2aaac0000000, size=6291456000
region=0, munmap=0x2aaac0000000, size=6291456000, ret=-1 << failed
6291456000 is not aligned by 1GB.
When I specify 4096MB as guest memory size, munmap() doesn't return
error like following.
$ sudo QTEST_HUGETLBFS_PATH=/mnt/huge make check
region=0, mmap=0x2aaac0000000, size=4294967296
region=0, munmap=0x2aaac0000000, size=4294967296, ret=0
Thanks,
Tetsuya
.
--
Regards,
Haifeng
Tetsuya Mukawa
2014-11-14 05:12:53 UTC
Hi Lin,
Post by Linhaifeng
Post by Tetsuya Mukawa
Hi Lin,
Post by Linhaifeng
size should be same as mmap and
guest_mem -= (memory.regions[i].mmap_offset / sizeof(*guest_mem));
Thanks. It should be.
How about following patch?
-------------------------------------------------------
diff --git a/tests/vhost-user-test.c b/tests/vhost-user-test.c
index 75fedf0..be4b171 100644
--- a/tests/vhost-user-test.c
+++ b/tests/vhost-user-test.c
@@ -37,7 +37,7 @@
#endif
#define QEMU_CMD_ACCEL " -machine accel=tcg"
-#define QEMU_CMD_MEM " -m 512 -object
memory-backend-file,id=mem,size=512M,"\
+#define QEMU_CMD_MEM " -m 6000 -object
memory-backend-file,id=mem,size=6000M,"\
"mem-path=%s,share=on -numa node,memdev=mem"
#define QEMU_CMD_CHR " -chardev socket,id=chr0,path=%s"
#define QEMU_CMD_NETDEV " -netdev
vhost-user,id=net0,chardev=chr0,vhostforce"
@@ -221,13 +221,16 @@ static void read_guest_mem(void)
/* check for sanity */
g_assert_cmpint(fds_num, >, 0);
- g_assert_cmpint(fds_num, ==, memory.nregions);
+ //g_assert_cmpint(fds_num, ==, memory.nregions);
+ fprintf(stderr, "%s(%d)\n", __func__, __LINE__);
/* iterate all regions */
for (i = 0; i < fds_num; i++) {
+ int ret = 0;
/* We'll check only the region statring at 0x0*/
- if (memory.regions[i].guest_phys_addr != 0x0) {
+ if (memory.regions[i].guest_phys_addr == 0x0) {
+ close(fds[i]);
continue;
}
@@ -237,6 +240,7 @@ static void read_guest_mem(void)
guest_mem = mmap(0, size, PROT_READ | PROT_WRITE,
How many is size? mmap_size + mmap_offset ?
In this case, the size is the guest memory length.
I included the messages from this program in my last email.
Could you please check them as well?
Post by Linhaifeng
Post by Tetsuya Mukawa
MAP_SHARED, fds[i], 0);
+ fprintf(stderr, "region=%d, mmap=%p, size=%lu\n", i, guest_mem, size);
g_assert(guest_mem != MAP_FAILED);
guest_mem += (memory.regions[i].mmap_offset / sizeof(*guest_mem));
@@ -247,8 +251,10 @@ static void read_guest_mem(void)
g_assert_cmpint(a, ==, b);
}
-
- munmap(guest_mem, memory.regions[i].memory_size);
+ guest_mem -= (memory.regions[i].mmap_offset / sizeof(*guest_mem));
+ ret = munmap(guest_mem, memory.regions[i].memory_size);
memory.regions[i].memory_size --> memory.regions[i].memory_size + memory.regions[i].memory_offset
check you have apply qemu's patch: [PATCH] vhost-user: fix mmap offset calculation
I checked it using the latest QEMU code,
so the patch you mentioned is included.

I guess you can munmap the file because your 'size' happens to be aligned to the hugepage
size (like 2GB).
Could you please try another value, like 6000MB?

Thanks,
Tetsuya
Linhaifeng
2014-11-14 05:30:09 UTC
Post by Tetsuya Mukawa
Please try another value like 6000MB
I have tried that value, 6000MB. I can munmap successfully.

You mmap with size "memory_size + memory_offset", so you should also munmap with that size.
--
Regards,
Haifeng
Tetsuya Mukawa
2014-11-14 06:57:06 UTC
Hi Lin,
Post by Linhaifeng
Post by Tetsuya Mukawa
Please try another value like 6000MB
i have try this value 6000MB.I can munmap success.
you mmap with size "memory_size + memory_offset" should also munmap with this size.
I appreciate your testing and suggestions. :)
I am not sure what the difference is between your environment and my
environment.

Here is my code and message from the code.
---------------------------------------------
[code]
---------------------------------------------
size = memory.regions[i].memory_size + memory.regions[i].mmap_offset;

guest_mem = mmap(0, size, PROT_READ | PROT_WRITE,
MAP_SHARED, fds[i], 0);

fprintf(stderr, "region=%d, mmap=%p, size=%lu\n", i, guest_mem, size);

g_assert(guest_mem != MAP_FAILED);

ret = munmap(guest_mem, size);

fprintf(stderr, "region=%d, munmap=%p, size=%lu, ret=%d\n",
i, guest_mem, size, ret);

---------------------------------------------
[messages]
---------------------------------------------
region=0, mmap=0x2aaac0000000, size=6291456000
region=0, munmap=0x2aaac0000000, size=6291456000, ret=-1

In your environment, 'ret' will be 0.
In my environment, 'size' has to be aligned to avoid the error.
Anyway, it's nice to keep the implementation simple;
when a munmap failure actually occurs, let's think about it again.

Thanks,
Tetsuya
Xie, Huawei
2014-11-14 10:59:29 UTC
I tested with the latest qemu (with the offset fix) in the vhost app (not with the test case); unmap succeeds only when the size is aligned to 1GB (the hugepage size).

Another important question is whether we could do mmap(0, region[i].memory_size, PROT_XX, mmap_offset) rather than mapping with offset 0. With the region above 4GB, we would otherwise waste 4GB of address space. Or we at least need to round the offset down to the nearest 1GB and round the memory size up to the next 1GB, to save some of the wasted address space.

Anyway, this is ugly. The kernel doesn't take care of this for us and do the alignment automatically.
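Something along these lines, maybe (untested sketch; the hugepage size is passed in rather than detected):
---------------------------------------------
/* Untested sketch: map each region near its own offset instead of offset 0.
 * Round the offset down and the length up to the hugepage size so the kernel
 * accepts it and we waste as little address space as possible. */
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

static void *map_region(int fd, uint64_t mmap_offset, uint64_t memory_size,
                        uint64_t hugepagesize)
{
    uint64_t aligned_off = mmap_offset & ~(hugepagesize - 1); /* round down */
    uint64_t pad = mmap_offset - aligned_off;
    uint64_t len = (pad + memory_size + hugepagesize - 1)
                   & ~(hugepagesize - 1);                     /* round up */
    void *base;

    base = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
                fd, (off_t)aligned_off);
    if (base == MAP_FAILED) {
        perror("mmap");
        return NULL;
    }
    return (uint8_t *)base + pad;  /* start of the region itself */
}
---------------------------------------------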
-----Original Message-----
Sent: Thursday, November 13, 2014 11:57 PM
To: Linhaifeng; Xie, Huawei
Subject: Re: [dpdk-dev] vhost-user technical issues
Hi Lin,
Post by Linhaifeng
Post by Tetsuya Mukawa
Please try another value like 6000MB
i have try this value 6000MB.I can munmap success.
you mmap with size "memory_size + memory_offset" should also munmap
with this size.
I appreciate for your testing and sugesstions. :)
I am not sure what is difference between your environment and my
environment.
Here is my code and message from the code.
---------------------------------------------
[code]
---------------------------------------------
size = memory.regions[i].memory_size + memory.regions[i].mmap_offset;
guest_mem = mmap(0, size, PROT_READ | PROT_WRITE,
MAP_SHARED, fds[i], 0);
fprintf(stderr, "region=%d, mmap=%p, size=%lu\n", i, guest_mem, size);
g_assert(guest_mem != MAP_FAILED);
ret = munmap(guest_mem, size);
fprintf(stderr, "region=%d, munmap=%p, size=%lu, ret=%d\n",
i, guest_mem, size, ret);
---------------------------------------------
[messages]
---------------------------------------------
region=0, mmap=0x2aaac0000000, size=6291456000
region=0, munmap=0x2aaac0000000, size=6291456000, ret=-1
With your environment, 'ret' will be 0.
In my environment, 'size' should be aligned not to get error.
Anyway, it's nice to implement more simple.
When munmap failure occurs, let's think it again.
Thanks,
Tetsuya
Tetsuya Mukawa
2014-11-17 06:14:01 UTC
Hi Xie,
Post by Xie, Huawei
I tested with latest qemu(with offset fix) in vhost app(not with test case), unmap succeeds only when the size is aligned to 1GB(hugepage size).
I appreciate your testing.
Post by Xie, Huawei
Another important thing is could we do mmap(0, region[i].memory_size, PROT_XX, mmap_offset) rather than with offset 0? With the region above 4GB, we will waste 4GB address space. Or we at least need to round down offset to nearest 1GB, and round up memory size to upper 1GB, to save some address space waste.
Anyway, this is ugly. Kernel doesn't take care of us, do those alignment for us automatically.
It seems 'offset' should also be aligned to the hugepage size.
But that might just be the mmap specification: the mmap manpage says 'offset'
should be a multiple of sysconf(_SC_PAGE_SIZE).
If the target file is on hugetlbfs, I guess the hugepage size is used as
the alignment size.

Thanks,
Tetsuya

Xie, Huawei
2014-11-14 00:22:27 UTC
-----Original Message-----
Sent: Tuesday, November 11, 2014 9:13 PM
Cc: Long, Thomas
Subject: Re: vhost-user technical issues
Hi Xie,
Post by Xie, Huawei
There are two major technical issues in my mind for vhost-user
implementation.
Post by Xie, Huawei
1) memory region map
Vhost-user passes us file fd and offset for each memory region. Unfortunately
the mmap offset is "very" wrong. I discovered this issue long time ago, and also
found
Post by Xie, Huawei
that I couldn't mmap the huge page file even with correct offset(need double
check).
Post by Xie, Huawei
Just now I find that people reported this issue on Nov 3.
[Qemu-devel] [PULL 27/29] vhost-user: fix mmap offset calculation
Anyway, I turned to the same idea used in our DPDK vhost-cuse: only use the fd
for region(0) to map the whole file.
Post by Xie, Huawei
I think we should use this way temporarily to support qemu-2.1 as it has that
bug.
I agree with you.
Also we may have an issue about un-mapping file on hugetlbfs of linux.
When I check munmap(), it seems 'size' need to be aligned by hugepage size.
(I guess it may be a kernel bug. Might be fixed already.)
Please add return value checking code for munmap().
Still munmap() might be failed.
Post by Xie, Huawei
2) what message is the indicator for vhost start/release?
Previously for vhost-cuse, it has SET_BACKEND message.
What we should do for vhost-user?
SET_VRING_KICK for start?
I think so.
Post by Xie, Huawei
What about for release?
Unlike the kernel virtio, the DPDK virtio in guest could be restarted.
Thoughts?
I guess we need to consider 2 types of restarting.
One is virtio-net driver restarting, the other is vhost-user backend
restarting.
But, so far, it's nice to start to think about virtio-net driver
restarting first.
Probably we need to implement a way to let vhost-user backend know
virtio-net driver is restarted.
I am not sure what is good way to let vhost-user backend know it.
But how about followings RFC?
I checked your code today, and didn't find the logic to deal with virtio reconfiguration.
- When unix domain socket is closed, vhost-user backend should treat it
as "release".
It is useful when QEMU itself is gone suddenly.
This is the simple case.
- Also, implementing new ioctl command like VHOST_RESET_BACKEND.
This command should be sent from virtio-net device of QEMU when
VIRTIO_CONFIG_STATUS_RESET register of virtio-net device is set by
vrtio-net driver.
(Usually this register is set when virtio-net driver is initialized or
stopped.)
It means we need to change QEMU. ;)
It seems virtio-net PMD already sets this register when PMD is
initialized or stopped.
So this framework should work, and can let vhost-user backend know
driver resetting.
(And I guess we can say same things for virtio-net kernel driver.)
It might be enough to close an unix domain socket, instead of
implementing new command.
I don't understand the point about closing the socket.

The socket connection from QEMU will be opened and closed once during the life cycle of
the virtio device; that is correct behavior. But the virtio driver can be reconfigured several times by the guest,
simply by writing a status value to the STATUS register.
But in the case, we may need auto reconnection mechanism.
- We also need to consider DPDK application is gone suddenly without
setting reset register.
In the case, vhost-user backend cannot know it. Only user (or some kind
of watchdog
applications on guest) knows it.
Because of this, user(or app.) should have responsibility to solve this
situation.
To be more precise, user should let vhost-user backend know device
releasing.
If user starts an other DPDK application without solving the issue, the
new DPDK application may
access memory that vhost-user backend is also accessing.
I guess user can solve the issue using "dpdk_nic_bind.py".
The script can move virtio-net device to kernel virtio-net driver, and
return it to igb_uio.
While those steps, virtio-net device is initialized by virtio-net
kernel driver.
So vhost-user backend can know device releasing.
My thought without new message support:
when vhost-user receives another configuration message after the device was last marked ready for
processing, we could release the device from the data core, process the reconfiguration
message, and then re-add it to the data core when it is ready again (checking for the new kick message as before).

The candidate message is set_mem_table.

It is OK to keep the device on the data core until we receive the new reconfiguration message; it just wastes vhost
some cycles checking the avail idx.
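In pseudo-C it would look roughly like this (sketch only; the function names, and the enum values, are placeholders for what the library and the vhost-user protocol header actually define):
---------------------------------------------
/* Sketch only: use SET_MEM_TABLE as the reconfiguration indicator.
 * remove_from_data_core()/add_to_data_core() are placeholder names, and the
 * request values come from the vhost-user protocol header in real code. */
enum vhost_user_request {
    VHOST_USER_SET_MEM_TABLE,   /* placeholder value */
    VHOST_USER_SET_VRING_KICK,  /* placeholder value */
};

struct vhost_dev {
    int running;
    /* ... */
};

static void remove_from_data_core(struct vhost_dev *dev) { dev->running = 0; }
static void add_to_data_core(struct vhost_dev *dev)      { dev->running = 1; }

static void handle_msg(struct vhost_dev *dev, enum vhost_user_request req)
{
    switch (req) {
    case VHOST_USER_SET_MEM_TABLE:
        /* the guest is being reconfigured: stop polling the device first,
         * then unmap the old regions and map the ones from this message */
        if (dev->running)
            remove_from_data_core(dev);
        break;
    case VHOST_USER_SET_VRING_KICK:
        /* the device is ready again once the new kick fd arrives */
        add_to_data_core(dev);
        break;
    default:
        break;
    }
}
---------------------------------------------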
Tetsuya
Post by Xie, Huawei
-huawei
Tetsuya Mukawa
2014-11-14 02:52:55 UTC
Hi Xie,
Post by Xie, Huawei
I think so. I guess we need to consider 2 types of restarting. One is
virtio-net driver restarting, the other is vhost-user backend
restarting. But, so far, it's nice to start to think about virtio-net
driver restarting first. Probably we need to implement a way to let
vhost-user backend know virtio-net driver is restarted. I am not sure
what is good way to let vhost-user backend know it. But how about
followings RFC?
I checked your code today, and didn't find the logic to deal with virtio reconfiguration.
Yes.
I guess the first implementation of librte_vhost may just replace
the vhost-example functionality.
Probably vhost-example doesn't consider restarting;
because of this, I haven't implemented it.
Post by Xie, Huawei
My thought without new message support: When vhost-user receives
another configuration message since last time it is ready for
processing, then we could release it from data core, and process the
next reconfiguration message, and then re-add it to data core when it
is ready again(check new kick message as before). The candidate
message is set_mem_table. It is ok we keep the device on data core
until we receive the new reconfiguration message. Just waste vhost
some cycles checking the avail idx.
For example, let's assume DPDK app1 is started on the guest with a virtio-net
device port.
If DPDK app1 on the guest is stopped, and another DPDK app2 is
started on the guest without the virtio-net device port,
the hugepages DPDK app1 used will be reused by DPDK app2.
It means the memory accessed by the vhost-user backend might be changed by
DPDK app2,
and the vhost-user backend will crash.
So I guess we need some kind of reset message.

Thanks,
Tetsuya
Linhaifeng
2014-11-13 06:12:36 UTC
Post by Xie, Huawei
There are two major technical issues in my mind for vhost-user implementation.
1) memory region map
Vhost-user passes us file fd and offset for each memory region. Unfortunately the mmap offset is "very" wrong. I discovered this issue long time ago, and also found
that I couldn't mmap the huge page file even with correct offset(need double check).
Just now I find that people reported this issue on Nov 3.
[Qemu-devel] [PULL 27/29] vhost-user: fix mmap offset calculation
Anyway, I turned to the same idea used in our DPDK vhost-cuse: only use the fd for region(0) to map the whole file.
I think we should use this way temporarily to support qemu-2.1 as it has that bug.
This bug is not in DPDK's vhost-user, just in QEMU's vhost-user code.
Post by Xie, Huawei
2) what message is the indicator for vhost start/release?
Previously for vhost-cuse, it has SET_BACKEND message.
What we should do for vhost-user?
SET_VRING_KICK for start?
What about for release?
Unlike the kernel virtio, the DPDK virtio in guest could be restarted.
Thoughts?
-huawei
--
Regards,
Haifeng
Linhaifeng
2014-11-13 06:27:42 UTC
Post by Xie, Huawei
There are two major technical issues in my mind for vhost-user implementation.
1) memory region map
Vhost-user passes us file fd and offset for each memory region. Unfortunately the mmap offset is "very" wrong. I discovered this issue long time ago, and also found
that I couldn't mmap the huge page file even with correct offset(need double check).
Just now I find that people reported this issue on Nov 3.
[Qemu-devel] [PULL 27/29] vhost-user: fix mmap offset calculation
Anyway, I turned to the same idea used in our DPDK vhost-cuse: only use the fd for region(0) to map the whole file.
I think we should use this way temporarily to support qemu-2.1 as it has that bug.
The size of region 0 is not the same as the file size; maybe you should mmap the other region.

region 0:
gpa = 0x0
size = 655360
ua = 0x2aaaaac00000
offset = 0

region 1: // use this region to mmap. BTW, how do we avoid mmapping twice when there are two devices?
gpa = 0xC0000
size = 2146697216
ua = 0x2aaaaacc0000
offset = 786432
Post by Xie, Huawei
2) what message is the indicator for vhost start/release?
Previously for vhost-cuse, it has SET_BACKEND message.
What we should do for vhost-user?
SET_VRING_KICK for start?
What about for release?
Unlike the kernel virtio, the DPDK virtio in guest could be restarted.
Thoughts?
-huawei
--
Regards,
Haifeng
Xie, Huawei
2014-11-14 01:28:19 UTC
-----Original Message-----
Sent: Wednesday, November 12, 2014 11:28 PM
Subject: Re: [dpdk-dev] vhost-user technical issues
Post by Xie, Huawei
There are two major technical issues in my mind for vhost-user
implementation.
Post by Xie, Huawei
1) memory region map
Vhost-user passes us file fd and offset for each memory region. Unfortunately
the mmap offset is "very" wrong. I discovered this issue long time ago, and also
found
Post by Xie, Huawei
that I couldn't mmap the huge page file even with correct offset(need double
check).
Post by Xie, Huawei
Just now I find that people reported this issue on Nov 3.
[Qemu-devel] [PULL 27/29] vhost-user: fix mmap offset calculation
Anyway, I turned to the same idea used in our DPDK vhost-cuse: only use the fd
for region(0) to map the whole file.
Post by Xie, Huawei
I think we should use this way temporarily to support qemu-2.1 as it has that
bug.
the size of region 0 is not same as the file size. may be you should mmap the other region.
Haifeng:

I will calculate the maximum memory size, and use any of the file fds to mmap it.
Here we assume the fds for the different regions actually point to the same file.

In theory we should use the fd for each region to map each memory region;
in practice we could map just once. This also saves address space for 1GB huge pages,
due to the mmap alignment requirement.
gpa = 0x0
size = 655360
ua = 0x2aaaaac00000
offset = 0
region 1:// use this region to mmap.BTW how to avoid mmap twice when there are two devices?
gpa = 0xC0000
size = 2146697216
ua = 0x2aaaaacc0000
offset = 786432
What do you mean by two devices?
Post by Xie, Huawei
2) what message is the indicator for vhost start/release?
Previously for vhost-cuse, it has SET_BACKEND message.
What we should do for vhost-user?
SET_VRING_KICK for start?
What about for release?
Unlike the kernel virtio, the DPDK virtio in guest could be restarted.
Thoughts?
-huawei
--
Regards,
Haifeng
Linhaifeng
2014-11-14 02:24:03 UTC
Post by Xie, Huawei
-----Original Message-----
Sent: Wednesday, November 12, 2014 11:28 PM
Subject: Re: [dpdk-dev] vhost-user technical issues
Post by Xie, Huawei
There are two major technical issues in my mind for vhost-user
implementation.
Post by Xie, Huawei
1) memory region map
Vhost-user passes us file fd and offset for each memory region. Unfortunately
the mmap offset is "very" wrong. I discovered this issue long time ago, and also
found
Post by Xie, Huawei
that I couldn't mmap the huge page file even with correct offset(need double
check).
Post by Xie, Huawei
Just now I find that people reported this issue on Nov 3.
[Qemu-devel] [PULL 27/29] vhost-user: fix mmap offset calculation
Anyway, I turned to the same idea used in our DPDK vhost-cuse: only use the fd
for region(0) to map the whole file.
Post by Xie, Huawei
I think we should use this way temporarily to support qemu-2.1 as it has that
bug.
the size of region 0 is not same as the file size. may be you should mmap the
other region.
Will calculate the maximum memory size, and use any file fd to mmap it.
Here we assume the fds for different regions actually point to the same file.
Actually, there may be two hugepage files created by qemu.
One day I created a 4G VM and found that qemu created 2 hugepage files and sent them to vhost-user.
You can try to test it.
Post by Xie, Huawei
In theory we should use the fd for each region to map each memory region.
In fact we could map once. This will also save address space for 1GB huge page
due to mmap alignment requirement.
gpa = 0x0
size = 655360
ua = 0x2aaaaac00000
offset = 0
region 1:// use this region to mmap.BTW how to avoid mmap twice when there
are two devices?
gpa = 0xC0000
size = 2146697216
ua = 0x2aaaaacc0000
offset = 786432
What do you mean by two devices?
E.g. when there are two vhost-user backends in a VM, we will receive two SET_MEM_TABLE messages, but we actually only need to mmap once, in one of the messages.

I think qemu should add a new message to send all the hugepage fds and sizes at once.
That way we would not need to mmap and calculate memory in the set_mem_table message.
Post by Xie, Huawei
Post by Xie, Huawei
2) what message is the indicator for vhost start/release?
Previously for vhost-cuse, it has SET_BACKEND message.
What we should do for vhost-user?
SET_VRING_KICK for start?
What about for release?
Unlike the kernel virtio, the DPDK virtio in guest could be restarted.
Thoughts?
-huawei
--
Regards,
Haifeng
.
--
Regards,
Haifeng
Tetsuya Mukawa
2014-11-14 02:35:40 UTC
actually there may be two hugepage files created by qemu. one day i
create a 4G VM found qemu create 2 hugepage file and send them to
vhost-user. you can try to test it.
That's the case.
Because I didn't think we could actually do it like that, I tried to mmap
region 0 with the guest memory size.

Thanks,
Tetsuya.
Xie, Huawei
2014-11-14 06:24:39 UTC
-----Original Message-----
Sent: Thursday, November 13, 2014 7:24 PM
Subject: Re: [dpdk-dev] vhost-user technical issues
Post by Xie, Huawei
-----Original Message-----
Sent: Wednesday, November 12, 2014 11:28 PM
Subject: Re: [dpdk-dev] vhost-user technical isssues
Post by Xie, Huawei
There are two major technical issues in my mind for vhost-user
implementation.
Post by Xie, Huawei
1) memory region map
Vhost-user passes us file fd and offset for each memory region.
Unfortunately
Post by Xie, Huawei
the mmap offset is "very" wrong. I discovered this issue long time ago, and
also
Post by Xie, Huawei
found
Post by Xie, Huawei
that I couldn't mmap the huge page file even with correct offset(need
double
Post by Xie, Huawei
check).
Post by Xie, Huawei
Just now I find that people reported this issue on Nov 3.
[Qemu-devel] [PULL 27/29] vhost-user: fix mmap offset calculation
Anyway, I turned to the same idea used in our DPDK vhost-cuse: only use the
fd
Post by Xie, Huawei
for region(0) to map the whole file.
Post by Xie, Huawei
I think we should use this way temporarily to support qemu-2.1 as it has that
bug.
the size of region 0 is not same as the file size. may be you should mmap the
other region.
Will calculate the maximum memory size, and use any file fd to mmap it.
Here we assume the fds for different regions actually point to the same file.
actually there may be two hugepage files created by qemu.
one day i create a 4G VM found qemu create 2 hugepage file and send them to vhost-user.
you can try to test it.
Ok, if that is the case, we need to fix vhost-cuse as well.
Post by Xie, Huawei
In theory we should use the fd for each region to map each memory region.
In fact we could map once. This will also save address space for 1GB huge page
due to mmap alignment requirement.
gpa = 0x0
size = 655360
ua = 0x2aaaaac00000
offset = 0
region 1:// use this region to mmap.BTW how to avoid mmap twice when
there
Post by Xie, Huawei
are two devices?
gpa = 0xC0000
size = 2146697216
ua = 0x2aaaaacc0000
offset = 786432
What do you mean by two devices?
e.g there are two vhost-user backends in a VM, we will receive two
SET_MEM_TABLE messages, actually we only need mmap once in one message.
I think qemu should add a new message to send all hugepage fd and size once.
as this we not need to mmap and calculate memory in set_mem_table message.
Post by Xie, Huawei
Post by Xie, Huawei
2) what message is the indicator for vhost start/release?
Previously for vhost-cuse, it has SET_BACKEND message.
What we should do for vhost-user?
SET_VRING_KICK for start?
What about for release?
Unlike the kernel virtio, the DPDK virtio in guest could be restarted.
Thoughts?
-huawei
--
Regards,
Haifeng
.
--
Regards,
Haifeng