* AIO requests may be disordered by Qemu-kvm iothread with disk cache=writethrough, Bug or Feature?
@ 2015-10-08 11:59 charlie.song
2015-10-08 15:37 ` Fam Zheng
2015-10-14 9:32 ` Stefan Hajnoczi
0 siblings, 2 replies; 7+ messages in thread
From: charlie.song @ 2015-10-08 11:59 UTC (permalink / raw)
To: kvm
Dear KVM Developers:
I am Xiang Song from UCloud. We have recently encountered a strange phenomenon involving the Qemu-KVM iothread mechanism.
When we use Linux AIO from the guest OS, we find that the iothread mechanism of Qemu-KVM can reorder I/O requests from the guest
even when the AIO write requests are issued from a single thread in order. This does not happen on the host OS, however.
We are not sure whether this is a feature of the Qemu-KVM iothread mechanism or a bug.
The testbed is as follows (the guest disk device cache is configured to writethrough):
CPU: Intel(R) Xeon(R) CPU E5-2650
QEMU version: 1.5.3
Host/Guest kernel: both Linux 4.1.8 and Linux 2.6.32, OS type CentOS 6.5
Simplified guest OS qemu command line:
/usr/libexec/qemu-kvm -machine rhel6.3.0,accel=kvm,usb=off -cpu kvm64 -smp 8,sockets=8,cores=1,threads=1
-drive file=/var/lib/libvirt/images/song-disk.img,if=none,id=drive-virtio-disk0,format=qcow2,serial=UCLOUD_DISK_VDA,cache=writethrough
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:22:d5:52,bus=pci.0,addr=0x4
The test program that triggers this phenomenon works as follows: it uses the Linux AIO API to issue concurrent asynchronous write requests to a file. During execution it
continuously writes data into the target test file. There are 'X' jobs in total, and each job is assigned a job id JOB_ID, starting from 0. Each job writes 16 * 512
bytes of data into the target file at offset = JOB_ID * 512 (the data is the job's uint64_t JOB_ID, repeated).
A single thread handles the 'X' jobs one by one through Linux AIO (io_submit), continuously
issuing AIO requests without waiting for the AIO callbacks. When it finishes, the file should look like:
[0....0][1...1][2...2][3...3]...[X-1...X-1]
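A minimal sketch of such a writer, assuming libaio (the file name, O_DIRECT flag, and error handling here are illustrative, not our exact test code):

/* build: cc aio_writer.c -laio */
#define _GNU_SOURCE
#include <libaio.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define NJOBS  32               /* 'X' */
#define SECT   512
#define JOBLEN (16 * SECT)      /* each job writes 16 * 512 bytes */

int main(void)
{
    io_context_t ctx = 0;
    struct iocb cbs[NJOBS], *cbp;
    struct io_event events[NJOBS];
    int fd = open("testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);

    if (fd < 0 || io_setup(NJOBS, &ctx) < 0) {
        fprintf(stderr, "setup failed\n");
        return 1;
    }
    for (uint64_t id = 0; id < NJOBS; id++) {
        uint64_t *buf;

        if (posix_memalign((void **)&buf, SECT, JOBLEN))
            return 1;
        for (size_t i = 0; i < JOBLEN / sizeof(*buf); i++)
            buf[i] = id;                            /* data is the repeated JOB_ID */

        io_prep_pwrite(&cbs[id], fd, buf, JOBLEN, id * SECT); /* offset = JOB_ID * 512 */
        cbp = &cbs[id];
        if (io_submit(ctx, 1, &cbp) != 1)           /* no waiting for callbacks */
            return 1;
    }
    io_getevents(ctx, NJOBS, NJOBS, events, NULL);  /* reap everything only at the end */
    return 0;
}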
We then use a check program to verify the resulting file: it reads the first 8 bytes (a uint64_t) of each sector in turn and prints them out. In the normal case,
its output looks like:
0 1 2 3 .... X-1
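A matching sketch of the check program (same assumed file name):

/* build: cc check.c */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("testfile", O_RDONLY);
    if (fd < 0)
        return 1;
    for (uint64_t i = 0; i < 32; i++) {              /* X = 32 */
        uint64_t v;
        if (pread(fd, &v, sizeof(v), (off_t)(i * 512)) != sizeof(v))
            break;
        printf("%llu ", (unsigned long long)v);      /* expect 0 1 2 ... X-1 */
    }
    printf("\n");
    return 0;
}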
Exec output (with X=32):
In our guest OS the output is abnormal: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 18 18 18 18 18 24 25 26 27 28 29 30 31.
It can be seen that jobs 20~24 have been overwritten by job 19.
In our host OS the output is as expected: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31.
I can provide the example code if needed.
Best regards, song
2015-10-08
charlie.song
^ permalink raw reply [flat|nested] 7+ messages in thread

* Re: AIO requests may be disordered by Qemu-kvm iothread with disk cache=writethrough, Bug or Feature?
2015-10-08 11:59 AIO requests may be disordered by Qemu-kvm iothread with disk cache=writethrough, Bug or Feature? charlie.song
@ 2015-10-08 15:37 ` Fam Zheng
2015-10-09 3:25 ` charlie.song
2015-10-14 9:32 ` Stefan Hajnoczi
1 sibling, 1 reply; 7+ messages in thread
From: Fam Zheng @ 2015-10-08 15:37 UTC (permalink / raw)
To: charlie.song; +Cc: kvm
On Thu, 10/08 19:59, charlie.song wrote:
> Dear KVM Developers:
> I am Xiang Song from UCloud. We have recently encountered a strange phenomenon involving the Qemu-KVM iothread mechanism.
> When we use Linux AIO from the guest OS, we find that the iothread mechanism of Qemu-KVM can reorder I/O requests from the guest
> even when the AIO write requests are issued from a single thread in order. This does not happen on the host OS, however.
> We are not sure whether this is a feature of the Qemu-KVM iothread mechanism or a bug.
>
> The testbed is as follows (the guest disk device cache is configured to writethrough):
> CPU: Intel(R) Xeon(R) CPU E5-2650
> QEMU version: 1.5.3
> Host/Guest kernel: both Linux 4.1.8 and Linux 2.6.32, OS type CentOS 6.5
> Simplified guest OS qemu command line:
> /usr/libexec/qemu-kvm -machine rhel6.3.0,accel=kvm,usb=off -cpu kvm64 -smp 8,sockets=8,cores=1,threads=1
> -drive file=/var/lib/libvirt/images/song-disk.img,if=none,id=drive-virtio-disk0,format=qcow2,serial=UCLOUD_DISK_VDA,cache=writethrough
> -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:22:d5:52,bus=pci.0,addr=0x4
You mentioned iothread above but it's not in your command line?
>
> The test program that triggers this phenomenon works as follows: it uses the Linux AIO API to issue concurrent asynchronous write requests to a file. During execution it
> continuously writes data into the target test file. There are 'X' jobs in total, and each job is assigned a job id JOB_ID, starting from 0. Each job writes 16 * 512
> bytes of data into the target file at offset = JOB_ID * 512 (the data is the job's uint64_t JOB_ID, repeated).
> A single thread handles the 'X' jobs one by one through Linux AIO (io_submit), continuously
> issuing AIO requests without waiting for the AIO callbacks. When it finishes, the file should look like:
> [0....0][1...1][2...2][3...3]...[X-1...X-1]
> We then use a check program to verify the resulting file: it reads the first 8 bytes (a uint64_t) of each sector in turn and prints them out. In the normal case,
> its output looks like:
> 0 1 2 3 .... X-1
>
> Exec output (with X=32):
> In our guest OS the output is abnormal: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 18 18 18 18 18 24 25 26 27 28 29 30 31.
> It can be seen that jobs 20~24 have been overwritten by job 19.
> In our host OS the output is as expected: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31.
I'm not 100% sure, but I don't think the return of io_submit guarantees any
ordering; usually you need to wait for the callback to ensure that.
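For example, a sketch of that safe pattern with libaio, reusing the ctx/fd/JOBLEN/SECT setup from the writer sketch earlier in the thread (bufs[id] stands for a hypothetical pre-filled, aligned buffer):

/* Serialize overlapping writes: wait for each one to complete
 * before submitting the next. */
for (uint64_t id = 0; id < NJOBS; id++) {
    struct iocb cb, *cbp = &cb;
    struct io_event ev;

    io_prep_pwrite(&cb, fd, bufs[id], JOBLEN, id * SECT);
    if (io_submit(ctx, 1, &cbp) != 1)
        break;
    if (io_getevents(ctx, 1, 1, &ev, NULL) != 1)   /* block on this completion */
        break;
}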
Fam
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: Re: AIO requests may be disordered by Qemu-kvm iothread with disk cache=writethrough, Bug or Feature?
2015-10-08 15:37 ` Fam Zheng
@ 2015-10-09 3:25 ` charlie.song
2015-10-09 4:16 ` Fam Zheng
0 siblings, 1 reply; 7+ messages in thread
From: charlie.song @ 2015-10-09 3:25 UTC (permalink / raw)
To: Fam Zheng; +Cc: kvm
At 2015-10-08 23:37:02, "Fam Zheng" <famz@redhat.com> wrote:
>On Thu, 10/08 19:59, charlie.song wrote:
>> Dear KVM Developers:
>> I am Xiang Song from UCloud. We have recently encountered a strange phenomenon involving the Qemu-KVM iothread mechanism.
>> When we use Linux AIO from the guest OS, we find that the iothread mechanism of Qemu-KVM can reorder I/O requests from the guest
>> even when the AIO write requests are issued from a single thread in order. This does not happen on the host OS, however.
>> We are not sure whether this is a feature of the Qemu-KVM iothread mechanism or a bug.
>>
>> The testbed is as follows (the guest disk device cache is configured to writethrough):
>> CPU: Intel(R) Xeon(R) CPU E5-2650
>> QEMU version: 1.5.3
>> Host/Guest kernel: both Linux 4.1.8 and Linux 2.6.32, OS type CentOS 6.5
>> Simplified guest OS qemu command line:
>> /usr/libexec/qemu-kvm -machine rhel6.3.0,accel=kvm,usb=off -cpu kvm64 -smp 8,sockets=8,cores=1,threads=1
>> -drive file=/var/lib/libvirt/images/song-disk.img,if=none,id=drive-virtio-disk0,format=qcow2,serial=UCLOUD_DISK_VDA,cache=writethrough
>> -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:22:d5:52,bus=pci.0,addr=0x4
>
>You mentioned iothread above but it's not in your command line?
I mean the thread pool mechanism used by qemu-kvm to accelerate I/O processing. This is used by paio_submit (block/raw-posix.c) by default, with
pool->max_threads = 64 as far as I know (qemu-kvm version 1.5.3).
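As a generic illustration (plain pthreads, not QEMU's actual paio code) of how a worker pool can complete in-order submissions out of order:

/* build: cc pool_reorder.c -lpthread */
#include <pthread.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

struct job { int fd; char c; };

static void *worker(void *arg)
{
    struct job *j = arg;
    char buf[512];
    memset(buf, j->c, sizeof(buf));
    pwrite(j->fd, buf, sizeof(buf), 0);     /* both jobs overlap at offset 0 */
    return NULL;
}

int main(void)
{
    int fd = open("poolfile", O_WRONLY | O_CREAT, 0644);
    struct job a = { fd, 'A' }, b = { fd, 'B' };
    pthread_t t1, t2;

    if (fd < 0)
        return 1;
    pthread_create(&t1, NULL, worker, &a);  /* handed to the pool first  */
    pthread_create(&t2, NULL, worker, &b);  /* handed to the pool second */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* the file may end up holding 'A's even though 'B' was queued later */
    return 0;
}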
>
>>
>> The test program that triggers this phenomenon works as follows: it uses the Linux AIO API to issue concurrent asynchronous write requests to a file. During execution it
>> continuously writes data into the target test file. There are 'X' jobs in total, and each job is assigned a job id JOB_ID, starting from 0. Each job writes 16 * 512
>> bytes of data into the target file at offset = JOB_ID * 512 (the data is the job's uint64_t JOB_ID, repeated).
>> A single thread handles the 'X' jobs one by one through Linux AIO (io_submit), continuously
>> issuing AIO requests without waiting for the AIO callbacks. When it finishes, the file should look like:
>> [0....0][1...1][2...2][3...3]...[X-1...X-1]
>> We then use a check program to verify the resulting file: it reads the first 8 bytes (a uint64_t) of each sector in turn and prints them out. In the normal case,
>> its output looks like:
>> 0 1 2 3 .... X-1
>>
>> Exec output (with X=32):
>> In our guest OS the output is abnormal: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 18 18 18 18 18 24 25 26 27 28 29 30 31.
>> It can be seen that jobs 20~24 have been overwritten by job 19.
>> In our host OS the output is as expected: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31.
>
>I'm not 100% sure, but I don't think the return of io_submit guarantees any
>ordering; usually you need to wait for the callback to ensure that.
Is there any documentation or article about the ordering of io_submit requests?
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: Re: AIO requests may be disordered by Qemu-kvm iothread with disk cache=writethrough, Bug or Feature?
2015-10-09 3:25 ` charlie.song
@ 2015-10-09 4:16 ` Fam Zheng
2015-10-09 4:33 ` charlie.song
[not found] ` <56174390.80804@ucloud.cn>
0 siblings, 2 replies; 7+ messages in thread
From: Fam Zheng @ 2015-10-09 4:16 UTC (permalink / raw)
To: charlie.song; +Cc: kvm
On Fri, 10/09 11:25, charlie.song wrote:
> At 2015-10-08 23:37:02, "Fam Zheng" <famz@redhat.com> wrote:
> >On Thu, 10/08 19:59, charlie.song wrote:
> >> Dear KVM Developers:
> >> I am Xiang Song from UCloud. We have recently encountered a strange phenomenon involving the Qemu-KVM iothread mechanism.
> >> When we use Linux AIO from the guest OS, we find that the iothread mechanism of Qemu-KVM can reorder I/O requests from the guest
> >> even when the AIO write requests are issued from a single thread in order. This does not happen on the host OS, however.
> >> We are not sure whether this is a feature of the Qemu-KVM iothread mechanism or a bug.
> >>
> >> The testbed is as follows (the guest disk device cache is configured to writethrough):
> >> CPU: Intel(R) Xeon(R) CPU E5-2650
> >> QEMU version: 1.5.3
> >> Host/Guest kernel: both Linux 4.1.8 and Linux 2.6.32, OS type CentOS 6.5
> >> Simplified guest OS qemu command line:
> >> /usr/libexec/qemu-kvm -machine rhel6.3.0,accel=kvm,usb=off -cpu kvm64 -smp 8,sockets=8,cores=1,threads=1
> >> -drive file=/var/lib/libvirt/images/song-disk.img,if=none,id=drive-virtio-disk0,format=qcow2,serial=UCLOUD_DISK_VDA,cache=writethrough
> >> -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:22:d5:52,bus=pci.0,addr=0x4
> >
> >You mentioned iothread above but it's not in your command line?
> I mean the thread pool mechanism used by qemu-kvm to accelerate I/O
> processing. This is used by paio_submit (block/raw-posix.c) by default,
> with pool->max_threads = 64 as far as I know (qemu-kvm version 1.5.3).
The thread pool parallelism may reorder non-overlapping requests, but it
shouldn't cause any reordering of overlapping requests like the ones in your
I/O pattern. QEMU ensures that.
Do you see this with aio=native?
Fam
^ permalink raw reply [flat|nested] 7+ messages in thread

* Re: Re: Re: AIO requests may be disordered by Qemu-kvm iothread with disk cache=writethrough, Bug or Feature?
2015-10-09 4:16 ` Fam Zheng
@ 2015-10-09 4:33 ` charlie.song
[not found] ` <56174390.80804@ucloud.cn>
1 sibling, 0 replies; 7+ messages in thread
From: charlie.song @ 2015-10-09 4:33 UTC (permalink / raw)
To: Fam Zheng; +Cc: kvm
At 2015-10-09 12:16:03, "Fam Zheng" <famz@redhat.com> wrote:
>On Fri, 10/09 11:25, charlie.song wrote:
>> At 2015-10-08 23:37:02, "Fam Zheng" <famz@redhat.com> wrote:
>> >On Thu, 10/08 19:59, charlie.song wrote:
>> >> Dear KVM Developers:
>> >> I am Xiang Song from UCloud. We have recently encountered a strange phenomenon involving the Qemu-KVM iothread mechanism.
>> >> When we use Linux AIO from the guest OS, we find that the iothread mechanism of Qemu-KVM can reorder I/O requests from the guest
>> >> even when the AIO write requests are issued from a single thread in order. This does not happen on the host OS, however.
>> >> We are not sure whether this is a feature of the Qemu-KVM iothread mechanism or a bug.
>> >>
>> >> The testbed is as follows (the guest disk device cache is configured to writethrough):
>> >> CPU: Intel(R) Xeon(R) CPU E5-2650
>> >> QEMU version: 1.5.3
>> >> Host/Guest kernel: both Linux 4.1.8 and Linux 2.6.32, OS type CentOS 6.5
>> >> Simplified guest OS qemu command line:
>> >> /usr/libexec/qemu-kvm -machine rhel6.3.0,accel=kvm,usb=off -cpu kvm64 -smp 8,sockets=8,cores=1,threads=1
>> >> -drive file=/var/lib/libvirt/images/song-disk.img,if=none,id=drive-virtio-disk0,format=qcow2,serial=UCLOUD_DISK_VDA,cache=writethrough
>> >> -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:22:d5:52,bus=pci.0,addr=0x4
>> >
>> >You mentioned iothread above but it's not in your command line?
>> I mean the thread pool mechanism used by qemu-kvm to accelerate I/O
>> processing. This is used by paio_submit (block/raw-posix.c) by default,
>> with pool->max_threads = 64 as far as I know (qemu-kvm version 1.5.3).
>
>The thread pool parallelism may reorder non-overlapping requests, but it
>shouldn't cause any reordering of overlapping requests like the ones in your
>I/O pattern. QEMU ensures that.
>
>Do you see this with aio=native?
We see this with both aio=threads and aio=native.
Are you sure "QEMU ensures the ordering of overlapping requests"?
I can provide the example code if needed.
^ permalink raw reply [flat|nested] 7+ messages in thread

[parent not found: <56174390.80804@ucloud.cn>]
* Re: Re: Re: AIO requests may be disordered by Qemu-kvm iothread with disk cache=writethrough, Bug or Feature?
[not found] ` <56174390.80804@ucloud.cn>
@ 2015-10-09 4:44 ` charlie.song
0 siblings, 0 replies; 7+ messages in thread
From: charlie.song @ 2015-10-09 4:44 UTC (permalink / raw)
To: Fam Zheng; +Cc: kvm
At 2015-10-09 12:33:20, "charlie.song" <charlie.song@ucloud.cn> wrote:
>At 2015-10-09 12:16:03, "Fam Zheng" <famz@redhat.com> wrote:
>>On Fri, 10/09 11:25, charlie.song wrote:
>>> At 2015-10-08 23:37:02, "Fam Zheng" <famz@redhat.com> wrote:
>>> >On Thu, 10/08 19:59, charlie.song wrote:
>>> >> Dear KVM Developers:
>>> >> I am Xiang Song from UCloud. We have recently encountered a strange phenomenon involving the Qemu-KVM iothread mechanism.
>>> >> When we use Linux AIO from the guest OS, we find that the iothread mechanism of Qemu-KVM can reorder I/O requests from the guest
>>> >> even when the AIO write requests are issued from a single thread in order. This does not happen on the host OS, however.
>>> >> We are not sure whether this is a feature of the Qemu-KVM iothread mechanism or a bug.
>>> >>
>>> >> The testbed is as follows (the guest disk device cache is configured to writethrough):
>>> >> CPU: Intel(R) Xeon(R) CPU E5-2650
>>> >> QEMU version: 1.5.3
>>> >> Host/Guest kernel: both Linux 4.1.8 and Linux 2.6.32, OS type CentOS 6.5
>>> >> Simplified guest OS qemu command line:
>>> >> /usr/libexec/qemu-kvm -machine rhel6.3.0,accel=kvm,usb=off -cpu kvm64 -smp 8,sockets=8,cores=1,threads=1
>>> >> -drive file=/var/lib/libvirt/images/song-disk.img,if=none,id=drive-virtio-disk0,format=qcow2,serial=UCLOUD_DISK_VDA,cache=writethrough
>>> >> -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:22:d5:52,bus=pci.0,addr=0x4
>>> >
>>> >You mentioned iothread above but it's not in your command line?
>>> I mean the thread pool mechanism used by qemu-kvm to accelerate I/O
>>> processing. This is used by paio_submit (block/raw-posix.c) by default,
>>> with pool->max_threads = 64 as far as I know (qemu-kvm version 1.5.3).
>>
>>The thread pool parallelism may reorder non-overlapping requests, but it
>>shouldn't cause any reordering of overlapping requests like the ones in your
>>I/O pattern. QEMU ensures that.
>>
>>Do you see this with aio=native?
>We see this with both aio=threads and aio=native.
>Are you sure "QEMU ensures the ordering of overlapping requests"?
>I can provide the example code if needed.
If I configure the guest disk I/O with "cache=directsync,aio=native", this phenomenon disappears.
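For reference, the corresponding -drive line would look something like this (same path and ids as the simplified command line above):

-drive file=/var/lib/libvirt/images/song-disk.img,if=none,id=drive-virtio-disk0,format=qcow2,serial=UCLOUD_DISK_VDA,cache=directsync,aio=native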
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: AIO requests may be disordered by Qemu-kvm iothread with disk cache=writethrough, Bug or Feature?
2015-10-08 11:59 AIO requests may be disordered by Qemu-kvm iothread with disk cache=writethrough, Bug or Feature? charlie.song
2015-10-08 15:37 ` Fam Zheng
@ 2015-10-14 9:32 ` Stefan Hajnoczi
1 sibling, 0 replies; 7+ messages in thread
From: Stefan Hajnoczi @ 2015-10-14 9:32 UTC (permalink / raw)
To: charlie.song; +Cc: kvm
On Thu, Oct 08, 2015 at 07:59:56PM +0800, charlie.song wrote:
> When we use Linux AIO from the guest OS, we find that the iothread mechanism of Qemu-KVM can reorder I/O requests from the guest
> even when the AIO write requests are issued from a single thread in order. This does not happen on the host OS, however.
I think you are describing a situation where a guest submits multiple
overlapping I/O requests at the same time.
virtio-blk does not guarantee a specific request ordering, so the
application needs to wait for request completion if ordering matters.
io_submit(2) also does not make guarantees about ordering.
Stefan
^ permalink raw reply [flat|nested] 7+ messages in thread
end of thread, other threads:[~2015-10-14 13:51 UTC | newest]
Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-10-08 11:59 AIO requests may be disordered by Qemu-kvm iothread with disk cache=writethrough, Bug or Feature? charlie.song
2015-10-08 15:37 ` Fam Zheng
2015-10-09 3:25 ` charlie.song
2015-10-09 4:16 ` Fam Zheng
2015-10-09 4:33 ` charlie.song
[not found] ` <56174390.80804@ucloud.cn>
2015-10-09 4:44 ` charlie.song
2015-10-14 9:32 ` Stefan Hajnoczi