From: Hanna Czenczek <hreitz@redhat.com>
To: Fiona Ebner <f.ebner@proxmox.com>,
	QEMU Developers <qemu-devel@nongnu.org>
Cc: "open list:Network Block Dev..." <qemu-block@nongnu.org>,
	Thomas Lamprecht <t.lamprecht@proxmox.com>,
	John Snow <jsnow@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: Deadlock with ide_issue_trim and draining
Date: Wed, 8 Mar 2023 19:03:58 +0100
Message-ID: <2c5b2bff-2ab2-e6e4-b324-19c2e11d9c54@redhat.com>
In-Reply-To: <97638730-0dfa-918b-3c66-7874171b3e5c@redhat.com>

[-- Attachment #1: Type: text/plain, Size: 4157 bytes --]

On 07.03.23 14:44, Hanna Czenczek wrote:
> On 07.03.23 13:22, Fiona Ebner wrote:
>> Hi,
>> I suspect that commit 7e5cdb345f ("ide: Increment BB in-flight
>> counter for TRIM BH") introduced an issue in combination with draining.
>>
>> From a debug session on a customer's machine I gathered the following
>> information:
>> * The QEMU process hangs in aio_poll called during draining and doesn't
>> progress.
>> * The in_flight counter for the BlockDriverState is 0 and for the
>> BlockBackend it is 1.
>> * There is a blk_aio_pdiscard_entry request in the BlockBackend's
>> queued_requests.
>> * The drive is attached via ahci.
>>
>> I suspect that something like the following happened:
>>
>> 1. ide_issue_trim is called, and increments the in_flight counter.
>> 2. ide_issue_trim_cb calls blk_aio_pdiscard.
>> 3. somebody else starts draining.
>> 4. ide_issue_trim_cb is called as the completion callback for
>> blk_aio_pdiscard.
>> 5. ide_issue_trim_cb issues yet another blk_aio_pdiscard request.
>> 6. The request is added to the wait queue via blk_wait_while_drained,
>> because draining has been started.
>> 7. Nobody ever decrements the in_flight counter and draining can't 
>> finish.
>
> Sounds about right.
>
>> The issue occurs very rarely and is difficult to reproduce, but with the
>> help of GDB, I'm able to do it rather reliably:
>> 1. Use GDB to break on blk_aio_pdiscard.
>> 2. Run mkfs.ext4 on a huge disk in the guest.
>> 3. Issue a drive-backup QMP command after landing on the breakpoint.
>> 4. Continue a few times in GDB.
>> 5. After that I can observe the same situation as described above.
>>
>> I'd be happy about suggestions for how to fix it. Unfortunately, I don't
>> see a clear-cut way at the moment. The only idea I have right now is to
>> change the code to issue all discard requests at the same time, but I
>> fear there might be pitfalls with that?
>
> The point of 7e5cdb345f was that we need any in-flight count to 
> accompany a set s->bus->dma->aiocb.  While blk_aio_pdiscard() is 
> happening, we don’t necessarily need another count.  But we do need it 
> while there is no blk_aio_pdiscard().
>
> ide_issue_trim_cb() returns in two cases (and, recursively through its 
> callers, leaves s->bus->dma->aiocb set):
> 1. After calling blk_aio_pdiscard(), which will keep an in-flight count,
> 2. After calling replay_bh_schedule_event() (i.e. qemu_bh_schedule()), 
> which does not keep an in-flight count.
>
> Perhaps we just need to move the blk_inc_in_flight() above the 
> replay_bh_schedule_event() call?
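
To make that concrete, the idea is roughly the following in 
ide_issue_trim_cb()’s completion path (a sketch only, not the exact 
patch):

    done:
        iocb->aiocb = NULL;
        if (iocb->bh) {
            /* Paired with a decrement in ide_trim_bh_cb(), so the
             * BlockBackend’s in-flight counter stays elevated while
             * the BH is scheduled but has not run yet */
            blk_inc_in_flight(s->blk);
            replay_bh_schedule_event(iocb->bh);
        }

(with the increment that 7e5cdb345f put into ide_issue_trim() dropped, 
because blk_aio_pdiscard() keeps its own in-flight count while it is 
running).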

While writing the commit message for this, I noticed it isn’t quite 
right: ide_cancel_dma_sync() drains s->blk only once, so once the 
in-flight counter goes to 0, s->blk is considered drained and 
ide_cancel_dma_sync() will go on to assert that s->bus->dma->aiocb is 
now NULL.  However, if we do have a blk_aio_pdiscard() in flight, the 
drain will wait only for that one to complete, not for the whole trim 
operation to complete, i.e. the next discard or ide_trim_bh_cb() will be 
scheduled, but neither will necessarily be run before blk_drain() returns.
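
(For reference, ide_cancel_dma_sync() does roughly this, abbreviated:

    void ide_cancel_dma_sync(IDEState *s)
    {
        ...
        if (s->bus->dma->aiocb) {
            blk_drain(s->blk);
            assert(s->bus->dma->aiocb == NULL);
        }
    }

so a single blk_drain() has to settle the whole trim operation, or the 
assertion fails.)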

I’ve attached a reproducer that issues two trim requests.  Before 
7e5cdb345f, it makes qemu crash because the assertion fails (one or two 
of the blk_aio_pdiscard()s are drained, but the trim isn’t settled yet).
After 7e5cdb345f, qemu hangs because of what you describe (the second 
blk_aio_pdiscard() is enqueued, so the drain can’t make progress, 
resulting in a deadlock).  With my proposed fix, qemu crashes again.

(Reproducer is run like this:
$ qemu-system-x86_64 -drive if=ide,file=/tmp/test.bin,format=raw
)

What comes to my mind is either what you’ve proposed initially (issuing 
all discards simultaneously), or to still use my proposed fix, but also 
have ide_cancel_dma_sync() run blk_drain() in a loop until 
s->bus->dma->aiocb becomes NULL.  (Kind of like my original patch 
(https://lists.nongnu.org/archive/html/qemu-block/2022-01/msg00024.html), 
only that we can still use blk_drain() instead of aio_poll() because we 
increment the in-flight counter while the completion BH is scheduled.)
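
That loop would look something like this (sketch):

    while (s->bus->dma->aiocb) {
        blk_drain(s->blk);
    }

The idea being that every drained section lets the trim operation make 
some progress (the BH or the queued discard runs once the drain ends), 
so s->bus->dma->aiocb should eventually become NULL.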

Hanna

[-- Attachment #2: test.bin --]
[-- Type: application/octet-stream, Size: 1024 bytes --]

[-- Attachment #3: test.asm --]
[-- Type: text/plain, Size: 1637 bytes --]

format binary
use16

org 0x7c00

DMA_BUF = 0x7e00

cld

xor     ax, ax
mov     ds, ax
mov     es, ax

; clear DMA buffer
mov     di, DMA_BUF
xor     ax, ax
mov     cx, 256
rep stosw

; two TRIM requests (both are the same: one sector, starting at sector index 1)
mov     di, DMA_BUF
mov     dword [di+ 0], 0x00000001 ; LBA[31:0] = 1
mov     dword [di+ 4], 0x00010000 ; LBA[47:32] = 0, count (bits 48-63) = 1
mov     dword [di+ 8], 0x00000001 ; second, identical range entry
mov     dword [di+12], 0x00010000

; find IDE PCI device
mov     ax, 0xb102 ; PCI BIOS: find PCI device
mov     dx, 0x8086 ; vendor ID: Intel
mov     cx, 0x7010 ; device ID: PIIX3 IDE
xor     si, si     ; first device instance
int     0x1a

; bx has PCI address
push    bx

; enable BM+MEM+IO

mov     di, 0x04 ; command/status
mov     ax, 0xb10a ; read config dword
int     0x1a

pop     bx
push    bx

or      cl, 0x7 ; BM+MEM+IO
mov     di, 0x04
mov     ax, 0xb10d ; write config dword
int     0x1a

pop     bx
push    bx

; read BAR4 (DMA I/O space)

mov     di, 0x20 ; bar4
mov     ax, 0xb10a
int     0x1a

and     cx, 0xfffc ; DMA I/O base
push    cx

mov     dx, cx

; set up DMA

xor     al, al ; command register: 0 (DMA not started)
out     dx, al

mov     al, 0x04 ; clear pending interrupts
add     dx, 2
out     dx, al

mov     eax, prdt
add     dx, 2 ; PRDT address register (base + 4)
out     dx, eax

; send TRIM command

mov     dx, 0x1f0
mov     si, dsm_trim_cmd
mov     cx, 8
out_loop:
lodsb
out     dx, al
inc     dx
loop    out_loop

; start DMA transfer

pop     dx ; DMA I/O base
mov     al, 0x01 ; command register: start bit
out     dx, al

; immediately reset device, cancelling ongoing TRIM

mov     dx, 0x3f6
mov     al, 0x04
out     dx, al

cli
hlt


dsm_trim_cmd:
; one byte each for ports 0x1f0..0x1f7: data, features = 0x01 (TRIM),
; sector count = 1, LBA = 0/0/0, device = 0xc0,
; command = 0x06 (DATA SET MANAGEMENT)
db 0x00, 0x01, 0x01, 0x00, 0x00, 0x00, 0xc0, 0x06

pciaddr:
dw ?

align(8)
prdt:
dd DMA_BUF ; region physical address
dd 512 or 0x80000000 ; byte count 512; bit 31 set: last PRDT entry

times 510-($-$$) db 0
dw 0xaa55

times 512 db 0

Thread overview: 6+ messages
2023-03-07 12:22 Deadlock with ide_issue_trim and draining Fiona Ebner
2023-03-07 13:44 ` Hanna Czenczek
2023-03-07 14:27   ` Hanna Czenczek
2023-03-08 10:35     ` Fiona Ebner
2023-03-08 15:02       ` Hanna Czenczek
2023-03-08 18:03   ` Hanna Czenczek [this message]
