Subject: Re: [PATCH for-8.0] ide: Fix manual in-flight count for TRIM BH
Date: Mon, 13 Mar 2023 13:29:25 +0100
From: Fiona Ebner <f.ebner@proxmox.com>
To: Paolo Bonzini, Kevin Wolf
Cc: Hanna Czenczek, qemu-block@nongnu.org, qemu-devel@nongnu.org, John Snow, Thomas Lamprecht
Message-ID: <9bc7a6d8-744e-9593-1de0-88f19a1e1bc1@proxmox.com>
References: <20230309114430.33684-1-hreitz@redhat.com> <88de2e68-61e2-9397-b202-d611247002ba@redhat.com> <7ca18cb4-eeb1-4cba-feea-90f28fb9c2fc@redhat.com> <3e695f64-13bb-1311-6cd6-09bffc312873@redhat.com>

On 10.03.23 16:13, Paolo Bonzini wrote:
> On Fri, Mar 10, 2023 at 3:25 PM Kevin Wolf wrote:
>>> 1. The TRIM operation should be completed on the IDE level before
>>>    draining ends.
>>> 2. Block layer requests issued after draining has begun are queued.
>>>
>>> To me, the conclusion seems to be:
>>> Issue all block layer requests belonging to the IDE TRIM operation up
>>> front.
>>>
>>> The other alternative I see is to break assumption 2, introduce a way
>>> to not queue certain requests while drained, and use it for the
>>> recursive requests issued by ide_issue_trim_cb. But not for the
>>> initial one, as that would defeat the purpose of request queuing. Of
>>> course, this can't be done if QEMU relies on the assumption in other
>>> places already.
>>
>> I feel like this should be allowed because if anyone has exclusive
>> access in this scenario, it's IDE, so it should be able to bypass the
>> queuing.
>> Of course, the queuing is still needed if someone else drained
>> the backend, so we can't just make TRIM bypass it in general. And if
>> you make it conditional on IDE being in blk_drain(), it already starts
>> to become ugly again...
>>
>> So maybe the while loop is unavoidable.
>>
>> Hmm... But could ide_cancel_dma_sync() just directly use
>> AIO_WAIT_WHILE(s->bus->dma->aiocb) instead of using blk_drain()?
>
> While that should work, it would not fix other uses of
> bdrv_drain_all(), for example in softmmu/cpus.c. Stopping the device
> model relies on those to run *until the device model has finished
> submitting requests*.
>
> So I still think that this bug is a symptom of a problem in the design
> of request queuing.
>
> In fact, shouldn't request queuing be enabled at the _end_ of
> bdrv_drained_begin() (once the BlockBackend has reached a quiescent
> state on its own terms), rather than at the beginning (which leads to
> deadlocks like this one)?
>
> blk->quiesce_counter becomes just a nesting counter for
> drained_begin/end, with no uses outside, and blk_wait_while_drained
> uses a new counter. Then you have something like this in
> blk_root_drained_poll():
>
>     if (blk->dev_ops && blk->dev_ops->drained_poll) {
>         busy = blk->dev_ops->drained_poll(blk->dev_opaque);
>     }
>     busy |= !!blk->in_flight;
>     if (!busy) {
>         qatomic_set(&blk->queue_requests, true);
>     }
>     return busy;
>
> And there's no need to touch IDE at all.

Couldn't this lead to scenarios where a busy or malicious guest, which
continues to submit new requests, slows down draining or even prevents
it from finishing?

Best Regards,
Fiona
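
P.S.: To make sure I understand the proposal, here is a minimal sketch
(only my interpretation, not tested) of what the consumer side in
block/block-backend.c might then look like. It assumes the new flag is
called blk->queue_requests, as in the snippet above, and reuses the
existing blk->queued_requests CoQueue and the in-flight helpers:

    /* To be called between exactly one pair of blk_inc/dec_in_flight() */
    static void coroutine_fn blk_wait_while_drained(BlockBackend *blk)
    {
        assert(blk->in_flight > 0);

        /*
         * Queue the request only once blk_root_drained_poll() has seen
         * the backend quiescent and set queue_requests, instead of
         * keying off blk->quiesce_counter at drained_begin time.
         */
        if (qatomic_read(&blk->queue_requests) &&
            !blk->disable_request_queuing) {
            blk_dec_in_flight(blk);
            qemu_co_queue_wait(&blk->queued_requests, NULL);
            blk_inc_in_flight(blk);
        }
    }

blk_root_drained_end() would then presumably clear queue_requests again
before restarting the coroutines waiting in blk->queued_requests.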