From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 11 Nov 2025 14:40:01 +0100
From: Christoph Hellwig
To: Keith Busch
Cc: Yu Kuai, Christoph Hellwig, Matthew Wilcox, Keith Busch,
	linux-nvme@lists.infradead.org, linux-block@vger.kernel.org,
	axboe@kernel.dk
Subject: Re: [PATCHv5 1/2] block: accumulate memory segment gaps per bio
Message-ID: <20251111134001.GA708@lst.de>
References: <20251014150456.2219261-1-kbusch@meta.com>
	<20251014150456.2219261-2-kbusch@meta.com>
	<20251111071439.GA4240@lst.de>
	<024631dc-3c65-49a8-a97a-f9110fd00e9a@fnnas.com>
	<20251111093903.GB14438@lst.de>
List-Id: linux-nvme@lists.infradead.org

On Tue, Nov 11, 2025 at 08:25:38AM -0500, Keith Busch wrote:
> Ah, so we're merging a discard for a device that doesn't support
> vectored discard. I think we still want to be able to front/back merge
> such requests, though.

Yes, but purely based on bi_sector/bi_size, not based on the payload.