Date: Wed, 1 Jun 2022 22:26:25 -0700
From: Christoph Hellwig
To: Eric Wheeler
Cc: Adriano Silva, Keith Busch, Matthias Ferdinand, Bcache Linux, Coly Li,
	Christoph Hellwig, linux-block@vger.kernel.org
Subject: Re: [RFC] Add sysctl option to drop disk flushes in bcache?
	(was: Bcache in writes direct with fsync)

On Wed, Jun 01, 2022 at 02:11:35PM -0700, Eric Wheeler wrote:
> It looks like the NVMe works well except in 512b situations. It's
> interesting that --force-unit-access doesn't increase the latency:
> perhaps the NVMe ignores sync flags since it knows it has a
> non-volatile cache.

NVMe (and other interface) SSDs generally come in two flavors:

 - consumer ones have a volatile write cache, and FUA/Flush has a lot
   of overhead
 - enterprise ones with the grossly misnamed "power loss protection"
   feature have a non-volatile write cache, and FUA/Flush has no
   overhead at all

If this is an enterprise drive the behavior is expected.  If on the
other hand it is a cheap consumer drive, chances are it just lies,
which there have been a few instances of.
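
(Editorial aside, not part of the original reply: one quick way to see
which camp a drive falls into is to time synchronous writes from
userspace.  The minimal sketch below times pwrite()+fdatasync() pairs
against a scratch file on a filesystem backed by the drive under test;
the file path and iteration count are arbitrary examples.  On a drive
with a non-volatile cache the per-sync latency should barely differ
from a plain buffered write, while a consumer drive with a volatile
cache usually shows a much larger flush penalty.)

/*
 * Illustrative flush-latency probe.  Each iteration rewrites one 4 KiB
 * block and calls fdatasync(), which forces a Flush/FUA down to the
 * device, then reports the average latency per write+sync pair.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "testfile";
	int fd = open(path, O_WRONLY | O_CREAT, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	char buf[4096];
	memset(buf, 0xab, sizeof(buf));

	struct timespec t0, t1;
	double total_us = 0;
	int iters = 100;

	for (int i = 0; i < iters; i++) {
		clock_gettime(CLOCK_MONOTONIC, &t0);
		if (pwrite(fd, buf, sizeof(buf), 0) != sizeof(buf)) {
			perror("pwrite");
			return 1;
		}
		if (fdatasync(fd)) {	/* flush the device write cache */
			perror("fdatasync");
			return 1;
		}
		clock_gettime(CLOCK_MONOTONIC, &t1);
		total_us += (t1.tv_sec - t0.tv_sec) * 1e6 +
			    (t1.tv_nsec - t0.tv_nsec) / 1e3;
	}

	printf("avg write+fdatasync latency: %.1f us\n", total_us / iters);
	close(fd);
	return 0;
}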