Date: Tue, 15 Mar 2022 17:08:47 -0400
From: Zygo Blaxell
To: Remi Gauvin
Cc: linux-btrfs
Subject: Re: Btrfs autodefrag wrote 5TB in one day to a 0.5TB SSD without a measurable benefit

On Tue, Mar 15, 2022 at 03:22:43PM -0400, Remi Gauvin wrote:
> On 2022-03-15 2:51 p.m., Zygo Blaxell wrote:
>
> > The main advantage of larger extents is smaller metadata, and it doesn't
> > matter very much whether it's SSD or HDD.  Adjacent extents will be in
> > the same metadata page, so not much is lost with 256K extents even on
> > HDD, as long as they are physically allocated adjacent to each other.
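To put rough numbers on the metadata point: btrfs caps compressed extents
at 128K, while uncompressed extents can reach 128M, so forcing compression
multiplies the number of extent records a file needs.  A back-of-envelope
sketch (file size is illustrative):

```python
# Minimum number of extent items needed to cover a file at a given
# maximum extent size.  Sketch only; real layouts can be worse due to
# fragmentation, never better.

def extent_count(file_size: int, max_extent: int) -> int:
    """Ceiling of file_size / max_extent."""
    return -(-file_size // max_extent)

GIB = 1 << 30
print(extent_count(GIB, 128 << 10))  # 128K compressed cap -> 8192 extents
print(extent_count(GIB, 128 << 20))  # 128M uncompressed cap -> 8 extents
```

That's a three-orders-of-magnitude difference in extent items for the same
data, which is where the metadata growth comes from.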
>
> When I tried enabling compress-force on my HDD storage, it *killed*
> sequential read performance.  I could write a file out at over
> 100MB/s... but trying to read that same file sequentially would thrash
> the drives, with less than 5MB/s actually being read.
>
> No such problems were observed on SSD storage.

I've seen a similar effect.  I wonder if the small extents are breaking
readahead or something.

> I was under the impression this problem was caused by trying to read
> files with the 127k extents, which, for whatever reason, could not be
> done without excessive seeking.
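A crude seek-cost model is consistent with that: if some fraction of the
small extents each costs a head movement, HDD throughput collapses while
an SSD (with effectively zero seek cost) is unaffected.  A sketch, with
assumed numbers (10 ms seek, 100 MB/s streaming rate), not measurements:

```python
# Model: each 128K extent costs its transfer time plus, for some
# fraction of extents, a full disk seek.  All constants are assumptions
# chosen to be typical for a 7200 rpm HDD, not measured values.

SEEK_S = 0.010           # assumed average seek time, seconds
STREAM_BPS = 100e6       # assumed streaming rate, bytes/second
EXTENT = 128 * 1024      # btrfs compressed extent size cap, bytes

def effective_mbps(seek_fraction: float) -> float:
    """Throughput when seek_fraction of extents each need a seek."""
    per_extent = EXTENT / STREAM_BPS + seek_fraction * SEEK_S
    return EXTENT / per_extent / 1e6

print(round(effective_mbps(0.0), 1))   # no seeking: 100.0 MB/s
print(round(effective_mbps(1.0), 1))   # seek per extent: ~11.6 MB/s
```

Even this simple model drops an order of magnitude; the observed ~5MB/s
would imply a higher per-extent cost still (rotational latency, plus
readahead giving up), which fits the readahead theory above.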