From mboxrd@z Thu Jan 1 00:00:00 1970
From: David Howells
To: Christian Brauner
Cc: David Howells, Paulo Alcantara, netfs@lists.linux.dev,
	linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org,
	ceph-devel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, Matthew Wilcox
Subject: [PATCH v4 10/22] netfs: Defer the emission of trace_netfs_folio()
Date: Mon, 27 Apr 2026 16:29:37 +0100
Message-ID: <20260427152953.180038-11-dhowells@redhat.com>
In-Reply-To: <20260427152953.180038-1-dhowells@redhat.com>
References: <20260427152953.180038-1-dhowells@redhat.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Change netfs_perform_write() to keep the netfs_folio trace value in a
variable and emit it only at the common exit point, making it easier to
choose the value displayed.  This is a prerequisite for a subsequent patch.

Closes: https://sashiko.dev/#/patchset/20260414082004.3756080-1-dhowells%40redhat.com
Signed-off-by: David Howells
cc: Paulo Alcantara
cc: Matthew Wilcox
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/netfs/buffered_write.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c
index fc94eb1ef27b..7ac128d0b4e5 100644
--- a/fs/netfs/buffered_write.c
+++ b/fs/netfs/buffered_write.c
@@ -149,6 +149,7 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
 	}
 
 	do {
+		enum netfs_folio_trace trace;
 		struct netfs_folio *finfo;
 		struct netfs_group *group;
 		unsigned long long fpos;
@@ -222,7 +223,7 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
 			if (unlikely(copied == 0))
 				goto copy_failed;
 			netfs_set_group(folio, netfs_group);
-			trace_netfs_folio(folio, netfs_folio_is_uptodate);
+			trace = netfs_folio_is_uptodate;
 			goto copied;
 		}
 
@@ -238,7 +239,7 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
 			folio_zero_segment(folio, offset + copied, flen);
 			__netfs_set_group(folio, netfs_group);
 			folio_mark_uptodate(folio);
-			trace_netfs_folio(folio, netfs_modify_and_clear);
+			trace = netfs_modify_and_clear;
 			goto copied;
 		}
 
@@ -256,7 +257,7 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
 			}
 			__netfs_set_group(folio, netfs_group);
 			folio_mark_uptodate(folio);
-			trace_netfs_folio(folio, netfs_whole_folio_modify);
+			trace = netfs_whole_folio_modify;
 			goto copied;
 		}
 
@@ -283,7 +284,7 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
 			if (unlikely(copied == 0))
 				goto copy_failed;
 			netfs_set_group(folio, netfs_group);
-			trace_netfs_folio(folio, netfs_just_prefetch);
+			trace = netfs_just_prefetch;
 			goto copied;
 		}
 
@@ -297,7 +298,7 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
 		if (offset == 0 && copied == flen) {
 			__netfs_set_group(folio, netfs_group);
 			folio_mark_uptodate(folio);
-			trace_netfs_folio(folio, netfs_streaming_filled_page);
+			trace = netfs_streaming_filled_page;
 			goto copied;
 		}
 
@@ -312,7 +313,7 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
 			finfo->dirty_len = copied;
 			folio_attach_private(folio, (void *)((unsigned long)finfo |
							     NETFS_FOLIO_INFO));
-			trace_netfs_folio(folio, netfs_streaming_write);
+			trace = netfs_streaming_write;
 			goto copied;
 		}
 
@@ -332,9 +333,9 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
 				folio_detach_private(folio);
 				folio_mark_uptodate(folio);
 				kfree(finfo);
-				trace_netfs_folio(folio, netfs_streaming_cont_filled_page);
+				trace = netfs_streaming_cont_filled_page;
 			} else {
-				trace_netfs_folio(folio, netfs_streaming_write_cont);
+				trace = netfs_streaming_write_cont;
 			}
 			goto copied;
 		}
@@ -350,6 +351,7 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
 		continue;
 
 	copied:
+		trace_netfs_folio(folio, trace);
 		flush_dcache_folio(folio);
 
 		/* Update the inode size if we moved the EOF marker */