Date: Mon, 6 Jan 2025 04:07:21 +0000
From: Matthew Wilcox
To: Baolin Wang
Cc: akpm@linux-foundation.org, hughd@google.com, david@redhat.com,
	wangkefeng.wang@huawei.com, kasong@tencent.com,
	ying.huang@linux.alibaba.com, 21cnbao@gmail.com,
	ryan.roberts@arm.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] mm: shmem: skip swapcache for swapin of synchronous swap device
References: <04997e54c276eff40a6119a90d36a4e71aade89c.1735806921.git.baolin.wang@linux.alibaba.com>
	<8344980d-4c22-4694-9a76-2e5a7ada50cb@linux.alibaba.com>
In-Reply-To: <8344980d-4c22-4694-9a76-2e5a7ada50cb@linux.alibaba.com>

On Mon, Jan 06, 2025 at 11:46:04AM +0800, Baolin Wang wrote:
> On 2025/1/2 21:10, Matthew Wilcox wrote:
> > On Thu, Jan 02, 2025 at 04:40:17PM +0800, Baolin Wang wrote:
> > > With fast swap devices (such as zram), swapin latency is crucial to
> > > applications. For shmem swapin, similar to anonymous memory swapin,
> > > we can skip the swapcache operation to improve swapin latency.
> >
> > OK, but now we have more complexity.  Why can't we always skip the
> > swapcache on swapin?
>
> Skipping the swapcache is used to swap in shmem large folios, avoiding
> the large folios being split.
> Meanwhile, since the IO latency of synchronous swap devices is
> relatively small, it won't cause an IO latency amplification issue.
>
> But for async swap devices, if we swap in the large folio in one go, I
> am afraid the IO latency could be amplified. And I remember we still
> haven't reached agreement here [1], so let's go step by step and start
> with the synchronous swap devices first.

Regardless of whether we choose to swap in an order-0 or a large folio,
my point is that we should always do it to the page cache rather than
the swap cache.