From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 21 Sep 2021 18:49:15 +0900
From: Naoya Horiguchi
To: Yang Shi
Cc: naoya.horiguchi@nec.com, hughd@google.com, kirill.shutemov@linux.intel.com,
 willy@infradead.org, osalvador@suse.de, akpm@linux-foundation.org,
 linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 3/4] mm: shmem: don't truncate page if memory failure happens
Message-ID: <20210921094915.GA817765@u2004>
References: <20210914183718.4236-1-shy828301@gmail.com>
 <20210914183718.4236-4-shy828301@gmail.com>
In-Reply-To: <20210914183718.4236-4-shy828301@gmail.com>

On Tue, Sep 14, 2021 at 11:37:17AM -0700, Yang Shi wrote:
> The current behavior of memory failure
> is to truncate the page cache
> regardless of dirty or clean. If the page is dirty the later access
> will get the obsolete data from disk without any notification to the
> users. This may cause silent data loss. It is even worse for shmem
> since shmem is in-memory filesystem, truncating page cache means
> discarding data blocks. The later read would return all zero.
>
> The right approach is to keep the corrupted page in page cache, any
> later access would return error for syscalls or SIGBUS for page fault,
> until the file is truncated, hole punched or removed. The regular
> storage backed filesystems would be more complicated so this patch
> is focused on shmem. This also unblock the support for soft
> offlining shmem THP.
>
> Signed-off-by: Yang Shi
> ---
>  mm/memory-failure.c |  3 ++-
>  mm/shmem.c          | 25 +++++++++++++++++++++++--
>  2 files changed, 25 insertions(+), 3 deletions(-)
>
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 54879c339024..3e06cb9d5121 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1101,7 +1101,8 @@ static int page_action(struct page_state *ps, struct page *p,
>  	result = ps->action(p, pfn);
>
>  	count = page_count(p) - 1;
> -	if (ps->action == me_swapcache_dirty && result == MF_DELAYED)
> +	if ((ps->action == me_swapcache_dirty && result == MF_DELAYED) ||
> +	    (ps->action == me_pagecache_dirty && result == MF_FAILED))

This new condition also affects dirty page cache on other filesystems: for
some unmap-failure cases we would now miss the "still referenced" message
(although that's not so critical). So checking the filesystem type (for
example with shmem_mapping()) might be helpful? And if we ever refactor to
pass *ps into each ps->action() callback, this refcount check could be
moved into just the handlers that need it.
I don't think the refcount check is always needed either, for example in
the MF_MSG_KERNEL and MF_MSG_UNKNOWN cases (because no one knows the
expected refcount for those).

>  	count--;
>  	if (count > 0) {
>  		pr_err("Memory failure: %#lx: %s still referenced by %d users\n",
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 88742953532c..ec33f4f7173d 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -2456,6 +2456,7 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
>  	struct inode *inode = mapping->host;
>  	struct shmem_inode_info *info = SHMEM_I(inode);
>  	pgoff_t index = pos >> PAGE_SHIFT;
> +	int ret = 0;
>
>  	/* i_rwsem is held by caller */
>  	if (unlikely(info->seals & (F_SEAL_GROW |
> @@ -2466,7 +2467,19 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
>  		return -EPERM;
>  	}
>
> -	return shmem_getpage(inode, index, pagep, SGP_WRITE);
> +	ret = shmem_getpage(inode, index, pagep, SGP_WRITE);
> +
> +	if (!ret) {

Maybe this "!ret" check is not necessary, because *pagep is set non-NULL
only when ret is 0, so the *pagep check below is enough. Dropping it
would save one indent level.

> +		if (*pagep) {
> +			if (PageHWPoison(*pagep)) {
> +				unlock_page(*pagep);
> +				put_page(*pagep);
> +				ret = -EIO;
> +			}
> +		}
> +	}
> +
> +	return ret;
>  }
>
>  static int
> @@ -2555,6 +2568,11 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
>  			unlock_page(page);
>  		}
>
> +		if (page && PageHWPoison(page)) {
> +			error = -EIO;
> +			break;
> +		}
> +
>  		/*
>  		 * We must evaluate after, since reads (unlike writes)
>  		 * are called without i_rwsem protection against truncate
> @@ -3782,7 +3800,6 @@ const struct address_space_operations shmem_aops = {
>  #ifdef CONFIG_MIGRATION
>  	.migratepage = migrate_page,
>  #endif
> -	.error_remove_page = generic_error_remove_page,

This change makes truncate_error_page() call invalidate_inode_page(), and
in my testing that fails with a "Failed to invalidate" message, so
memory_failure() finally returns -EBUSY.
I'm not sure that's the expected outcome, because this patchset makes
keeping error pages in the page cache the proper error handling. Maybe
you can avoid it by defining an .error_remove_page in shmem_aops that
simply returns 0.

>  };
>  EXPORT_SYMBOL(shmem_aops);
>
> @@ -4193,6 +4210,10 @@ struct page *shmem_read_mapping_page_gfp(struct address_space *mapping,
>  		page = ERR_PTR(error);
>  	else
>  		unlock_page(page);
> +
> +	if (PageHWPoison(page))
> +		page = NULL;
> +
>  	return page;

One more comment ...

- I guess you added PageHWPoison() checks after some call sites of
  shmem_getpage() and shmem_getpage_gfp(), but they don't seem to cover
  all of them. For example, can mcontinue_atomic_pte() in
  mm/userfaultfd.c properly handle a PageHWPoison page?

I'm still testing in more detail, but in my current understanding this
patch looks promising to me. Thank you for your effort.

Thanks,
Naoya Horiguchi