From: Jiaqi Yan
Date: Tue, 5 Apr 2022 13:46:10 -0700
Subject: Re: [RFC v1 2/2] mm: khugepaged: recover from poisoned file-backed memory
To: Yang Shi
Cc: Tony Luck, HORIGUCHI NAOYA (堀口 直也), "Kirill A. Shutemov",
 Miaohe Lin, Jue Wang, Linux MM
References: <20220323232929.3035443-1-jiaqiyan@google.com>
 <20220323232929.3035443-3-jiaqiyan@google.com>

On Mon, Mar 28, 2022 at 4:37 PM Yang Shi wrote:
>
> On Wed, Mar 23, 2022 at 4:29 PM Jiaqi Yan wrote:
> >
> > Make collapse_file roll back when copying pages failed.
> > More concretely:
> > * extract copy operations into a separate loop
> > * postpone the updates for nr_none until both scan and copy succeeded
> > * postpone joining small xarray entries until both scan and copy
> >   succeeded
> > * as for update operations to NR_XXX_THPS:
> >   * for a SHMEM file, postpone until both scan and copy succeeded
> >   * for other files, roll back if scan succeeded but copy failed
> >
> > Signed-off-by: Jiaqi Yan
> > ---
> >  include/linux/highmem.h | 18 ++++++++++
> >  mm/khugepaged.c         | 75 +++++++++++++++++++++++++++--------------
> >  2 files changed, 67 insertions(+), 26 deletions(-)
> >
> > diff --git a/include/linux/highmem.h b/include/linux/highmem.h
> > index 15d0aa4d349c..fc5aa221bdb5 100644
> > --- a/include/linux/highmem.h
> > +++ b/include/linux/highmem.h
> > @@ -315,6 +315,24 @@ static inline void copy_highpage(struct page *to, struct page *from)
> >         kunmap_local(vfrom);
> >  }
> >
> > +/*
> > + * Machine check exception handled version of copy_highpage.
> > + * Return true if copying page content failed; otherwise false.
> > + */
> > +static inline bool copy_highpage_mc(struct page *to, struct page *from)
> > +{
> > +       char *vfrom, *vto;
> > +       unsigned long ret;
> > +
> > +       vfrom = kmap_local_page(from);
> > +       vto = kmap_local_page(to);
> > +       ret = copy_mc_to_kernel(vto, vfrom, PAGE_SIZE);
> > +       kunmap_local(vto);
> > +       kunmap_local(vfrom);
> > +
> > +       return ret > 0;
> > +}
> > +
> >  #endif
> >
> >  static inline void memcpy_page(struct page *dst_page, size_t dst_off,
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index 84ed177f56ff..ed2b1cd4bbc6 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -1708,12 +1708,13 @@ static void collapse_file(struct mm_struct *mm,
> >  {
> >         struct address_space *mapping = file->f_mapping;
> >         gfp_t gfp;
> > -       struct page *new_page;
> > +       struct page *new_page, *page, *tmp;
>
> It seems you removed the "struct page *page" from "if (result ==
> SCAN_SUCCEED)" but kept the "struct page *page" under "for (index =
> start; index < end; index++)". I think the "struct page *page" in the
> for loop could be removed too.

I agree. It is removed in the next version.
> >         pgoff_t index, end = start + HPAGE_PMD_NR;
> >         LIST_HEAD(pagelist);
> >         XA_STATE_ORDER(xas, &mapping->i_pages, start, HPAGE_PMD_ORDER);
> >         int nr_none = 0, result = SCAN_SUCCEED;
> >         bool is_shmem = shmem_file(file);
> > +       bool copy_failed = false;
> >         int nr;
> >
> >         VM_BUG_ON(!IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && !is_shmem);
> > @@ -1936,9 +1937,7 @@ static void collapse_file(struct mm_struct *mm,
> >         }
> >         nr = thp_nr_pages(new_page);
> >
> > -       if (is_shmem)
> > -               __mod_lruvec_page_state(new_page, NR_SHMEM_THPS, nr);
> > -       else {
> > +       if (!is_shmem) {
> >                 __mod_lruvec_page_state(new_page, NR_FILE_THPS, nr);
> >                 filemap_nr_thps_inc(mapping);
> >                 /*
> > @@ -1956,34 +1955,39 @@ static void collapse_file(struct mm_struct *mm,
> >                 }
> >         }
> >
> > -       if (nr_none) {
> > -               __mod_lruvec_page_state(new_page, NR_FILE_PAGES, nr_none);
> > -               if (is_shmem)
> > -                       __mod_lruvec_page_state(new_page, NR_SHMEM, nr_none);
> > -       }
> > -
> > -       /* Join all the small entries into a single multi-index entry */
> > -       xas_set_order(&xas, start, HPAGE_PMD_ORDER);
> > -       xas_store(&xas, new_page);
> >  xa_locked:
> >         xas_unlock_irq(&xas);
> >  xa_unlocked:
> >
> >         if (result == SCAN_SUCCEED) {
> > -               struct page *page, *tmp;
> > -
> >                 /*
> >                  * Replacing old pages with new one has succeeded, now we
> > -                * need to copy the content and free the old pages.
> > +                * attempt to copy the contents.
> >                  */
> >                 index = start;
> > -               list_for_each_entry_safe(page, tmp, &pagelist, lru) {
> > +               list_for_each_entry(page, &pagelist, lru) {
> >                         while (index < page->index) {
> >                                 clear_highpage(new_page + (index % HPAGE_PMD_NR));
> >                                 index++;
> >                         }
> > -                       copy_highpage(new_page + (page->index % HPAGE_PMD_NR),
> > -                                     page);
> > +                       if (copy_highpage_mc(new_page + (page->index % HPAGE_PMD_NR), page)) {
> > +                               copy_failed = true;
> > +                               break;
> > +                       }
> > +                       index++;
> > +               }
> > +               while (!copy_failed && index < end) {
> > +                       clear_highpage(new_page + (page->index % HPAGE_PMD_NR));
> > +                       index++;
> > +               }
> > +       }
> > +
> > +       if (result == SCAN_SUCCEED && !copy_failed) {
>
> I think you could set "result" to SCAN_COPY_MC (the same as for the
> anonymous case); then you could drop !copy_failed and use "result"
> alone afterwards.

Yes, the copy_failed variable will be eliminated in the next version.

> > +               /*
> > +                * Copying old pages to huge one has succeeded, now we
> > +                * need to free the old pages.
> > +                */
> > +               list_for_each_entry_safe(page, tmp, &pagelist, lru) {
> >                         list_del(&page->lru);
> >                         page->mapping = NULL;
> >                         page_ref_unfreeze(page, 1);
> > @@ -1991,12 +1995,20 @@ static void collapse_file(struct mm_struct *mm,
> >                         ClearPageUnevictable(page);
> >                         unlock_page(page);
> >                         put_page(page);
> > -                       index++;
> >                 }
> > -               while (index < end) {
> > -                       clear_highpage(new_page + (index % HPAGE_PMD_NR));
> > -                       index++;
> > +
> > +               xas_lock_irq(&xas);
> > +               if (is_shmem)
> > +                       __mod_lruvec_page_state(new_page, NR_SHMEM_THPS, nr);
> > +               if (nr_none) {
> > +                       __mod_lruvec_page_state(new_page, NR_FILE_PAGES, nr_none);
> > +                       if (is_shmem)
> > +                               __mod_lruvec_page_state(new_page, NR_SHMEM, nr_none);
> >                 }
> > +               /* Join all the small entries into a single multi-index entry. */
> > +               xas_set_order(&xas, start, HPAGE_PMD_ORDER);
> > +               xas_store(&xas, new_page);
> > +               xas_unlock_irq(&xas);
> >
> >                 SetPageUptodate(new_page);
> >                 page_ref_add(new_page, HPAGE_PMD_NR - 1);
> > @@ -2012,9 +2024,11 @@ static void collapse_file(struct mm_struct *mm,
> >
> >                 khugepaged_pages_collapsed++;
> >         } else {
> > -               struct page *page;
> > -
> > -               /* Something went wrong: roll back page cache changes */
> > +               /*
> > +                * Something went wrong:
> > +                * either result != SCAN_SUCCEED or copy_failed,
> > +                * roll back page cache changes
> > +                */
> >                 xas_lock_irq(&xas);
> >                 mapping->nrpages -= nr_none;
> >
> > @@ -2047,6 +2061,15 @@ static void collapse_file(struct mm_struct *mm,
> >                 xas_lock_irq(&xas);
> >         }
> >         VM_BUG_ON(nr_none);
> > +       /*
> > +        * Undo the updates of thp_nr_pages(new_page) for non-SHMEM file,
> > +        * which is not updated yet for SHMEM file.
> > +        * These undos are not needed if result is not SCAN_SUCCEED.
> > +        */
> > +       if (!is_shmem && result == SCAN_SUCCEED) {
>
> Handling the error case when "result == SCAN_SUCCEED" looks awkward.
> With the above fixed (setting result to SCAN_COPY_MC) we could avoid
> the awkwardness.

It will look less awkward in the next version :)

> > +               __mod_lruvec_page_state(new_page, NR_FILE_THPS, -nr);
>
> I'm wondering whether we may defer the NR_FILE_THPS update just like
> SHMEM, because it does not have to be updated in advance, so that we
> have the user-visible counter updates in a single place. Just
> filemap_nr_thps needs to be updated in advance, since it is used to
> sync with the file open path to truncate huge pages.

In version 2, I will postpone __mod_lruvec_page_state() until copying
succeeds, and undo filemap_nr_thps_inc() if (!is_shmem && result ==
SCAN_COPY_MC).

> > +               filemap_nr_thps_dec(mapping);
> > +       }
> >         xas_unlock_irq(&xas);
> >
> >         new_page->mapping = NULL;
> > --
> > 2.35.1.894.gb6a874cedc-goog
> >

Thanks for the valuable feedback. I will send out the v2 RFC separately; please leave your comments there. Thanks!