From: Vladimir Davydov <vdavydov@parallels.com>
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    "Paul E. McKenney", "Kirill A. Shutemov", Rik van Riel, Hugh Dickins
Subject: [PATCH v2] rmap: fix theoretical race between do_wp_page and shrink_active_list
Date: Tue, 12 May 2015 13:18:39 +0300
Message-ID: <1431425919-28057-1-git-send-email-vdavydov@parallels.com>

As noted by Paul, the compiler is free to store a temporary result in a
variable on the stack, heap, or in a global unless it is explicitly
marked as volatile; see

  http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n4455.html#sample-optimizations

This can result in a race between do_wp_page() and shrink_active_list()
as follows.

In do_wp_page() we can call page_move_anon_rmap(), which sets
page->mapping as follows:

  anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON;
  page->mapping = (struct address_space *) anon_vma;

The page in question may be on an LRU list, because nowhere in
do_wp_page() do we remove it from the list, nor do we take any
LRU-related locks.  Although the page is locked, shrink_active_list()
can still call page_referenced() on it concurrently, because the latter
does not require an anonymous page to be locked:

  CPU0                            CPU1
  ----                            ----
  do_wp_page                      shrink_active_list
   lock_page                       page_referenced
                                    PageAnon->yes, so skip trylock_page
   page_move_anon_rmap
    page->mapping = anon_vma
                                   rmap_walk
                                    PageAnon->no
                                    rmap_walk_file
                                     BUG
    page->mapping += PAGE_MAPPING_ANON

This patch fixes this race by explicitly forbidding the compiler to
split the page->mapping store in page_move_anon_rmap() with the aid of
WRITE_ONCE().

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
Cc: "Paul E. McKenney"
Cc: "Kirill A. Shutemov"
Cc: Rik van Riel
Cc: Hugh Dickins
---
Changes in v2:
 - do not add READ_ONCE to PageAnon and WRITE_ONCE to
   __page_set_anon_rmap and __hugepage_set_anon_rmap (Kirill)

 mm/rmap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 24dd3f9fee27..8b18fd4227d1 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -950,7 +950,7 @@ void page_move_anon_rmap(struct page *page,
 	VM_BUG_ON_PAGE(page->index != linear_page_index(vma, address), page);
 
 	anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON;
-	page->mapping = (struct address_space *) anon_vma;
+	WRITE_ONCE(page->mapping, (struct address_space *) anon_vma);
 }
 
 /**
-- 
1.7.10.4
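
For readers less familiar with store tearing, the sketch below illustrates
in plain userspace C the kind of guarantee the patch relies on from
WRITE_ONCE().  Every name here (WRITE_ONCE_SKETCH, struct page_sketch,
ANON_BIT, move_anon_rmap_sketch) is a hypothetical stand-in, not kernel
code; the real WRITE_ONCE() is defined in the kernel's compiler headers
and dispatches on the size of the operand, but for a pointer-sized field
it likewise reduces to a single store through a volatile-qualified lvalue,
which the compiler may neither tear into smaller stores nor issue twice.

  #include <stdint.h>

  /*
   * Hypothetical stand-in for WRITE_ONCE(): force a single, untorn
   * store by going through a volatile-qualified lvalue.
   */
  #define WRITE_ONCE_SKETCH(x, val) \
  	(*(volatile __typeof__(x) *)&(x) = (val))

  struct page_sketch {		/* stand-in for struct page            */
  	void *mapping;		/* the field rmap_walk() inspects      */
  };

  #define ANON_BIT	1	/* plays the role of PAGE_MAPPING_ANON  */

  void move_anon_rmap_sketch(struct page_sketch *page, void *anon_vma)
  {
  	void *val = (void *)((uintptr_t)anon_vma + ANON_BIT);

  	/*
  	 * A plain "page->mapping = val;" would leave the compiler free
  	 * to publish intermediate values, e.g. store anon_vma first and
  	 * add the bit afterwards.  The volatile store below must be
  	 * emitted exactly once, with the final value.
  	 */
  	WRITE_ONCE_SKETCH(page->mapping, val);
  }

Whether a compiler would actually tear this particular store is, as the
subject line says, theoretical; the point of the patch is to remove the
latitude the language standard gives it.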