From mboxrd@z Thu Jan  1 00:00:00 1970
From: Sergey Senozhatsky
Subject: Re: [PATCH v2 01/13] mm: support madvise(MADV_FREE)
Date: Wed, 4 Nov 2015 11:29:51 +0900
Message-ID: <20151104022951.GA3744@swordfish>
References: <1446600367-7976-1-git-send-email-minchan@kernel.org>
 <1446600367-7976-2-git-send-email-minchan@kernel.org>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Return-path:
Content-Disposition: inline
In-Reply-To: <1446600367-7976-2-git-send-email-minchan@kernel.org>
Sender: owner-linux-mm@kvack.org
To: Minchan Kim
Cc: Andrew Morton, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 Michael Kerrisk, linux-api@vger.kernel.org, Hugh Dickins,
 Johannes Weiner, Rik van Riel, Mel Gorman, KOSAKI Motohiro,
 Jason Evans, Daniel Micay, "Kirill A. Shutemov", Shaohua Li,
 Michal Hocko, yalin.wang2010@gmail.com, Sergey Senozhatsky
List-Id: linux-api@vger.kernel.org

On (11/04/15 10:25), Minchan Kim wrote:
[..]
> +static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> +				unsigned long end, struct mm_walk *walk)
> +
> +{
> +	struct mmu_gather *tlb = walk->private;
> +	struct mm_struct *mm = tlb->mm;
> +	struct vm_area_struct *vma = walk->vma;
> +	spinlock_t *ptl;
> +	pte_t *pte, ptent;
> +	struct page *page;

I'll just ask (probably I'm missing something):
+ pmd_trans_huge_lock() ?
(a rough, untested sketch of the pattern in question is at the end of
this mail, after the quoted hunk)

> +	split_huge_page_pmd(vma, addr, pmd);
> +	if (pmd_trans_unstable(pmd))
> +		return 0;
> +
> +	pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
> +	arch_enter_lazy_mmu_mode();
> +	for (; addr != end; pte++, addr += PAGE_SIZE) {
> +		ptent = *pte;
> +
> +		if (!pte_present(ptent))
> +			continue;
> +
> +		page = vm_normal_page(vma, addr, ptent);
> +		if (!page)
> +			continue;
> +

	-ss
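
For reference, the kind of thing the question above alludes to: handling a
transparent huge pmd under the pmd lock instead of splitting it up front,
the way other pte-range walkers (smaps, for instance) do. This is only a
sketch against the roughly v4.3-era API; the return-value convention of
pmd_trans_huge_lock() (1 == huge pmd found, *ptl taken) is assumed rather
than verified against this series, and the variables are the ones declared
in the quoted hunk.

	/*
	 * Sketch only (not tested): take the pmd lock and deal with a
	 * trans huge pmd as a whole rather than calling
	 * split_huge_page_pmd() first.  Return-value convention assumed
	 * from the v4.3-era pmd_trans_huge_lock().
	 */
	if (pmd_trans_huge_lock(pmd, vma, &ptl) == 1) {
		/* ... mark the whole huge page lazily freeable here ... */
		spin_unlock(ptl);
		return 0;
	}

	if (pmd_trans_unstable(pmd))
		return 0;

	/* otherwise fall through to the existing per-pte loop */
	pte = pte_offset_map_lock(mm, pmd, addr, &ptl);

Whether avoiding the split is actually what MADV_FREE wants for THP ranges
is a separate question; the sketch only illustrates the locking pattern.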