From: Benjamin Herrenschmidt
To: Ingo Molnar
Cc: linux-arch@vger.kernel.org, linux-s390@vger.kernel.org, Laurent Dufour, user-mode-linux-devel@lists.sourceforge.net, Arnd Bergmann, Jeff Dike, "H. Peter Anvin", x86@kernel.org, linux-kernel@vger.kernel.org, criu@openvz.org, linux-mm@kvack.org, Ingo Molnar, Paul Mackerras, cov@codeaurora.org, user-mode-linux-user@lists.sourceforge.net, Richard Weinberger, Thomas Gleixner, Guan Xuetao, linuxppc-dev@lists.ozlabs.org
Subject: Re: [PATCH v3 2/2] powerpc/mm: Tracking vDSO remap
Date: Thu, 26 Mar 2015 08:09:57 +1100
Message-ID: <1427317797.6468.86.camel@kernel.crashing.org>
In-Reply-To: <20150325183316.GA9090@gmail.com>
References: <20150325121118.GA2542@gmail.com> <20150325183316.GA9090@gmail.com>

On Wed, 2015-03-25 at 19:33 +0100, Ingo Molnar wrote:
> * Laurent Dufour wrote:
>
> > +static inline void arch_unmap(struct mm_struct *mm,
> > +			      struct vm_area_struct *vma,
> > +			      unsigned long start, unsigned long end)
> > +{
> > +	if (start <= mm->context.vdso_base && mm->context.vdso_base < end)
> > +		mm->context.vdso_base = 0;
> > +}
>
> So AFAICS PowerPC can have multi-page vDSOs, right?
>
> So what happens if I munmap() the middle or end of the vDSO? The above
> condition only seems to cover unmaps that affect the first page. I
> think 'affects any page' ought to be the right condition? (But I know
> nothing about PowerPC so I might be wrong.)

You are right, we have at least two pages.
> > +#define __HAVE_ARCH_REMAP
> > +static inline void arch_remap(struct mm_struct *mm,
> > +			      unsigned long old_start, unsigned long old_end,
> > +			      unsigned long new_start, unsigned long new_end)
> > +{
> > +	/*
> > +	 * mremap() doesn't allow moving multiple vmas so we can limit the
> > +	 * check to old_start == vdso_base.
> > +	 */
> > +	if (old_start == mm->context.vdso_base)
> > +		mm->context.vdso_base = new_start;
> > +}
>
> mremap() doesn't allow moving multiple vmas, but it allows the
> movement of multi-page vmas and it also allows partial mremap()s,
> where it will split up a vma.
>
> In particular, what happens if an mremap() is done with
> old_start == vdso_base, but a shorter end than the end of the vDSO?
> (i.e. a partial mremap() with fewer pages than the vDSO size)

Is there a way to forbid splitting? Does x86 deal with that case at
all, or does it not have to for some other reason?

Cheers,
Ben.

> Thanks,
>
> 	Ingo