Subject: Re: [PATCH] x86/mpx: fix recursive munmap() corruption
From: Christophe Leroy <christophe.leroy@csgroup.eu>
To: Laurent Dufour, Michael Ellerman
Date: Fri, 23 Oct 2020 14:28:52 +0200
Message-ID: <12313ba8-75b5-d44d-dbc0-0bf2c87dfb59@csgroup.eu>
In-Reply-To: <9c2b2826-4083-fc9c-5a4d-c101858dd560@linux.vnet.ibm.com>
Cc: mhocko@suse.com, rguenther@suse.de, linux-mm@kvack.org, Dave Hansen,
 x86@kernel.org, stable@vger.kernel.org, LKML, Dave Hansen,
 Thomas Gleixner, luto@amacapital.net, linuxppc-dev@lists.ozlabs.org,
 Andrew Morton, vbabka@suse.cz

Hi Laurent,

On 07/05/2019 at 18:35, Laurent Dufour wrote:
> On 01/05/2019 at 12:32, Michael Ellerman wrote:
>> Laurent Dufour writes:
>>> On 23/04/2019 at 18:04, Dave Hansen wrote:
>>>> On 4/23/19 4:16 AM, Laurent Dufour wrote:
>> ...
>>>>> There are 2 assumptions here:
>>>>>    1. 'start' and 'end' are page aligned (this is guaranteed by
>>>>> __do_munmap()).
>>>>>    2. the VDSO is 1 page (this is guaranteed by the union
>>>>> vdso_data_store on powerpc).
>>>>
>>>> Are you sure about #2?  The 'vdso64_pages' variable seems rather
>>>> unnecessary if the VDSO is only 1 page. ;)
>>>
>>> Hum, not so sure now ;)
>>> I got confused, only the header is one page.
>>> The test works as a best effort, and doesn't cover the case where
>>> only a few pages inside the VDSO are unmapped (start >
>>> mm->context.vdso_base). This is not what CRIU is doing, so this was
>>> enough for CRIU support.
>>>
>>> Michael, do you think there is a need to handle all the possibilities
>>> here, since the only user is CRIU and unmapping the VDSO is not such
>>> a good idea for other processes?
>>
>> Couldn't we implement the semantic that if any part of the VDSO is
>> unmapped then vdso_base is set to zero? That should be fairly easy, eg:
>>
>>     if (start < vdso_end && end >= mm->context.vdso_base)
>>         mm->context.vdso_base = 0;
>>
>> We might need to add vdso_end to the mm->context, but that should be OK.
>>
>> That seems like it would work for CRIU and make sense in general?
>
> Sorry for the late answer, yes this would make more sense.
>
> Here is a patch doing that.

In your patch, the test seems overkill:

+	if ((start <= vdso_base && vdso_end <= end) ||  /* 1   */
+	    (vdso_base <= start && start < vdso_end) || /* 3,4 */
+	    (vdso_base < end && end <= vdso_end))       /* 2,3 */
+		mm->context.vdso_base = mm->context.vdso_end = 0;

What about:

	if (start < vdso_end && vdso_base < end)
		mm->context.vdso_base = mm->context.vdso_end = 0;

This should cover all cases, or am I missing something?

And do we really need to store vdso_end in the context? I think it
should be possible to recalculate it: the size of the VDSO should be
(&vdso32_end - &vdso32_start) + PAGE_SIZE for the 32-bit VDSO, and
(&vdso64_end - &vdso64_start) + PAGE_SIZE for the 64-bit VDSO.

Christophe
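---

To sanity-check the simplified condition, here is a quick userspace
sketch (not kernel code; the helper names overlaps_patch() and
overlaps_simple() are made up for illustration) that exhaustively
compares the three-clause test from the patch against the single
interval-overlap test on all small non-empty half-open ranges:

#include <assert.h>
#include <stdio.h>

/* The three-clause test from the proposed patch. */
static int overlaps_patch(unsigned long start, unsigned long end,
			  unsigned long vdso_base, unsigned long vdso_end)
{
	return (start <= vdso_base && vdso_end <= end) || /* containment  */
	       (vdso_base <= start && start < vdso_end) || /* start inside */
	       (vdso_base < end && end <= vdso_end);       /* end inside   */
}

/* The single interval-overlap test suggested above. */
static int overlaps_simple(unsigned long start, unsigned long end,
			   unsigned long vdso_base, unsigned long vdso_end)
{
	return start < vdso_end && vdso_base < end;
}

int main(void)
{
	unsigned long s, e, b, v;

	/* Both [start, end) and [vdso_base, vdso_end) are non-empty,
	 * page-aligned ranges; page granularity does not matter for the
	 * comparison, so small integers stand in for page numbers. */
	for (s = 0; s < 8; s++)
		for (e = s + 1; e <= 8; e++)
			for (b = 0; b < 8; b++)
				for (v = b + 1; v <= 8; v++)
					assert(overlaps_patch(s, e, b, v) ==
					       overlaps_simple(s, e, b, v));

	printf("the two tests agree on all cases\n");
	return 0;
}

For non-empty half-open intervals, "the two ranges overlap" is exactly
the union of the three cases (full containment, start inside, end
inside), which is why the single test is enough.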
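And for the recalculation of vdso_end, a rough, untested sketch of what
it could look like on the powerpc side (hypothetical: the helper name
vdso_end_addr() is made up, and it assumes the vdso32_start/vdso32_end
and vdso64_start/vdso64_end linker markers plus the one-page data header
described above, with the check done on current's mm):

/* Untested sketch, not a patch: recompute the end of the VDSO mapping
 * from the linker markers instead of storing vdso_end in the context.
 * Assumes the layout discussed above: a one-page header followed by
 * the VDSO text, starting at mm->context.vdso_base. */
extern char vdso32_start, vdso32_end;
extern char vdso64_start, vdso64_end;

static unsigned long vdso_end_addr(struct mm_struct *mm)
{
	unsigned long size;

	if (is_32bit_task())
		size = (&vdso32_end - &vdso32_start) + PAGE_SIZE;
	else
		size = (&vdso64_end - &vdso64_start) + PAGE_SIZE;

	return mm->context.vdso_base + size;
}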