Message-ID: <6d7f7775-5703-c27a-e57b-03aafb4de712@redhat.com>
Date: Tue, 29 Nov 2022 12:39:06 +0100
Subject: Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
From: David Hildenbrand
Organization: Red Hat
To: "Kirill A. Shutemov", Michael Roth
Cc: Chao Peng, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org,
 linux-doc@vger.kernel.org, qemu-devel@nongnu.org, Paolo Bonzini, Jonathan Corbet,
 Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
 Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org, "H . Peter Anvin",
 Hugh Dickins, Jeff Layton, "J . Bruce Fields", Andrew Morton, Shuah Khan,
 Mike Rapoport, Steven Price, "Maciej S . Szmigiero", Vlastimil Babka,
 Vishal Annapurve, Yu Zhang, "Kirill A . Shutemov", luto@kernel.org,
 jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com,
 aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret,
 tabba@google.com, mhocko@suse.com, Muchun Song, wei.w.wang@intel.com
References: <20221025151344.3784230-1-chao.p.peng@linux.intel.com>
 <20221025151344.3784230-2-chao.p.peng@linux.intel.com>
 <20221129000632.sz6pobh6p7teouiu@amd.com>
 <20221129112139.usp6dqhbih47qpjl@box.shutemov.name>
In-Reply-To: <20221129112139.usp6dqhbih47qpjl@box.shutemov.name>

On 29.11.22 12:21, Kirill A. Shutemov wrote:
> On Mon, Nov 28, 2022 at 06:06:32PM -0600, Michael Roth wrote:
>> On Tue, Oct 25, 2022 at 11:13:37PM +0800, Chao Peng wrote:
>>> From: "Kirill A. Shutemov"
>>>
>>
>>
>>> +static struct file *restrictedmem_file_create(struct file *memfd)
>>> +{
>>> +	struct restrictedmem_data *data;
>>> +	struct address_space *mapping;
>>> +	struct inode *inode;
>>> +	struct file *file;
>>> +
>>> +	data = kzalloc(sizeof(*data), GFP_KERNEL);
>>> +	if (!data)
>>> +		return ERR_PTR(-ENOMEM);
>>> +
>>> +	data->memfd = memfd;
>>> +	mutex_init(&data->lock);
>>> +	INIT_LIST_HEAD(&data->notifiers);
>>> +
>>> +	inode = alloc_anon_inode(restrictedmem_mnt->mnt_sb);
>>> +	if (IS_ERR(inode)) {
>>> +		kfree(data);
>>> +		return ERR_CAST(inode);
>>> +	}
>>> +
>>> +	inode->i_mode |= S_IFREG;
>>> +	inode->i_op = &restrictedmem_iops;
>>> +	inode->i_mapping->private_data = data;
>>> +
>>> +	file = alloc_file_pseudo(inode, restrictedmem_mnt,
>>> +				 "restrictedmem", O_RDWR,
>>> +				 &restrictedmem_fops);
>>> +	if (IS_ERR(file)) {
>>> +		iput(inode);
>>> +		kfree(data);
>>> +		return ERR_CAST(file);
>>> +	}
>>> +
>>> +	file->f_flags |= O_LARGEFILE;
>>> +
>>> +	mapping = memfd->f_mapping;
>>> +	mapping_set_unevictable(mapping);
>>> +	mapping_set_gfp_mask(mapping,
>>> +			     mapping_gfp_mask(mapping) & ~__GFP_MOVABLE);
>>
>> Is this supposed to prevent migration of pages being used for
>> restrictedmem/shmem backend?
>
> Yes, my bad. I expected it to prevent migration, but it is not true.
Maybe add a comment that these pages are not movable and we don't want to
place them into movable pageblocks (including CMA and ZONE_MOVABLE). That's
the primary purpose of the GFP mask here.

-- 
Thanks,

David / dhildenb
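
[A minimal sketch of what such a comment could look like in that hunk,
assuming the surrounding code stays as posted above; the exact wording is
only an illustration, not part of the patch:

	mapping = memfd->f_mapping;
	mapping_set_unevictable(mapping);
	/*
	 * Restricted-memory pages are currently unmovable, so keep them out
	 * of movable pageblocks (e.g., CMA and ZONE_MOVABLE). Note that
	 * clearing __GFP_MOVABLE only steers allocation placement; it does
	 * not by itself prevent the pages from being migrated.
	 */
	mapping_set_gfp_mask(mapping,
			     mapping_gfp_mask(mapping) & ~__GFP_MOVABLE);

As Kirill notes above, the mask change affects only where the pages are
allocated, not whether they can later be migrated.]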