From: Uladzislau Rezki <urezki@gmail.com>
Date: Mon, 22 Jan 2024 18:44:34 +0100
To: Lorenzo Stoakes
Cc: Uladzislau Rezki, linux-mm@kvack.org, Andrew Morton, LKML, Baoquan He,
	Christoph Hellwig, Matthew Wilcox, "Liam R. Howlett", Dave Chinner,
	"Paul E. McKenney", Joel Fernandes, Oleksiy Avramchenko
Subject: Re: [PATCH v3 04/11] mm: vmalloc: Remove global vmap_area_root rb-tree
References: <20240102184633.748113-1-urezki@gmail.com>
	<20240102184633.748113-5-urezki@gmail.com>
	<63104f8e-2fe3-46b2-842c-f11f8bb4b336@lucifer.local>
	<2c318a40-9e0f-4d24-b5cc-e712f7b2c334@lucifer.local>
In-Reply-To: <2c318a40-9e0f-4d24-b5cc-e712f7b2c334@lucifer.local>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii

On Sat, Jan 20, 2024 at 12:55:10PM +0000, Lorenzo Stoakes wrote:
> On Thu, Jan 18, 2024 at 02:15:31PM +0100, Uladzislau Rezki wrote:
> > [snip]
> >
> > > > +	struct rb_root root;
> > > > +	struct list_head head;
> > > > +	spinlock_t lock;
> > > > +};
> > > > +
> > > > +static struct vmap_node {
> > > > +	/* Bookkeeping data of this node. */
> > > > +	struct rb_list busy;
> > > > +} single;
> > >
> > > This may be a thing about encapsulation/naming or similar, but I'm a little
> > > confused as to why the rb_list type is maintained as a field rather than
> > > its fields embedded?
> > >
> > The "struct vmap_node" will be extended by the following patches in the
> > series.
> >
>
> Yeah sorry I missed this, only realising after I sent...!
>
> > > > +
> > > > +static struct vmap_node *vmap_nodes = &single;
> > > > +static __read_mostly unsigned int nr_vmap_nodes = 1;
> > > > +static __read_mostly unsigned int vmap_zone_size = 1;
> > >
> > > It might be worth adding a comment here explaining that we're binding to a
> > > single node for now to maintain existing behaviour (and a brief description
> > > of what these values mean - for instance what unit vmap_zone_size is
> > > expressed in?)
> > >
> > Right. Agree on it :)
> >
>
> Indeed :)
>
> [snip]
>
> > > >  /* Look up the first VA which satisfies addr < va_end, NULL if none. */
> > > > -static struct vmap_area *find_vmap_area_exceed_addr(unsigned long addr)
> > > > +static struct vmap_area *
> > > > +find_vmap_area_exceed_addr(unsigned long addr, struct rb_root *root)
> > > >  {
> > > >  	struct vmap_area *va = NULL;
> > > > -	struct rb_node *n = vmap_area_root.rb_node;
> > > > +	struct rb_node *n = root->rb_node;
> > > >
> > > >  	addr = (unsigned long)kasan_reset_tag((void *)addr);
> > > >
> > > > @@ -1552,12 +1583,14 @@ __alloc_vmap_area(struct rb_root *root, struct list_head *head,
> > > >   */
> > > >  static void free_vmap_area(struct vmap_area *va)
> > > >  {
> > > > +	struct vmap_node *vn = addr_to_node(va->va_start);
> > > > +
> > >
> > > I'm being nitty here, and while I know it's a vmalloc convention to use
> > > 'va' and 'vm', perhaps we can break away from the super short variable name
> > > convention and use 'vnode' or something for these values?
> > >
> > > I feel people might get confused between 'vm' and 'vn' for instance.
> > >
> > vnode, varea?
>
> I think 'vm' and 'va' are fine, just scanning through easy to mistake 'vn'
> and 'vm'. Obviously a little nitpicky! You could replace all but a bit
> churny, so I think vn -> vnode works best imo.
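
To make the above a bit more concrete: the address-to-node mapping that
addr_to_node()/addr_to_node_id() rely on is conceptually just a division by
the zone size plus a modulo over the node array. A minimal sketch follows
(illustration only, not the exact helpers from the series; it assumes
vmap_zone_size is expressed in bytes, which is precisely the unit question
raised above):

/*
 * Illustration only: the real addr_to_node_id()/addr_to_node() helpers
 * may differ. Assumes vmap_zone_size is in bytes and zones are assigned
 * to nodes round-robin by address.
 */
static inline unsigned int addr_to_node_id(unsigned long addr)
{
	return (addr / vmap_zone_size) % nr_vmap_nodes;
}

static inline struct vmap_node *addr_to_node(unsigned long addr)
{
	return &vmap_nodes[addr_to_node_id(addr)];
}

With nr_vmap_nodes == 1 everything resolves to the "single" node, which
preserves the current global behaviour.
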
>
> [snip]
>
> > > >  struct vmap_area *find_vmap_area(unsigned long addr)
> > > >  {
> > > > +	struct vmap_node *vn;
> > > >  	struct vmap_area *va;
> > > > +	int i, j;
> > > >
> > > > -	spin_lock(&vmap_area_lock);
> > > > -	va = __find_vmap_area(addr, &vmap_area_root);
> > > > -	spin_unlock(&vmap_area_lock);
> > > > +	/*
> > > > +	 * An addr_to_node_id(addr) converts an address to a node index
> > > > +	 * where a VA is located. If VA spans several zones and passed
> > > > +	 * addr is not the same as va->va_start, what is not common, we
> > > > +	 * may need to scan an extra nodes. See an example:
> > >
> > > For my understanding when you say 'scan an extra nodes' do you mean scan
> > > just 1 extra node, or multiple? If the former I'd replace this with 'may
> > > need to scan an extra node' if the latter then 'may need to scan extra
> > > nodes'.
> > >
> > > It's a nitty language thing, but also potentially changes the meaning of
> > > this!
> > >
> > Typo, I should replace it to: scan extra nodes.
>
> Thanks.
>
> > > > +	 *
> > > > +	 *        <--va-->
> > > > +	 * -|-----|-----|-----|-----|-
> > > > +	 *     1     2     0     1
> > > > +	 *
> > > > +	 * VA resides in node 1 whereas it spans 1 and 2. If passed
> > > > +	 * addr is within a second node we should do extra work. We
> > > > +	 * should mention that it is rare and is a corner case from
> > > > +	 * the other hand it has to be covered.
> > >
> > > A very minor language style nit, but you've already said this is not
> > > common, I don't think you need this 'We should mention...' bit. It's not a
> > > big deal however!
> > >
> > No problem. We can remove it!
>
> Thanks.
>
> > > > +	 */
> > > > +	i = j = addr_to_node_id(addr);
> > > > +	do {
> > > > +		vn = &vmap_nodes[i];
> > > >
> > > > -	return va;
> > > > +		spin_lock(&vn->busy.lock);
> > > > +		va = __find_vmap_area(addr, &vn->busy.root);
> > > > +		spin_unlock(&vn->busy.lock);
> > > > +
> > > > +		if (va)
> > > > +			return va;
> > > > +	} while ((i = (i + 1) % nr_vmap_nodes) != j);
> > >
> > > If your comment above suggests that only 1 extra node might need to be
> > > scanned, should we stop after one iteration?
> > >
> > Not really. Though we can improve it further to scan backward.
>
> I think it'd be good to clarify in the comment above that the VA could span
> more than 1 node then, as the diagram seems to imply only 1 (I think just
> simply because of the example you were showing).
>
> [snip]
>
> > > >  static struct vmap_area *find_unlink_vmap_area(unsigned long addr)
> > > >  {
> > > > +	struct vmap_node *vn;
> > > >  	struct vmap_area *va;
> > > > +	int i, j;
> > > >
> > > > -	spin_lock(&vmap_area_lock);
> > > > -	va = __find_vmap_area(addr, &vmap_area_root);
> > > > -	if (va)
> > > > -		unlink_va(va, &vmap_area_root);
> > > > -	spin_unlock(&vmap_area_lock);
> > > > +	i = j = addr_to_node_id(addr);
> > > > +	do {
> > > > +		vn = &vmap_nodes[i];
> > > >
> > > > -	return va;
> > > > +		spin_lock(&vn->busy.lock);
> > > > +		va = __find_vmap_area(addr, &vn->busy.root);
> > > > +		if (va)
> > > > +			unlink_va(va, &vn->busy.root);
> > > > +		spin_unlock(&vn->busy.lock);
> > > > +
> > > > +		if (va)
> > > > +			return va;
> > > > +	} while ((i = (i + 1) % nr_vmap_nodes) != j);
> > >
> > > Maybe worth adding a comment saying to refer to the comment in
> > > find_vmap_area() to see why this loop is necessary.
> > >
> > OK. We can do it to make it better for reading.
>
> Thanks!
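
As for the "scan backward" idea above, a rough sketch of what that could
look like is below. It only reuses the names from the quoted hunk
(vmap_nodes, nr_vmap_nodes, addr_to_node_id(), __find_vmap_area()) and is
not code from this series:

/*
 * Sketch only: walk the nodes backward from the "home" node instead of
 * forward. A VA that spans several zones starts at a lower address, so
 * stepping backward (with wrap-around) usually reaches the node that
 * actually holds it in fewer iterations than a full forward scan.
 */
struct vmap_area *find_vmap_area(unsigned long addr)
{
	struct vmap_node *vn;
	struct vmap_area *va;
	int i, j;

	i = j = addr_to_node_id(addr);
	do {
		vn = &vmap_nodes[i];

		spin_lock(&vn->busy.lock);
		va = __find_vmap_area(addr, &vn->busy.root);
		spin_unlock(&vn->busy.lock);

		if (va)
			return va;

		/* Step to the previous node, wrapping around at zero. */
		i = (i > 0) ? i - 1 : nr_vmap_nodes - 1;
	} while (i != j);

	return NULL;
}
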
>
> [snip]
>
> > > > @@ -3728,8 +3804,11 @@ long vread_iter(struct iov_iter *iter, const char *addr, size_t count)
> > >
> > > Unrelated to your change but makes me feel a little unwell to see 'const
> > > char *addr'! Can we change this at some point? Or maybe I can :)
> > >
> > You are welcome :)
>
> Haha ;) yes I think I might tbh, I have noted it down.
>
> > > >
> > > >  	remains = count;
> > > >
> > > > -	spin_lock(&vmap_area_lock);
> > > > -	va = find_vmap_area_exceed_addr((unsigned long)addr);
> > > > +	/* Hooked to node_0 so far. */
> > > > +	vn = addr_to_node(0);
> > >
> > > Why can't we use addr for this call? We already enforce the node-0 only
> > > thing by setting nr_vmap_nodes to 1 right? And won't this be potentially
> > > subtly wrong when we later increase this?
> > >
> > I used to have 0 here. But please note, it is changed by the next patch in
> > this series.
>
> Yeah sorry, again hadn't noticed this.
>
> [snip]
>
> > > > +		spin_lock(&vn->busy.lock);
> > > > +		insert_vmap_area(vas[area], &vn->busy.root, &vn->busy.head);
> > > >  		setup_vmalloc_vm_locked(vms[area], vas[area], VM_ALLOC,
> > > >  				 pcpu_get_vm_areas);
> > > > +		spin_unlock(&vn->busy.lock);
> > >
> > > Hmm, before we were locking/unlocking once before the loop, now we're
> > > locking on each iteration, this seems inefficient.
> > >
> > > Seems like we need logic like:
> > >
> > > /* ... something to check nr_vms > 0 ... */
> > > struct vmap_node *last_node = NULL;
> > >
> > > for (...) {
> > > 	struct vmap_node *vnode = addr_to_node(vas[area]->va_start);
> > >
> > > 	if (vnode != last_node) {
> > > 		spin_unlock(last_node->busy.lock);
> > > 		spin_lock(vnode->busy.lock);
> > > 		last_node = vnode;
> > > 	}
> > >
> > > 	...
> > > }
> > >
> > > if (last_node)
> > > 	spin_unlock(last_node->busy.lock);
> > >
> > > To minimise the lock twiddling. What do you think?
> > >
> > This per-cpu-allocator prefetches several VA units per-cpu. I do not
> > find it as critical because it is not a hot path for the per-cpu allocator.
> > When its buffers are exhausted it does an extra prefetch. So it is not
> > frequent.
>
> OK, sure I mean this is simpler and more readable so if not a huge perf
> concern then not a big deal.
>
> > > >
> > > >  	}
> > > > -	spin_unlock(&vmap_area_lock);
> > > >
> > > >  	/*
> > > >  	 * Mark allocated areas as accessible. Do it now as a best-effort
> > > > @@ -4253,55 +4333,57 @@ bool vmalloc_dump_obj(void *object)
> > > >  {
> > > >  	void *objp = (void *)PAGE_ALIGN((unsigned long)object);
> > > >  	const void *caller;
> > > > -	struct vm_struct *vm;
> > > >  	struct vmap_area *va;
> > > > +	struct vmap_node *vn;
> > > >  	unsigned long addr;
> > > >  	unsigned int nr_pages;
> > > > +	bool success = false;
> > > >
> > > > -	if (!spin_trylock(&vmap_area_lock))
> > > > -		return false;
> > >
> > > Nitpick on style for this, I really don't know why you are removing this
> > > early exit? It's far neater to have a guard clause than to nest a whole
> > > bunch of code below.
> > >
> > Hm... I can return back as it used to be. I do not have a strong opinion here.
>
> Yeah that'd be ideal just for readability.
>
> [snip the rest as broadly fairly trivial comment stuff on which we agree]
>
> > Thank you for the review! I can fix the comments as separate patches if
> > no objections.
>
> Yes, overall it's style/comment improvement stuff nothing major, feel free
> to send as follow-up patches.
>
> I don't want to hold anything up here so for the rest, feel free to add:
>
> Reviewed-by: Lorenzo Stoakes
>
Appreciate!
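
On the vmalloc_dump_obj() point, restoring the guard-clause style on top of
the per-node lock could look roughly like the sketch below. This is only an
illustration of the shape being discussed (it ignores the cross-node VA
corner case for brevity, and the final follow-up may well differ):

/*
 * Sketch only: keep the early exit, but trylock the per-node busy lock
 * instead of the removed global vmap_area_lock.
 */
bool vmalloc_dump_obj(void *object)
{
	void *objp = (void *)PAGE_ALIGN((unsigned long)object);
	unsigned long addr = (unsigned long)objp;
	struct vmap_node *vn = addr_to_node(addr);
	struct vm_struct *vm;
	struct vmap_area *va;
	const void *caller;
	unsigned int nr_pages;

	if (!spin_trylock(&vn->busy.lock))
		return false;

	va = __find_vmap_area(addr, &vn->busy.root);
	if (!va || !va->vm) {
		spin_unlock(&vn->busy.lock);
		return false;
	}

	vm = va->vm;
	addr = (unsigned long)vm->addr;
	caller = vm->caller;
	nr_pages = vm->nr_pages;
	spin_unlock(&vn->busy.lock);

	pr_cont(" %u-page vmalloc region starting at %#lx allocated at %pS\n",
		nr_pages, addr, caller);

	return true;
}
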
I will go through it again and send out a follow-up patch that adds the more
detailed explanations requested in this review.

Again, thank you!

--
Uladzislau Rezki