From: Uladzislau Rezki <urezki@gmail.com>
Date: Wed, 27 Sep 2023 13:49:57 +0200
To: Jaeseon Sim
Cc: Jaeseon Sim, bhe@redhat.com, akpm@linux-foundation.org,
 hch@infradead.org, lstoakes@gmail.com, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Jaewon Kim
Subject: Re: [PATCH] mm/vmalloc: Remove WARN_ON_ONCE related to adjust_va_to_fit_type
References: <20230922062704epcms1p1722f24d4489a0435b339ce21db754ded@epcms1p1>
 <20230925105154epcms1p782c335c2355f39a9b583489c56e972f6@epcms1p7>
 <20230926052158epcms1p7fd7f3e3f523e5209977d3f5c62e85afa@epcms1p7>
 <20230926120549epcms1p4d41733c1c3698bd00eaa7e5ea0de041d@epcms1p4>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
> > Yes, but GFP_NOWAIT-alloc-error can easily occur for low-memory device.
> >
> Agree. You are really in a low memory condition. We end up here only if
> pre-loading also has not succeeded, i.e. GFP_KERNEL also fails.
>
> But i agree with you, we should "improve the warning" because we drain
> and repeat.
>
> > How about changing fix as below?:
> >
> >
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
> > @@ -1468,6 +1468,7 @@ adjust_va_to_fit_type(struct rb_root *root, struct list_head *head,
> >  		 */
> >  		va->va_start = nva_start_addr + size;
> >  	} else {
> > +		WARN_ON_ONCE(1);
> >  		return -1;
> >  	}
> >
> > @@ -1522,7 +1523,7 @@ __alloc_vmap_area(struct rb_root *root, struct list_head *head,
> >
> >  	/* Update the free vmap_area. */
> >  	ret = adjust_va_to_fit_type(root, head, va, nva_start_addr, size);
> > -	if (WARN_ON_ONCE(ret))
> > +	if (ret)
> >  		return vend;
> >
> >  #if DEBUG_AUGMENT_LOWEST_MATCH_CHECK
> >
> > @@ -4143,7 +4144,7 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
> >  			ret = adjust_va_to_fit_type(&free_vmap_area_root,
> >  						    &free_vmap_area_list,
> >  						    va, start, size);
> > -			if (WARN_ON_ONCE(unlikely(ret)))
> > +			if (unlikely(ret))
> >  				/* It is a BUG(), but trigger recovery instead. */
> >  				goto recovery;
> >
> >
> > It will WARN_ON_ONCE() only if classify_va_fit_type() is "(type == NOTHING_FIT)".
> >
> This is good but i think it should be improved further. We need to
> understand from the warning when there is no memory and when there is no
> vmap space, so:
>
> - if NOTHING_FIT, we should WARN() for sure;
> - Second place, in pcpu_get_vm_areas(), we do not use NE_FIT. Only in
>   the beginning after boot, but potentially we can trigger -ENOMEM and we
>   should warn in this case. Otherwise you just hide it;
> - And last one: if after repeating we still do not manage to allocate,
>   we should understand the reason for failing.

I think error handling should be improved.
Something like:

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index ef8599d394fd..03a36921a3fc 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1454,7 +1454,7 @@ adjust_va_to_fit_type(struct rb_root *root, struct list_head *head,
 		 */
 		lva = kmem_cache_alloc(vmap_area_cachep, GFP_NOWAIT);
 		if (!lva)
-			return -1;
+			return -ENOMEM;
 	}

 	/*
@@ -1468,7 +1468,7 @@ adjust_va_to_fit_type(struct rb_root *root, struct list_head *head,
 		 */
 		va->va_start = nva_start_addr + size;
 	} else {
-		return -1;
+		return -EINVAL;
 	}

 	if (type != FL_FIT_TYPE) {
@@ -1488,7 +1488,8 @@ adjust_va_to_fit_type(struct rb_root *root, struct list_head *head,
 static __always_inline unsigned long
 __alloc_vmap_area(struct rb_root *root, struct list_head *head,
 	unsigned long size, unsigned long align,
-	unsigned long vstart, unsigned long vend)
+	unsigned long vstart, unsigned long vend,
+	int *error)
 {
 	bool adjust_search_size = true;
 	unsigned long nva_start_addr;
@@ -1508,8 +1509,10 @@ __alloc_vmap_area(struct rb_root *root, struct list_head *head,
 		adjust_search_size = false;

 	va = find_vmap_lowest_match(root, size, align, vstart, adjust_search_size);
-	if (unlikely(!va))
+	if (unlikely(!va)) {
+		*error = -ENOENT;
 		return vend;
+	}

 	if (va->va_start > vstart)
 		nva_start_addr = ALIGN(va->va_start, align);
@@ -1517,13 +1520,17 @@ __alloc_vmap_area(struct rb_root *root, struct list_head *head,
 		nva_start_addr = ALIGN(vstart, align);

 	/* Check the "vend" restriction. */
-	if (nva_start_addr + size > vend)
+	if (nva_start_addr + size > vend) {
+		*error = -ERANGE;
 		return vend;
+	}

 	/* Update the free vmap_area. */
 	ret = adjust_va_to_fit_type(root, head, va, nva_start_addr, size);
-	if (WARN_ON_ONCE(ret))
+	if (ret) {
+		*error = ret;
 		return vend;
+	}

 #if DEBUG_AUGMENT_LOWEST_MATCH_CHECK
 	find_vmap_lowest_match_check(root, head, size, align);
@@ -1589,7 +1596,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	unsigned long freed;
 	unsigned long addr;
 	int purged = 0;
-	int ret;
+	int ret, error;

 	if (unlikely(!size || offset_in_page(size) || !is_power_of_2(align)))
 		return ERR_PTR(-EINVAL);
@@ -1613,7 +1620,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 retry:
 	preload_this_cpu_lock(&free_vmap_area_lock, gfp_mask, node);
 	addr = __alloc_vmap_area(&free_vmap_area_root, &free_vmap_area_list,
-		size, align, vstart, vend);
+		size, align, vstart, vend, &error);
 	spin_unlock(&free_vmap_area_lock);

 	trace_alloc_vmap_area(addr, size, align, vstart, vend, addr == vend);
@@ -1662,8 +1669,9 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	}

 	if (!(gfp_mask & __GFP_NOWARN) && printk_ratelimit())
-		pr_warn("vmap allocation for size %lu failed: use vmalloc= to increase size\n",
-			size);
+		pr_warn("vmap allocation for size %lu failed: "
+			"use vmalloc= to increase size, errno: (%d)\n",
+			size, error);

 	kmem_cache_free(vmap_area_cachep, va);
 	return ERR_PTR(-EBUSY);
@@ -4143,9 +4151,10 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 			ret = adjust_va_to_fit_type(&free_vmap_area_root,
 						    &free_vmap_area_list,
 						    va, start, size);
-			if (WARN_ON_ONCE(unlikely(ret)))
-				/* It is a BUG(), but trigger recovery instead. */
+			if (unlikely(ret)) {
+				WARN_ONCE(1, "%s error: errno (%d)\n", __func__, ret);
 				goto recovery;
+			}

 			/* Allocated area. */
 			va = vas[area];

Any thoughts?

--
Uladzislau Rezki