From: David Hildenbrand <david@redhat.com>
To: Michal Hocko <mhocko@kernel.org>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Pavel Tatashin <pasha.tatashin@soleen.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Oscar Salvador <osalvador@suse.de>
Subject: Re: [PATCH v1] drivers/base/memory.c: Don't store end_section_nr in memory blocks
Date: Wed, 31 Jul 2019 15:42:53 +0200	[thread overview]
Message-ID: <58bd9479-051b-a13b-b6d0-c93aac2ed1b3@redhat.com> (raw)
In-Reply-To: <20190731132534.GQ9330@dhcp22.suse.cz>

On 31.07.19 15:25, Michal Hocko wrote:
> On Wed 31-07-19 15:12:12, David Hildenbrand wrote:
>> On 31.07.19 14:43, Michal Hocko wrote:
>>> On Wed 31-07-19 14:22:13, David Hildenbrand wrote:
>>>> Each memory block spans the same amount of sections/pages/bytes. The size
>>>> is determined before the first memory block is created. No need to store
>>>> what we can easily calculate - and the calculations even look simpler now.
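
(Illustrative only, not the actual diff: since every block spans the same
number of sections, the end section simply follows from the start, e.g.:)

#include <stdio.h>

int main(void)
{
	/* Assumed example values: 2 GiB blocks built from 128 MiB sections. */
	unsigned long sections_per_block = (2UL << 30) / (128UL << 20);  /* 16 */
	unsigned long start_section_nr = 512;  /* hypothetical block start */
	unsigned long end_section_nr = start_section_nr + sections_per_block - 1;

	printf("block spans sections %lu..%lu\n",
	       start_section_nr, end_section_nr);
	return 0;
}
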
>>>
>>> While this cleanup helps a bit, I am not sure this is really worth
>>> bothering. I guess we can agree when I say that the memblock interface
>>> is suboptimal (to put it mildly).  Shouldn't we strive for making it
>>> a real hotplug API in the future? What do I mean by that? Why should
>>> any memblock be fixed in size? Shouldn't we use hotpluggable units
>>> instead (aka a pfn range that userspace can work with sensibly)? Do we
>>> know of any existing userspace that would depend on the current single
>>> section resp. 2GB sized memblocks?
>>
>> Short story: It is already ABI (e.g.,
>> /sys/devices/system/memory/block_size_bytes) - around since 2005 (!) -
>> since we had memory block devices.
>>
>> I suspect that it is mainly manually used. But I might be wrong.
> 
> Any pointer to real userspace depending on it? Most use cases I am
> aware of rely on udev events and either onlining or offlining the memory
> in the handler.

Yes, that's also what I know - onlining and triggering kexec().

On s390x, admins online sub-increments to selectively add memory to a VM
- but we could still emulate that use case by adding memory in the kernel
at the current granularity. See

https://books.google.de/books?id=afq4CgAAQBAJ&pg=PA117&lpg=PA117&dq=/sys/devices/system/memory/block_size_bytes&source=bl&ots=iYk_vW5O4G&sig=ACfU3U0s-O-SOVaQO-7HpKO5Hj866w9Pxw&hl=de&sa=X&ved=2ahUKEwjOjPqIot_jAhVPfZoKHcxpAqcQ6AEwB3oECAgQAQ#v=onepage&q=%2Fsys%2Fdevices%2Fsystem%2Fmemory%2Fblock_size_bytes&f=false
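
Just to illustrate how such tooling typically consumes that ABI (a minimal
sketch, error handling trimmed; "memory42" below is only a placeholder):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	/* The kernel exposes the fixed block size as a hex string. */
	FILE *f = fopen("/sys/devices/system/memory/block_size_bytes", "r");
	unsigned long long block_size = 0;

	if (!f || fscanf(f, "%llx", &block_size) != 1) {
		fprintf(stderr, "cannot read block_size_bytes\n");
		return EXIT_FAILURE;
	}
	fclose(f);

	printf("memory block size: %llu MiB\n", block_size >> 20);

	/*
	 * Onlining a sub-increment then boils down to writing "online" to
	 * e.g. /sys/devices/system/memory/memory42/state (root only).
	 */
	return 0;
}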

> 
> I know we have documented this as an ABI and it is really _sad_ that
> this ABI didn't go through the normal scrutiny any user-visible interface
> should go through, but these are sins of the past...

A quick Google search indicates that

Kata Containers queries the block size:
https://github.com/kata-containers/runtime/issues/796

Powerpc userspace queries it:
https://groups.google.com/forum/#!msg/powerpc-utils-devel/dKjZCqpTxus/AwkstV2ABwAJ

I can imagine that ppc dynamic memory onlines only pieces of the added
memory - DIMMs, AFAIK (I haven't looked at the details).

There might be more users.

> 
>> Long story:
>>
>> How would you want to number memory blocks? At least no longer by phys
>> index. For now, memory blocks are ordered and numbered by their block id.
> 
> memory_${mem_section_nr_of_start_pfn}
> 

Fair enough, although this could break some scripts where people
manually offline/online specific blocks. (but who knows what
people/scripts do :( )
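
To make the difference concrete (illustrative numbers only, assuming 2G
blocks built from 128M sections):

#include <stdio.h>

int main(void)
{
	unsigned long sections_per_block = 16;  /* assumed: 2G block / 128M section */
	unsigned long start_section_nr = 512;   /* hypothetical block start */

	/* Today the sysfs name is derived from the block id ... */
	unsigned long block_id = start_section_nr / sections_per_block;
	printf("today:    memory%lu\n", block_id);           /* memory32 */

	/* ... naming by start section would rename almost every block. */
	printf("proposed: memory_%lu\n", start_section_nr);  /* memory_512 */
	return 0;
}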

>> Admins might want to online parts of a DIMM as MOVABLE/NORMAL, to more
>> reliably use huge pages but still have enough space for kernel memory
>> (e.g., page tables). They might like that a DIMM is actually a set of
>> memory blocks instead of one big chunk.
> 
> They might. Do they though? There are many theoretical use cases but,
> let's face it, there is a cost to the current state. E.g. the
> number of memblock directories is already quite large on machines with a
> lot of memory even though they use large blocks. That has negative
> implications already (e.g. the number of events you get, any iteration
> over /sys, etc.). Also 2G memblocks are quite arbitrary and they
> already limit the above use case somewhat, right?

I mean there are other theoretical issues: onlining a very big DIMM in
one shot might trigger OOM, while slowly adding/onlining it currently
works. Who knows if that is relevant in practice.

Also, it would break the current use case of memtrace, which removes
memory at a granularity different from the one it was added in. But
luckily, memtrace is an exception :)

> 
>> IOW: You can consider it a restriction to only add e.g. DIMMs as one
>> bigger chunk.
>>
>>>
>>> All that being said, I do not oppose the patch, but can we start
>>> thinking about the underlying memblock limitations rather than micro
>>> cleanups?
>>
>> I am pro cleaning up what we have right now rather than expecting it to
>> eventually change at some point in the future. (BTW, I highly doubt it
>> will change.)
> 
> I do agree, but having a fixed memblock size doesn't really go along
> with a variable memblock size if we ever go there. But as I've said, I am
> not really against the patch.

Fair enough; for now I am not convinced that we will actually see
variable memory blocks in the near future.

Thanks for the discussion (I was thinking about the same concept a while
back when trying to find out if there could be an easy way to identify
which memory blocks belong to a single DIMM you want to eventually
unplug, so you can online all of it to the MOVABLE zone).
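
Something like this was the rough idea, as a sketch with made-up numbers
(assuming the usual linear block id == phys addr / block size mapping; a
real tool would write "online_movable" into each of these state files as
root):

#include <stdio.h>

int main(void)
{
	unsigned long long block_size = 128ULL << 20;  /* from block_size_bytes */
	unsigned long long dimm_start = 4ULL << 30;    /* hypothetical DIMM range */
	unsigned long long dimm_size  = 2ULL << 30;

	unsigned long long first = dimm_start / block_size;
	unsigned long long last = (dimm_start + dimm_size - 1) / block_size;

	/* Print the state files covering the DIMM; writing "online_movable"
	 * to all of them puts the whole DIMM into the MOVABLE zone. */
	for (unsigned long long id = first; id <= last; id++)
		printf("/sys/devices/system/memory/memory%llu/state\n", id);

	return 0;
}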

-- 

Thanks,

David / dhildenb



Thread overview: 18+ messages
2019-07-31 12:22 [PATCH v1] drivers/base/memory.c: Don't store end_section_nr in memory blocks David Hildenbrand
2019-07-31 12:43 ` Michal Hocko
2019-07-31 13:12   ` David Hildenbrand
2019-07-31 13:25     ` Michal Hocko
2019-07-31 13:42       ` David Hildenbrand [this message]
2019-07-31 14:04         ` David Hildenbrand
2019-07-31 14:15           ` Michal Hocko
2019-07-31 14:23             ` David Hildenbrand
2019-07-31 14:14         ` Michal Hocko
2019-07-31 14:21           ` David Hildenbrand
2019-07-31 14:37             ` Michal Hocko
2019-07-31 14:43               ` David Hildenbrand
2019-08-01  6:13                 ` Michal Hocko
2019-08-01  7:00                   ` David Hildenbrand
2019-08-01  8:27                     ` Michal Hocko
2019-08-01  8:36                       ` David Hildenbrand
2019-07-31 20:57 ` Andrew Morton
2019-08-01  6:48   ` David Hildenbrand
