public inbox for linux-s390@vger.kernel.org
From: Alexandra Winter <wintera@linux.ibm.com>
To: dust.li@linux.alibaba.com,
	Niklas Schnelle <schnelle@linux.ibm.com>,
	Julian Ruess <julianr@linux.ibm.com>,
	Wenjia Zhang <wenjia@linux.ibm.com>,
	Jan Karcher <jaka@linux.ibm.com>,
	Gerd Bayer <gbayer@linux.ibm.com>,
	Halil Pasic <pasic@linux.ibm.com>,
	"D. Wythe" <alibuda@linux.alibaba.com>,
	Tony Lu <tonylu@linux.alibaba.com>,
	Wen Gu <guwen@linux.alibaba.com>,
	Peter Oberparleiter <oberpar@linux.ibm.com>,
	David Miller <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	Eric Dumazet <edumazet@google.com>,
	Andrew Lunn <andrew+netdev@lunn.ch>
Cc: Thorsten Winkler <twinkler@linux.ibm.com>,
	netdev@vger.kernel.org, linux-s390@vger.kernel.org,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Alexander Gordeev <agordeev@linux.ibm.com>,
	Christian Borntraeger <borntraeger@linux.ibm.com>,
	Sven Schnelle <svens@linux.ibm.com>,
	Simon Horman <horms@kernel.org>
Subject: Re: [RFC net-next 0/7] Provide an ism layer
Date: Mon, 10 Feb 2025 10:38:27 +0100	[thread overview]
Message-ID: <1e96806f-0a4e-4292-9483-928b1913d311@linux.ibm.com>
In-Reply-To: <20250210050851.GS89233@linux.alibaba.com>



On 10.02.25 06:08, Dust Li wrote:
> On 2025-01-28 17:04:53, Alexandra Winter wrote:
>>
>>
>> On 18.01.25 16:31, Dust Li wrote:
>>> On 2025-01-17 11:38:39, Niklas Schnelle wrote:
>>>> On Fri, 2025-01-17 at 10:13 +0800, Dust Li wrote:
>>>>>>
>>>> ---8<---
>>>>>>>> Here are some of my thoughts on the matter:
>>>>>>>>
>>>>>>>> Naming and Structure: I suggest we refer to it as SHD (Shared Memory
>>>>>>>> Device) instead of ISM (Internal Shared Memory). 
>>>>>>
>>>>>>
>>>>>> So where does the 'H' come from? If you want to call it Shared Memory _D_evice?
>>>>>
>>>>> Oh, I was trying to refer to SHM (shared memory files in userspace,
>>>>> see shm_open(3)). SMD is also OK.
>>>>>
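The userspace SHM analogy above can be sketched with Python's multiprocessing.shared_memory, which wraps the same POSIX shm_open mechanism; the function name and sizes here are made up for illustration and have nothing to do with the proposed kernel layer:

```python
# Illustration of the shm_open(3) analogy: a named shared-memory
# object that two independent handles (standing in for two
# processes) can both map and see.
from multiprocessing import shared_memory

def shm_roundtrip(payload: bytes) -> bytes:
    """Write via one handle to a named object, read via another."""
    writer = shared_memory.SharedMemory(create=True, size=4096)
    try:
        # A second, independent handle to the same named object,
        # standing in for the peer process.
        reader = shared_memory.SharedMemory(name=writer.name)
        writer.buf[:len(payload)] = payload         # write via handle 1
        data = bytes(reader.buf[:len(payload)])     # read via handle 2
        reader.close()
        return data
    finally:
        writer.close()
        writer.unlink()   # remove the named object
```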
>>>>>>
>>>>>>
>>>>>>>> To my knowledge, a "Shared Memory Device" better encapsulates
>>>>>>>> the functionality we're aiming to implement.
>>>>>>
>>>>>>
>>>>>> Could you explain why that would be better?
>>>>>> 'Internal Shared Memory' is supposed to be a bit of a counterpart to the
>>>>>> Remote 'R' in RoCE. Not the greatest name, but it is used already by our ISM
>>>>>> devices and by ism_loopback. So what is the benefit in changing it?
>>>>>
>>>>> I believe that if we are going to separate and refine the code, and add
>>>>> a common subsystem, we should choose the most appropriate name.
>>>>>
>>>>> In my opinion, "ISM" doesn’t quite capture what the device provides.
>>>>> Since we’re adding a "Device" that enables different entities (such as
>>>>> processes or VMs) to perform shared memory communication, I think a more
>>>>> fitting name would be better. If you have any alternative suggestions,
>>>>> I’m open to them.
>>>>
>>>> I kept thinking about this a bit and I'd like to propose yet another
>>>> name for this group of devices: Memory Communication Devices (MCD)
>>>>
>>>> One important point I see is that there is a bit of a misnomer in the
>>>> existing ISM name in that our ISM device does in fact *not* share
>>>> memory in the common sense of the "shared memory" wording. Instead it
>>>> copies data between partitions of memory that share a common
>>>> cache/memory hierarchy while not sharing the memory itself.
>>>> loopback-ism and a possibly future virtio-ism on the other hand would
>>>> share memory in the "shared memory" sense. Though I'd very much hope
>>>> they will retain a copy mode to allow use in partition scenarios.
>>>>
>>>> With that background I think the common denominator between them and
>>>> the main idea behind ISM is that they facilitate communication via
>>>> memory buffers and very simple and reliable copy/share operations. I
>>>> think this would also capture our planned use-case of devices (TTYs,
>>>> block devices, framebuffers + HID etc) provided by a peer on top of
>>>> such a memory communication device.
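The copy-vs-share distinction drawn above can be sketched as follows; the class and method names are invented for illustration and are not the proposed kernel API:

```python
# Illustrative sketch: an ISM-style device copies between distinct
# memory partitions, while a loopback-ism-style device truly shares
# one buffer.  All names are made up.

class CopyModeDevice:
    """ISM-style: sender and receiver own distinct partitions; a
    'move' operation copies data across."""
    def __init__(self, size: int):
        self.local = bytearray(size)    # sender-side partition
        self.remote = bytearray(size)   # receiver-side partition

    def move(self, offset: int, data: bytes) -> None:
        # data is copied; later writes to self.local do not disturb
        # what the peer has already received
        self.remote[offset:offset + len(data)] = bytes(data)

class ShareModeDevice:
    """loopback-ism-style: both peers map the very same memory."""
    def __init__(self, size: int):
        self.local = bytearray(size)
        self.remote = self.local        # same object, truly shared
```

In the copy-mode sketch a later write to `local` leaves the peer's copy intact; in the share-mode sketch both names alias one buffer, which is "shared memory" in the common sense.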
>>>
>>> Make sense, I agree with MCD.
>>>
>>> Best regards,
>>> Dust
>>>
>>
>>
> 
> Hi Winter,
> 
> Sorry for the late reply; we were on break for the Chinese Spring
> Festival.
> 
>>
>> The discussion with Andrew Lunn showed that
>> a) we need an abstract description of 'ISM' devices (noted)
>> b) DMBs (Direct Memory Buffers) are a critical differentiator.
>>
>> So what do you think of Direct Memory Communication (DMC) as a class name for these devices?
>>
>> I don't have a strong preference (we could also stay with ISM). But DMC may be a bit more
>> concrete than MCD or ISM.
> 
> I personally prefer MCD over Direct Memory Communication (DMC).
> 
> For loopback or Virtio-ISM, DMC seems like a good choice. However, for
> IBM ISM, since there's a DMA copy involved, it doesn’t seem truly "Direct,"
> does it?
> 
> Additionally, since we are providing a device, MCD feels like a more
> fitting choice, as it aligns better with the concept of a "device."
> 
> Best regards,
> Dust

Thank you for your thoughts, Dust.
For me the 'D' as in 'direct' is not so much about the number of copies, but about the
fact that you can write directly at any offset into the buffer, i.e. no queues.
More like the D in DMA or RDMA.

I am preparing a talk for netdev in March about this subject, and the more I work on it,
the more it seems to me that the buffers ('B') that are
a) only authorized for a single remote device and
b) can be accessed at any offset
are the important differentiator compared to other virtual devices.
So maybe 'D' for Dedicated?
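A minimal sketch of those two buffer properties (the class name, peer names, and methods are invented for illustration, not the proposed API):

```python
class DedicatedBuffer:
    """Sketch of the two properties above: (a) the buffer is
    authorized for exactly one remote device, and (b) the remote
    can write at any offset -- there is no queue in between."""
    def __init__(self, size: int, authorized_remote: str):
        self._mem = bytearray(size)
        self._remote = authorized_remote   # the single allowed peer

    def write(self, remote: str, offset: int, data: bytes) -> None:
        if remote != self._remote:
            raise PermissionError(f"{remote!r} is not authorized")
        # direct write at an arbitrary offset -- no queue, no ordering
        self._mem[offset:offset + len(data)] = data

    def read(self, offset: int, size: int) -> bytes:
        return bytes(self._mem[offset:offset + size])
```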

I even came up with
dibs - Dedicated Internal Buffer Sharing or
dibc - Dedicated Internal Buffer Communication
(ok, I like the sound and look of the 'I'. But being on the same hardware as opposed
to RDMA is also an important aspect.)


MCD - 'memory communication device' sounds rather vague to me. But if it is the
lowest common denominator, i.e. the only thing we can all agree on, I could live with it.




Thread overview: 61+ messages
2025-01-15 19:55 [RFC net-next 0/7] Provide an ism layer Alexandra Winter
2025-01-15 19:55 ` [RFC net-next 1/7] net/ism: Create net/ism Alexandra Winter
2025-01-16 20:08   ` Andrew Lunn
2025-01-17 12:06     ` Alexandra Winter
2025-01-15 19:55 ` [RFC net-next 2/7] net/ism: Remove dependencies between ISM_VPCI and SMC Alexandra Winter
2025-01-15 19:55 ` [RFC net-next 3/7] net/ism: Use uuid_t for ISM GID Alexandra Winter
2025-01-20 17:18   ` Simon Horman
2025-01-22 14:46     ` Alexandra Winter
2025-01-15 19:55 ` [RFC net-next 4/7] net/ism: Add kernel-doc comments for ism functions Alexandra Winter
2025-01-15 22:06   ` Halil Pasic
2025-01-20  6:32   ` Dust Li
2025-01-20  9:56     ` Alexandra Winter
2025-01-20 10:07       ` Julian Ruess
2025-01-20 11:35         ` Alexandra Winter
2025-01-20 10:34     ` Niklas Schnelle
2025-01-22 15:02       ` Dust Li
2025-01-15 19:55 ` [RFC net-next 5/7] net/ism: Move ism_loopback to net/ism Alexandra Winter
2025-01-20  3:55   ` Dust Li
2025-01-20  9:31     ` Alexandra Winter
2025-02-06 17:36   ` Julian Ruess
2025-02-10 10:39     ` Alexandra Winter
2025-01-15 19:55 ` [RFC net-next 6/7] s390/ism: Define ismvp_dev Alexandra Winter
2025-01-15 19:55 ` [RFC net-next 7/7] net/smc: Use only ism_ops Alexandra Winter
2025-01-16  9:32 ` [RFC net-next 0/7] Provide an ism layer Dust Li
2025-01-16 11:55   ` Julian Ruess
2025-01-16 16:17     ` Alexandra Winter
2025-01-16 17:08       ` Julian Ruess
2025-01-17  2:13       ` Dust Li
2025-01-17 10:38         ` Niklas Schnelle
2025-01-17 15:02           ` Andrew Lunn
2025-01-17 16:00             ` Niklas Schnelle
2025-01-17 16:33               ` Andrew Lunn
2025-01-17 16:57                 ` Niklas Schnelle
2025-01-17 20:29                   ` Andrew Lunn
2025-01-20  6:21                     ` Dust Li
2025-01-20 12:03                       ` Alexandra Winter
2025-01-20 16:01                         ` Andrew Lunn
2025-01-20 17:25                           ` Alexandra Winter
2025-01-18 15:31           ` Dust Li
2025-01-28 16:04             ` Alexandra Winter
2025-02-10  5:08               ` Dust Li
2025-02-10  9:38                 ` Alexandra Winter [this message]
2025-02-11  1:57                   ` Dust Li
2025-02-16 15:40                   ` Wen Gu
2025-02-19 11:25                     ` [RFC net-next 0/7] Provide an ism layer - naming Alexandra Winter
2025-02-25  1:36                       ` Dust Li
2025-02-25  8:40                         ` Alexandra Winter
2025-01-17 13:00         ` [RFC net-next 0/7] Provide an ism layer Alexandra Winter
2025-01-17 15:10           ` Andrew Lunn
2025-01-17 16:20             ` Alexandra Winter
2025-01-20 10:28           ` Alexandra Winter
2025-01-22  3:04             ` Dust Li
2025-01-22 12:02               ` Alexandra Winter
2025-01-22 12:05                 ` Alexandra Winter
2025-01-22 14:10                   ` Dust Li
2025-01-17 15:06       ` Andrew Lunn
2025-01-17 15:38         ` Alexandra Winter
2025-02-16 15:38       ` Wen Gu
2025-01-17 11:04   ` Alexandra Winter
2025-01-18 15:24     ` Dust Li
2025-01-20 11:45       ` Alexandra Winter
