From: Ira Weiny <ira.weiny@intel.com>
To: Jonathan Cameron <Jonathan.Cameron@huawei.com>,
	Ira Weiny <ira.weiny@intel.com>
Cc: "Li, Ming4" <ming4.li@intel.com>,
	Dave Jiang <dave.jiang@intel.com>, "Fan Ni" <fan.ni@samsung.com>,
	Navneet Singh <navneet.singh@intel.com>,
	"Jonathan Corbet" <corbet@lwn.net>,
	Andrew Morton <akpm@linux-foundation.org>,
	"Dan Williams" <dan.j.williams@intel.com>,
	Davidlohr Bueso <dave@stgolabs.net>,
	Alison Schofield <alison.schofield@intel.com>,
	Vishal Verma <vishal.l.verma@intel.com>,
	<linux-btrfs@vger.kernel.org>, <linux-cxl@vger.kernel.org>,
	<linux-doc@vger.kernel.org>, <nvdimm@lists.linux.dev>,
	<linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v4 21/28] cxl/extent: Process DCD events and realize region extents
Date: Mon, 21 Oct 2024 09:04:36 -0500	[thread overview]
Message-ID: <67165f7447c77_8cb17294f0@iweiny-mobl.notmuch> (raw)
In-Reply-To: <20241018100307.000008a9@Huawei.com>

Jonathan Cameron wrote:
> On Thu, 17 Oct 2024 16:15:03 -0500
> Ira Weiny <ira.weiny@intel.com> wrote:
> 
> > Jonathan Cameron wrote:
> > > On Wed, 9 Oct 2024 14:49:09 -0500
> > > Ira Weiny <ira.weiny@intel.com> wrote:
> > >   
> > > > Li, Ming4 wrote:  
> > > > > On 10/8/2024 7:16 AM, ira.weiny@intel.com wrote:    
> > > > > > From: Navneet Singh <navneet.singh@intel.com>
> > > > > >    
> > 
> > [snip]
> > 

[snip]

> > 
> > So...  for clarity among all of us, here is the new function.  I'm not
> > thrilled with the use of a goto, but I think it is okay here.
> 
> Easy enough to avoid and I don't think it hurts readability much to do so.

I disagree...  See below.

> 
> Your code should work though.
> 
> > 
> > Ira
> > 
> > static int cxl_send_dc_response(struct cxl_memdev_state *mds, int opcode,      
> >                                struct xarray *extent_array, int cnt)           
> > {                                                                              
> >        struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;                    
> >        struct cxl_mbox_dc_response *p;                                         
> >        struct cxl_mbox_cmd mbox_cmd;                                           
> >        struct cxl_extent *extent;                                              
> >        unsigned long index;                                                    
> >        u32 pl_index;                                                           
> >        int rc;                                                                 
> >                                                                                
> >        size_t pl_size = struct_size(p, extent_list, cnt);                      
> >        u32 max_extents = cnt;                                              
> >                                                                                
> >        /* May need to set the MORE flag across multiple responses. */
> >        if (pl_size > cxl_mbox->payload_size) {                                 
> >                max_extents = (cxl_mbox->payload_size - sizeof(*p)) /           
> >                              sizeof(struct updated_extent_list);               
> >                pl_size = struct_size(p, extent_list, max_extents);
> >        }                                                                       
> >                                                                                
> >        struct cxl_mbox_dc_response *response __free(kfree) =                   
> >                                                kzalloc(pl_size, GFP_KERNEL);   
> >        if (!response)                                                          
> >                return -ENOMEM;                                                 
> >                                                                                
> >        pl_index = 0;                                                           
> >        if (cnt == 0)                                                           
> >                goto send_zero_accepted;
> >        xa_for_each(extent_array, index, extent) {                              
> >                response->extent_list[pl_index].dpa_start = extent->start_dpa;  
> >                response->extent_list[pl_index].length = extent->length;        
> >                pl_index++;                                                     
> >                response->extent_list_size = cpu_to_le32(pl_index);    
> 
> Why set this here - to me makes more sense to set it only once but I can
> see the logic either way.

I put it here to group it with the update of pl_index, though it is extra work.

Since I'm resending I'll make the quick change.

>          
> >   
> >                if (pl_index == max_extents) {                                  
> >                        mbox_cmd = (struct cxl_mbox_cmd) {                      
> >                                .opcode = opcode,                               
> >                                .size_in = struct_size(response, extent_list,   
> >                                                       pl_index),               
> >                                .payload_in = response,                         
> >                        };                                                      
> >                                                                                
> >                        response->flags = 0;                                    
> >                        if (pl_index < cnt)                                     
> >                                response->flags |= CXL_DCD_EVENT_MORE;
> >                                                                                
> >                        rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);        
> >                        if (rc)                                                 
> >                                return rc;                                      
> >                        cnt -= pl_index;                                        
> >                        pl_index = 0;                                          
> >                }                                                               
> >        }                                                                       
> >                                                                                
> >        if (!pl_index)                                                          
> >                return 0;                                                       
> >                                                                                
> > send_zero_accepted:                                                            
> >        mbox_cmd = (struct cxl_mbox_cmd) {                                      
> >                .opcode = opcode,                                               
> >                .size_in = struct_size(response, extent_list,                   
> >                                       pl_index),                               
> >                .payload_in = response,                                         
> >        };                                                                      
> >                                                                                
> >        response->flags = 0;                                                    
> >        return cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);                      
> > }                
> 
> 
> Alternative form for what you have...

Sure, but that puts a lot of indentation on the common path, which I have
grown to avoid...  :-/

Looking at this fresh...  A helper function works best.



static int send_one_response(struct cxl_mailbox *cxl_mbox,
                             struct cxl_mbox_dc_response *response,
                             int opcode, u32 extent_list_size, u8 flags)
{
        struct cxl_mbox_cmd mbox_cmd = (struct cxl_mbox_cmd) {
                .opcode = opcode,
                .size_in = struct_size(response, extent_list, extent_list_size),
                .payload_in = response,
        };

        response->extent_list_size = cpu_to_le32(extent_list_size);
        response->flags = flags;
        return cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
}

static int cxl_send_dc_response(struct cxl_memdev_state *mds, int opcode,
                                struct xarray *extent_array, int cnt)
{
        struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
        struct cxl_mbox_dc_response *p;
        struct cxl_extent *extent;
        unsigned long index;
        u32 pl_index;

        size_t pl_size = struct_size(p, extent_list, cnt);
        u32 max_extents = cnt;

        /* May need to set the MORE flag across multiple responses. */
        if (pl_size > cxl_mbox->payload_size) {
                max_extents = (cxl_mbox->payload_size - sizeof(*p)) /
                              sizeof(struct updated_extent_list);
                pl_size = struct_size(p, extent_list, max_extents);
        }

        struct cxl_mbox_dc_response *response __free(kfree) =
                                                kzalloc(pl_size, GFP_KERNEL);
        if (!response)
                return -ENOMEM;

        if (cnt == 0)
                return send_one_response(cxl_mbox, response, opcode, 0, 0);

        pl_index = 0;
        xa_for_each(extent_array, index, extent) {
                response->extent_list[pl_index].dpa_start = extent->start_dpa;
                response->extent_list[pl_index].length = extent->length;
                pl_index++;

                if (pl_index == max_extents) {
                        u8 flags = 0;
                        int rc;

                        if (pl_index < cnt)
                                flags |= CXL_DCD_EVENT_MORE;
                        rc = send_one_response(cxl_mbox, response, opcode,
                                               pl_index, flags);
                        if (rc) 
                                return rc;
                        cnt -= pl_index;
                        pl_index = 0;
                }
        }

        if (!pl_index) /* nothing more to do */
                return 0;
        return send_one_response(cxl_mbox, response, opcode, pl_index, 0);
}
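For anyone following along, the batching math above can be sketched in
isolation.  This is an illustrative userspace model, not kernel code: the
function name and the int-based flag array are made up for the example.  It
only captures when a response command is issued and when the MORE flag would
be set, under the assumption (matching the code above) that a zero-extent
case still sends one response.

```c
#include <assert.h>

/*
 * Illustrative sketch (not kernel code) of the batching in
 * cxl_send_dc_response(): given cnt extents and room for at most
 * max_extents per mailbox payload, record how many commands are sent
 * and on which of them the MORE flag would be set.
 */
static int count_response_messages(int cnt, int max_extents, int *more_flags)
{
	int msgs = 0;

	/* cnt == 0 still sends one response accepting zero extents */
	if (cnt == 0) {
		more_flags[msgs++] = 0;
		return msgs;
	}

	while (cnt > 0) {
		int batch = cnt < max_extents ? cnt : max_extents;

		/* MORE is set only when extents remain after this batch */
		more_flags[msgs++] = (cnt > batch);
		cnt -= batch;
	}
	return msgs;
}
```

So 10 extents with room for 4 per payload produce three commands, with MORE
set on the first two and clear on the last.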
