Intel-XE Archive on lore.kernel.org
From: Rodrigo Vivi <rodrigo.vivi@intel.com>
To: "Kumar, Janga Rahul" <janga.rahul.kumar@intel.com>
Cc: "Roper, Matthew D" <matthew.d.roper@intel.com>,
	"intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>,
	"Gandi, Ramadevi" <ramadevi.gandi@intel.com>
Subject: Re: [Intel-xe] [PATCH] drm/xe: Extend query ioctl to expose tile count
Date: Wed, 11 Oct 2023 13:00:50 -0400	[thread overview]
Message-ID: <ZSbUwmEQOE8l5CIk@intel.com> (raw)
In-Reply-To: <MW3PR11MB4748563072E5947514D726C2AECAA@MW3PR11MB4748.namprd11.prod.outlook.com>

On Thu, Oct 05, 2023 at 03:26:50PM +0000, Kumar, Janga Rahul wrote:
> 
> 
> > -----Original Message-----
> > From: Vivi, Rodrigo <rodrigo.vivi@intel.com>
> > Sent: Wednesday, October 4, 2023 9:48 PM
> > To: Kumar, Janga Rahul <janga.rahul.kumar@intel.com>
> > Cc: intel-xe@lists.freedesktop.org; Gandi, Ramadevi
> > <ramadevi.gandi@intel.com>; Roper, Matthew D
> > <matthew.d.roper@intel.com>
> > Subject: Re: [Intel-xe] [PATCH] drm/xe: Extend query ioctl to expose tile
> > count
> > 
> > On Wed, Oct 04, 2023 at 02:21:38PM +0530, Janga Rahul Kumar wrote:
> > > Tile count can be queried by UMDs using the query ioctl.
> > > The VM bind ioctl has a tile_mask param whose valid range can be
> > > determined based on the tile count info.
> > 
> > And what exactly is the UMD doing with the tile count?
> The KMD expects the UMD to pass tile_mask as part of the VM bind ioctl (a drm_xe_vm_bind_op param):
>         /**
>          * @tile_mask: Mask for which tiles to create binds for, 0 == All tiles,
>          * only applies to creating new VMAs
>          */
>         __u64 tile_mask;
> How can the UMD be sure of the valid tile_mask range if the tile count is not known?

it is strange that it also receives a 'region' instance above...
and that it is a single region and not a region_mask.

but shouldn't we replace both the region instance and tile_mask with
a single region_mask?

> 
> > 
> > Please let's hold this idea for now. I'm leaning more toward proposing
> > that we kill both the tile and gt concepts in the uapi. For the user,
> > the important parts are the hw-engine and the memory region where the
> > BO is placed.
> Sure. For the user, the hw-engine and region info is most important, but the
> relation between an engine id and its local memory region id is also needed
> in scenarios where the user wants to allocate a BO on the same tile region
> on which the engine resides.

yes, if instead of query_gt we have a query_engine where each engine reports
its memory regions, then we need neither tile nor gt and can use the info
from the engines directly.

> 
> Thanks,
> Rahul 
> > 
> > >
> > > Cc: Aravind Iddamsetty <aravind.iddamsetty@intel.com>
> > > Cc: Matt Roper <matthew.d.roper@intel.com>
> > > Signed-off-by: Janga Rahul Kumar <janga.rahul.kumar@intel.com>
> > > ---
> > >  drivers/gpu/drm/xe/xe_query.c | 2 ++
> > >  include/uapi/drm/xe_drm.h     | 3 ++-
> > >  2 files changed, 4 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/gpu/drm/xe/xe_query.c b/drivers/gpu/drm/xe/xe_query.c
> > > index a951205100fe..ada1f26dc1d9 100644
> > > --- a/drivers/gpu/drm/xe/xe_query.c
> > > +++ b/drivers/gpu/drm/xe/xe_query.c
> > > @@ -200,6 +200,8 @@ static int query_config(struct xe_device *xe, struct drm_xe_device_query *query)
> > >  		hweight_long(xe->info.mem_region_mask);
> > >  	config->info[XE_QUERY_CONFIG_MAX_ENGINE_PRIORITY] =
> > >  		xe_exec_queue_device_get_max_priority(xe);
> > > +	config->info[XE_QUERY_CONFIG_TILE_COUNT] =
> > > +		xe->info.tile_count;
> > >
> > >  	if (copy_to_user(query_ptr, config, size)) {
> > >  		kfree(config);
> > > diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> > > index d48d8e3c898c..1d207b5398ae 100644
> > > --- a/include/uapi/drm/xe_drm.h
> > > +++ b/include/uapi/drm/xe_drm.h
> > > @@ -257,7 +257,8 @@ struct drm_xe_query_config {
> > >  #define XE_QUERY_CONFIG_GT_COUNT		4
> > >  #define XE_QUERY_CONFIG_MEM_REGION_COUNT	5
> > >  #define XE_QUERY_CONFIG_MAX_ENGINE_PRIORITY	6
> > > -#define XE_QUERY_CONFIG_NUM_PARAM		(XE_QUERY_CONFIG_MAX_ENGINE_PRIORITY + 1)
> > > +#define XE_QUERY_CONFIG_TILE_COUNT		7
> > > +#define XE_QUERY_CONFIG_NUM_PARAM		(XE_QUERY_CONFIG_TILE_COUNT + 1)
> > >  	/** @info: array of elements containing the config info */
> > >  	__u64 info[];
> > >  };
> > > --
> > > 2.25.1
> > >


Thread overview: 11+ messages
2023-10-04  8:51 [Intel-xe] [PATCH] drm/xe: Extend query ioctl to expose tile count Janga Rahul Kumar
2023-10-04  8:51 ` [Intel-xe] ✓ CI.Patch_applied: success for " Patchwork
2023-10-04  8:51 ` [Intel-xe] ✓ CI.checkpatch: " Patchwork
2023-10-04  8:52 ` [Intel-xe] ✓ CI.KUnit: " Patchwork
2023-10-04  8:59 ` [Intel-xe] ✓ CI.Build: " Patchwork
2023-10-04  9:00 ` [Intel-xe] ✗ CI.Hooks: failure " Patchwork
2023-10-04  9:01 ` [Intel-xe] ✓ CI.checksparse: success " Patchwork
2023-10-04  9:41 ` [Intel-xe] ✗ CI.BAT: failure " Patchwork
2023-10-04 16:18 ` [Intel-xe] [PATCH] " Rodrigo Vivi
2023-10-05 15:26   ` Kumar, Janga Rahul
2023-10-11 17:00     ` Rodrigo Vivi [this message]
