From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Lionel Landwerlin <lionel.g.landwerlin@intel.com>,
	intel-gfx@lists.freedesktop.org
Subject: Re: [PATCH v4 6/6] drm/i915: expose rcs topology through query uAPI
Date: Tue, 16 Jan 2018 08:42:29 +0000	[thread overview]
Message-ID: <b337f447-ccb9-058b-41ac-ea81727b821d@linux.intel.com> (raw)
In-Reply-To: <c5226ab6-b3b2-6957-5f48-160e3aa9c87b@intel.com>


On 15/01/2018 18:23, Lionel Landwerlin wrote:
> On 15/01/18 17:54, Tvrtko Ursulin wrote:
>>
>> On 15/01/2018 14:41, Lionel Landwerlin wrote:
>>> With the introduction of asymmetric slices in CNL, we cannot rely on
>>> the previous SUBSLICE_MASK getparam to tell userspace what subslices
>>> are available. Here we introduce a more detailed way of querying the
>>> Gen's GPU topology that doesn't aggregate numbers.
>>>
>>> This is essential for monitoring parts of the GPU with the OA unit,
>>> because counters need to be normalized to the number of
>>> EUs/subslices/slices. The current aggregated numbers like EU_TOTAL do
>>> not give us sufficient information.
>>>
>>> As a bonus we can draw representations of the GPU:
>>>
>>>          https://imgur.com/a/vuqpa
>>>
>>> v2: Rename uapi struct s/_mask/_info/ (Tvrtko)
>>>      Report max_slice/subslice/eus_per_subslice rather than strides (Tvrtko)
>>>      Add uapi macros to read data from *_info structs (Tvrtko)
>>>
>>> v3: Use !!(v & DRM_I915_BIT()) for uapi macros instead of custom shifts (Tvrtko)
>>>
>>> Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
>>> ---
>>>   drivers/gpu/drm/i915/i915_query.c | 134 ++++++++++++++++++++++++++++++++++++++
>>>   include/uapi/drm/i915_drm.h       |  53 +++++++++++++++
>>>   2 files changed, 187 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/i915/i915_query.c b/drivers/gpu/drm/i915/i915_query.c
>>> index 5694cfea4553..465ec18a472f 100644
>>> --- a/drivers/gpu/drm/i915/i915_query.c
>>> +++ b/drivers/gpu/drm/i915/i915_query.c
>>> @@ -25,8 +25,129 @@
>>>   #include "i915_drv.h"
>>>   #include <uapi/drm/i915_drm.h>
>>>
>>> +static int query_slices_info(struct drm_i915_private *dev_priv,
>>> +                 struct drm_i915_query_item *query_item)
>>> +{
>>> +    const struct sseu_dev_info *sseu = &INTEL_INFO(dev_priv)->sseu;
>>> +    struct drm_i915_query_slices_info slices_info;
>>> +    u32 data_length, length;
>>> +
>>> +    if (sseu->max_slices == 0)
>>> +        return -ENODEV;
>>> +
>>> +    data_length = sizeof(sseu->slice_mask);
>>> +    length = sizeof(slices_info) + data_length;
>>> +
>>> +    /*
>>> +     * If we ever change the internal slice mask data type, we'll need to
>>> +     * update this function.
>>> +     */
>>> +    BUILD_BUG_ON(sizeof(u8) != sizeof(sseu->slice_mask));
>>> +
>>> +    if (query_item->length == 0) {
>>> +        query_item->length = length;
>>> +        return 0;
>>> +    }
>>> +
>>> +    if (query_item->length != length)
>>> +        return -EINVAL;
>>> +
>>> +    memset(&slices_info, 0, sizeof(slices_info));
>>> +    slices_info.max_slices = sseu->max_slices;
>>> +
>>> +    if (copy_to_user(u64_to_user_ptr(query_item->data_ptr), &slices_info,
>>> +             sizeof(slices_info)))
>>> +        return -EFAULT;
>>> +
>>> +    if (copy_to_user(u64_to_user_ptr(query_item->data_ptr +
>>> +                     offsetof(struct drm_i915_query_slices_info, data)),
>>> +             &sseu->slice_mask, data_length))
>>> +        return -EFAULT;
>>> +
>>> +    return 0;
>>> +}
>>> +
>>> +static int query_subslices_info(struct drm_i915_private *dev_priv,
>>> +                struct drm_i915_query_item *query_item)
>>> +{
>>> +    const struct sseu_dev_info *sseu = &INTEL_INFO(dev_priv)->sseu;
>>> +    struct drm_i915_query_subslices_info subslices_info;
>>> +    u32 data_length, length;
>>> +
>>> +    if (sseu->max_slices == 0)
>>> +        return -ENODEV;
>>> +
>>> +    memset(&subslices_info, 0, sizeof(subslices_info));
>>> +    subslices_info.max_slices = sseu->max_slices;
>>> +    subslices_info.max_subslices = sseu->max_subslices;
>>> +
>>> +    data_length = subslices_info.max_slices *
>>> +        DIV_ROUND_UP(subslices_info.max_subslices,
>>> +                 sizeof(sseu->subslice_mask[0]) * BITS_PER_BYTE);
>>> +    length = sizeof(subslices_info) + data_length;
>>> +
>>> +    if (query_item->length == 0) {
>>> +        query_item->length = length;
>>> +        return 0;
>>> +    }
>>> +
>>> +    if (query_item->length != length)
>>> +        return -EINVAL;
>>> +
>>> +    if (copy_to_user(u64_to_user_ptr(query_item->data_ptr), &subslices_info,
>>> +             sizeof(subslices_info)))
>>> +        return -EFAULT;
>>> +
>>> +    if (copy_to_user(u64_to_user_ptr(query_item->data_ptr +
>>> +                     offsetof(struct drm_i915_query_subslices_info, data)),
>>> +             sseu->subslice_mask, data_length))
>>> +        return -EFAULT;
>>> +
>>> +    return 0;
>>> +}
>>> +
>>> +static int query_eus_info(struct drm_i915_private *dev_priv,
>>> +              struct drm_i915_query_item *query_item)
>>> +{
>>> +    const struct sseu_dev_info *sseu = &INTEL_INFO(dev_priv)->sseu;
>>> +    struct drm_i915_query_eus_info eus_info;
>>> +    u32 data_length, length;
>>> +
>>> +    if (sseu->max_slices == 0)
>>> +        return -ENODEV;
>>> +
>>> +    memset(&eus_info, 0, sizeof(eus_info));
>>> +    eus_info.max_slices = sseu->max_slices;
>>> +    eus_info.max_subslices = sseu->max_subslices;
>>> +    eus_info.max_eus_per_subslice = sseu->max_eus_per_subslice;
>>> +
>>> +    data_length = eus_info.max_slices * eus_info.max_subslices *
>>> +        DIV_ROUND_UP(eus_info.max_eus_per_subslice, BITS_PER_BYTE);
>>> +    length = sizeof(eus_info) + data_length;
>>> +
>>> +    if (query_item->length == 0) {
>>> +        query_item->length = length;
>>> +        return 0;
>>> +    }
>>> +
>>> +    if (query_item->length != length)
>>> +        return -EINVAL;
>>> +
>>> +    if (copy_to_user(u64_to_user_ptr(query_item->data_ptr), &eus_info,
>>> +             sizeof(eus_info)))
>>> +        return -EFAULT;
>>> +
>>> +    if (copy_to_user(u64_to_user_ptr(query_item->data_ptr +
>>> +                     offsetof(struct drm_i915_query_eus_info, data)),
>>> +             sseu->eu_mask, data_length))
>>> +        return -EFAULT;
>>
>> I think in all three queries, once you have the length, you could
>> extract the four if statements above into a common helper? Arguments
>> are query_item, length, data_length, ptr_query_item, ptr_data, and
>> offset_data, unless I am missing something. Up to you if you would
>> find that a worthy compaction.
> 
> I just tried that but the result doesn't really look good.
> You need to maintain the control flow via an indirect return value...

I was thinking something like this:

static int insert_query_helper_name(struct drm_i915_query_item *query_item,
				    u32 query_item_sz, u32 length,
				    u32 data_length,
				    const void *query_item_ptr,
				    const void *query_data_ptr)
{
     if (query_item->length == 0) {
         query_item->length = length;
         return 0;
     }

     if (query_item->length != length)
         return -EINVAL;

     if (copy_to_user(u64_to_user_ptr(query_item->data_ptr),
		      query_item_ptr, query_item_sz))
         return -EFAULT;

     if (copy_to_user(u64_to_user_ptr(query_item->data_ptr + query_item_sz),
		      query_data_ptr, data_length))
         return -EFAULT;

     return 0;
}

Then in the handlers just end with:

return insert_query_helper_name(...);

?
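To make the flow concrete, here is a standalone userspace-style simulation of the two-pass size negotiation all three handlers share. The struct and function names are illustrative stand-ins, not the real uAPI, and memcpy() stands in for copy_to_user():

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical, simplified stand-ins for the uAPI structs: data_ptr is
 * a real pointer here rather than a __u64 user address. */
struct query_item {
	uint32_t length;   /* in/out: 0 means "tell me the required size" */
	void *data_ptr;
};

struct slices_info {
	uint32_t max_slices;
	uint8_t data[];    /* slice mask bytes follow the header */
};

/* Simulation of the two-pass protocol of query_slices_info(). */
static int fake_query_slices(struct query_item *item)
{
	const uint32_t max_slices = 3;
	const uint8_t slice_mask = 0x7;   /* slices 0-2 present */
	uint32_t length = sizeof(struct slices_info) + sizeof(slice_mask);

	if (item->length == 0) {          /* pass 1: report the size */
		item->length = length;
		return 0;
	}

	if (item->length != length)       /* caller passed a stale size */
		return -22;               /* i.e. -EINVAL */

	/* pass 2: write the header, then the mask right after it */
	struct slices_info *info = item->data_ptr;
	info->max_slices = max_slices;
	memcpy(info->data, &slice_mask, sizeof(slice_mask));
	return 0;
}
```

Userspace calls once with length == 0, allocates item.length bytes, then calls again with the buffer to get the header and mask filled in.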

>>
>>> +
>>> +    return 0;
>>> +}
>>> +
>>>   int i915_query_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
>>>   {
>>> +    struct drm_i915_private *dev_priv = to_i915(dev);
>>>       struct drm_i915_query *args = data;
>>>       struct drm_i915_query_item __user *user_item_ptr =
>>>           u64_to_user_ptr(args->items_ptr);
>>> @@ -34,15 +155,28 @@ int i915_query_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
>>>         for (i = 0; i < args->num_items; i++, user_item_ptr++) {
>>>           struct drm_i915_query_item item;
>>> +        int ret;
>>>             if (copy_from_user(&item, user_item_ptr, sizeof(item)))
>>>               return -EFAULT;
>>>             switch (item.query_id) {
>>> +        case DRM_I915_QUERY_ID_SLICES_INFO:
>>> +            ret = query_slices_info(dev_priv, &item);
>>> +            break;
>>> +        case DRM_I915_QUERY_ID_SUBSLICES_INFO:
>>> +            ret = query_subslices_info(dev_priv, &item);
>>> +            break;
>>> +        case DRM_I915_QUERY_ID_EUS_INFO:
>>> +            ret = query_eus_info(dev_priv, &item);
>>> +            break;
>>>           default:
>>>               return -EINVAL;
>>>           }
>>>
>>> +        if (ret)
>>> +            return ret;
>>> +
>>>           if (copy_to_user(user_item_ptr, &item, sizeof(item)))
>>>               return -EFAULT;
>>>       }
>>> diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
>>> index 39e93f10f2cd..d25f21e3c107 100644
>>> --- a/include/uapi/drm/i915_drm.h
>>> +++ b/include/uapi/drm/i915_drm.h
>>> @@ -1618,6 +1618,9 @@ struct drm_i915_perf_oa_config {
>>>     struct drm_i915_query_item {
>>>       __u64 query_id;
>>> +#define DRM_I915_QUERY_ID_SLICES_INFO    0x01
>>> +#define DRM_I915_QUERY_ID_SUBSLICES_INFO 0x02
>>> +#define DRM_I915_QUERY_ID_EUS_INFO       0x03
>>
>> Would DRM_I915_QUERY_ID_"singular"_INFO read better? In struct names 
>> as well if so?
> 
> Hmm... Sorry, I don't understand your comment here.
> What would be "singular" ?

Opposite of plural, like:

_QUERY_ID_SLICE_INFO, _QUERY_ID_SUBSLICE_INFO, _QUERY_ID_EU_INFO.

Actually, now I'm wondering whether the _ID_ part could also be dropped with no loss?

Regards,

Tvrtko
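
For anyone decoding the returned masks: the layout implied by query_subslices_info() above can be read in userspace along these lines. This is a sketch only, assuming each slice owns a stride of DIV_ROUND_UP(max_subslices, 8) bytes and that bit N of a byte means subslice N is present; the function name is made up:

```c
#include <assert.h>
#include <stdint.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Test one subslice bit in the flat mask that follows the
 * drm_i915_query_subslices_info header. */
static int subslice_present(const uint8_t *data, uint32_t max_subslices,
			    uint32_t slice, uint32_t subslice)
{
	uint32_t stride = DIV_ROUND_UP(max_subslices, 8);
	uint8_t byte = data[slice * stride + subslice / 8];

	return !!(byte & (1u << (subslice % 8)));
}
```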
