Date: Wed, 07 Sep 2022 08:03:39 -0700
Message-ID: <8735d3xm44.wl-ashutosh.dixit@intel.com>
From: "Dixit, Ashutosh"
To: Tvrtko Ursulin
Cc: intel-gfx@lists.freedesktop.org
References: <20220831193355.838209-1-ashutosh.dixit@intel.com> <20220831193355.838209-2-ashutosh.dixit@intel.com> <87sflaodjp.wl-ashutosh.dixit@intel.com>
Subject: Re: [Intel-gfx] [RFC PATCH 2/2] Fix per client busyness locking

On Wed, 07 Sep 2022 00:28:48 -0700, Tvrtko Ursulin wrote:
>
> On 06/09/2022 19:29, Umesh Nerlige Ramappa wrote:
> > On Thu, Sep 01, 2022 at 04:55:22PM -0700, Dixit, Ashutosh wrote:
> >> On Wed, 31 Aug 2022 15:45:49 -0700, Umesh Nerlige Ramappa wrote:

[snip]

> >>> >     intel_gt_reset_unlock(gt, srcu);
> >>> >
> >>> > @@ -1476,17 +1476,21 @@ void intel_guc_busyness_unpark(struct intel_gt *gt)
> >>> >             guc->timestamp.ping_delay);
> >>> > }
> >>> >
> >>> > -static void __guc_context_update_clks(struct intel_context *ce)
> >>> > +static u64 guc_context_update_stats(struct intel_context *ce)
> >>> > {
> >>> >     struct intel_guc *guc = ce_to_guc(ce);
> >>> >     struct intel_gt *gt = ce->engine->gt;
> >>> >     u32 *pphwsp, last_switch, engine_id;
> >>> > -   u64 start_gt_clk, active;
> >>> >     unsigned long flags;
> >>> > +   u64 total, active = 0;
> >>> >     ktime_t unused;
> >>> >
> >>> > +   intel_context_pin(ce);
> >>>
> >>> intel_context_pin() can sleep, and we are not allowed to sleep in this
> >>> path - intel_context_get_total_runtime_ns(). However, we can sleep in
> >>> the ping worker path, so ideally we want to separate it out for the 2
> >>> paths.
> >>
> >> Do we know which intel_context_get_total_runtime_ns() call is not
> >> allowed to sleep? E.g. the code path from i915_drm_client_fdinfo() is
> >> allowed to sleep. As mentioned, I have done this in v2 of the RFC patch,
> >> which I think is sufficient, but a more complicated scheme (which I
> >> think we can avoid for now) would be to pin only in the code paths where
> >> sleeping is allowed.
> >>
> >
> > Hmm, maybe intel_context_get_total_runtime_ns() can sleep; not sure why
> > I was assuming that this falls in the perf_pmu path. This is now in the
> > drm_fdinfo query path. + Tvrtko.
> >
> > @Tvrtko, any idea if the intel_context_get_total_runtime_ns() path can
> > sleep?
>
> Not at the moment - it is called from a lockless (RCU) section when
> walking all the contexts belonging to a client, the idea being that
> performance queries should have minimal effect on a running system.

Hmm, my bad, I missed the RCU section and assumed a userspace thread would
be able to sleep.

> I think it would be possible to change in theory, but I am not sure how
> much work that is. Is there a hard need to sleep in there, or what?

GuC contexts need to be pinned/unpinned, which can sleep, but I am
investigating whether we can return a previously computed busyness value
when we cannot pin/sleep.

Thanks.
--
Ashutosh