Intel-GFX Archive on lore.kernel.org
From: "Tang, CQ" <cq.tang@intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>,
	"intel-gfx@lists.freedesktop.org"
	<intel-gfx@lists.freedesktop.org>
Subject: Re: [Intel-gfx] [PATCH] drm/i915: Make the GEM reclaim workqueue high priority
Date: Tue, 13 Oct 2020 16:40:35 +0000
Message-ID: <daa1a1f388a94b07ad95ce5d12132925@intel.com>
In-Reply-To: <160260628825.2946.16678261690978530238@build.alporthouse.com>

> -----Original Message-----
> From: Chris Wilson <chris@chris-wilson.co.uk>
> Sent: Tuesday, October 13, 2020 9:25 AM
> To: Tang, CQ <cq.tang@intel.com>; intel-gfx@lists.freedesktop.org
> Subject: Re: [Intel-gfx] [PATCH] drm/i915: Make the GEM reclaim workqueue
> high priority
> 
> Quoting Tang, CQ (2020-10-13 17:19:27)
> > Chris,
> >     I tested this patch. It is still not enough; I keep running out of lmem.
> > Each worker invocation sees a larger and larger count of objects to free.
> >
> 
> Was that with the immediate call (not via call_rcu) to
> __i915_gem_free_object_rcu?
> 
> If this brings the freelist under control, the next item is judicious use of
> cond_synchronize_rcu(). We just have to make sure we penalize the right
> hog.
> 
> Otherwise, we have to shotgun apply i915_gem_flush_free_objects() and
> still find somewhere to put the rcu sync.
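
A minimal sketch of the cond_synchronize_rcu() pattern suggested above;
the cookie placement is illustrative, not the actual i915 code:

	unsigned long cookie;

	cookie = get_state_synchronize_rcu();	/* snapshot grace-period state */

	/* ... free objects, do other work ... */

	/* Blocks only if a full grace period has not already elapsed since
	 * the cookie was taken; otherwise returns immediately, so only a
	 * genuine hog pays the synchronization cost. */
	cond_synchronize_rcu(cookie);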

This is with call_rcu().
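
For reference, a simplified sketch of the two free paths being compared;
the struct and field names here are illustrative, not the real
drm_i915_gem_object layout:

	struct obj {
		struct rcu_head rcu;
		/* ... payload ... */
	};

	static void obj_free_rcu(struct rcu_head *head)
	{
		struct obj *o = container_of(head, struct obj, rcu);

		kfree(o);		/* the final free */
	}

	/* deferred: safe while RCU readers may still dereference o */
	call_rcu(&o->rcu, obj_free_rcu);

	/* immediate (the experiment below): skips the grace period, so it
	 * is only safe once no RCU reader can still reach o */
	obj_free_rcu(&o->rcu);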

I then removed cond_resched(), which did not help; then I called __i915_gem_free_object_rcu() directly and still hit the same error.
However, I noticed that queue_work() sometimes returns false, which means the work is already queued. How can that be? The worker had already run, so 'free_list' was empty:

[  117.381888] queue_work: 107967, 107930; 1
[  119.180230] queue_work: 125531, 125513; 1
[  121.349308] queue_work: 155017, 154996; 1
[  124.214885] queue_work: 193918, 193873; 1
[  127.967260] queue_work: 256838, 256776; 1
[  133.281045] queue_work: 345753, 345734; 1
[  141.457995] queue_work: 516943, 516859; 1
[  156.264420] queue_work: 863622, 863516; 1
[  156.322619] queue_work: 865849, 3163; 0
[  156.448551] queue_work: 865578, 7141; 0
[  156.882985] queue_work: 866984, 24138; 0
[  157.952163] queue_work: 862902, 53365; 0
[  159.838412] queue_work: 842522, 95504; 0
[  174.321508] queue_work: 937179, 657323; 0
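
A note on the false returns above: queue_work() returns false when the
work item's pending bit (WORK_STRUCT_PENDING_BIT) is already set, and
that bit is cleared only just before the worker function starts
executing, so a burst of frees can see false even while the previous
pass is draining (or about to drain) the list. A sketch of the producer
side; the field names are recalled from the i915 code of this era and
should be treated as approximate:

	/* producer side, roughly the i915 pattern */
	llist_add(&obj->freed, &i915->mm.free_list);
	if (!queue_work(i915->wq, &i915->mm.free_work)) {
		/* false: already pending; the queued worker will still
		 * see obj when it drains free_list, nothing is lost */
	}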

--CQ

> -Chris

Thread overview: 11+ messages
2020-10-13 10:32 [Intel-gfx] [PATCH] drm/i915: Make the GEM reclaim workqueue high priority Chris Wilson
2020-10-13 12:27 ` [Intel-gfx] ✓ Fi.CI.BAT: success for " Patchwork
2020-10-13 16:19 ` [Intel-gfx] [PATCH] " Tang, CQ
2020-10-13 16:24   ` Chris Wilson
2020-10-13 16:40     ` Tang, CQ [this message]
2020-10-13 23:29       ` Tang, CQ
2020-10-15 15:06         ` Chris Wilson
2020-10-15 20:09           ` Tang, CQ
2020-10-15 20:32             ` Chris Wilson
2020-10-15 22:25               ` Tang, CQ
2020-10-14  3:19 ` [Intel-gfx] ✗ Fi.CI.IGT: failure for " Patchwork
