From: Mika Kuoppala <mika.kuoppala@linux.intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>, intel-gfx@lists.freedesktop.org
Cc: igt-dev@lists.freedesktop.org, matthew.auld@intel.com
Subject: Re: [igt-dev] [Intel-gfx] [PATCH i-g-t 1/3] i915/gem_userptr_blits: Only mlock the memfd once, not the arena
Date: Wed, 16 Jan 2019 12:35:59 +0200 [thread overview]
Message-ID: <87o98gyjn4.fsf@gaia.fi.intel.com> (raw)
In-Reply-To: <154763269626.30063.2897361111833119934@skylake-alporthouse-com>
Chris Wilson <chris@chris-wilson.co.uk> writes:
> Quoting Mika Kuoppala (2019-01-16 09:47:27)
>> Chris Wilson <chris@chris-wilson.co.uk> writes:
>>
>> > We multiply the memfd 64k to create a 2G arena which we then attempt to
>> > write into after marking read-only. Howver, when it comes to unlock the
>>
>> s/Howver/However
>>
>> > arena after the test, performance tanks as the kernel tries to resolve
>> > the 64k repeated mappings onto the same set of pages. (Must not be a
>> > very common operation!) We can get away with just mlocking the backing
>> > store to prevent its eviction, which should prevent the arena mapping
>> > from being freed as well.
>>
>> hmm should. How are they bound?
>
> All I'm worried about are the allocs for the pud/pmd etc, which aiui are
> not freed until the pte are removed and the pte shouldn't be reaped
> because the struct page are locked. However, I haven't actually verified
> that mlocking the underlying pages is enough to be sure that the page
> tables of the various mappings are safe from eviction. On the other
> hand, munlock_vma_range doesn't scale to the abuse we put it to, and
> that is causing issues for CI!
If we can dodge it with this, great.
Noticed there is also a typo in the preceding code, in
the comment when mapping the arena: s/usuable/usable.
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
_______________________________________________
igt-dev mailing list
igt-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/igt-dev
Thread overview: 13+ messages (newest: ~2019-01-16 10:35 UTC)
2019-01-16 9:35 [igt-dev] [PATCH i-g-t 1/3] i915/gem_userptr_blits: Only mlock the memfd once, not the arena Chris Wilson
2019-01-16 9:35 ` [igt-dev] [PATCH i-g-t 2/3] i915/gem_cpu_reloc: Use a self-modifying chained batch Chris Wilson
2019-01-16 14:22 ` [igt-dev] [Intel-gfx] " Mika Kuoppala
2019-01-16 14:30 ` Chris Wilson
2019-01-16 14:49 ` Mika Kuoppala
2019-01-16 9:35 ` [igt-dev] [PATCH i-g-t 3/3] igt/drv_missed_irq: Skip if the kernel reports no rings available to test Chris Wilson
2019-01-16 14:30 ` [igt-dev] [Intel-gfx] " Mika Kuoppala
2019-01-16 9:47 ` [igt-dev] [Intel-gfx] [PATCH i-g-t 1/3] i915/gem_userptr_blits: Only mlock the memfd once, not the arena Mika Kuoppala
2019-01-16 9:58 ` Chris Wilson
2019-01-16 10:35 ` Mika Kuoppala [this message]
2019-01-16 10:46 ` [igt-dev] " Chris Wilson
2019-01-16 10:27 ` [igt-dev] ✓ Fi.CI.BAT: success for series starting with [i-g-t,1/3] " Patchwork
2019-01-16 11:58 ` [igt-dev] ✓ Fi.CI.IGT: " Patchwork