From: Jerome Glisse <glisse@freedesktop.org>
To: Dave Airlie <airlied@gmail.com>
Cc: thomas@shipmail.org, linux-kernel@vger.kernel.org,
dri-devel@lists.sf.net
Subject: Re: TTM page pool allocator
Date: Fri, 26 Jun 2009 09:31:55 +0200 [thread overview]
Message-ID: <1246001515.2312.3.camel@localhost> (raw)
In-Reply-To: <21d7e9970906251700n5f5fbd07ke24022b576b1770b@mail.gmail.com>
On Fri, 2009-06-26 at 10:00 +1000, Dave Airlie wrote:
> On Thu, Jun 25, 2009 at 10:01 PM, Jerome Glisse<glisse@freedesktop.org> wrote:
> > Hi,
> >
> > Thomas i attach a reworked page pool allocator based on Dave works,
> > this one should be ok with ttm cache status tracking. It definitely
> > helps on AGP system, now the bottleneck is in mesa vertex's dma
> > allocation.
> >
>
> My original version kept a list of wb pages as well; this proved to be
> quite a useful optimisation on my test systems when I implemented it.
> Without it I was spending ~20% of my CPU getting free pages, granted
> I always used WB pages on PCIE/IGP systems.
>
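For context, the "list of wb pages" above is essentially a page pool keyed
by caching state. Below is a minimal sketch of the idea; every name in it
(page_pool, pool_alloc_page, POOL_WC, and so on) is made up for
illustration and does not match the real TTM code. Only set_memory_wc() is
a real helper (x86, declared in <asm/cacheflush.h> on kernels of this
era), and locking/error handling are stripped to the bare minimum.

    #include <linux/init.h>
    #include <linux/list.h>
    #include <linux/spinlock.h>
    #include <linux/gfp.h>
    #include <linux/mm.h>
    #include <asm/cacheflush.h>  /* set_memory_wc() on x86 */

    enum pool_caching_state {
            POOL_WB,                /* plain write-back pages */
            POOL_WC,                /* write-combined pages */
            POOL_NSTATES,
    };

    struct page_pool {
            spinlock_t lock;
            struct list_head pages; /* chained through page->lru */
            unsigned int npages;
    };

    static struct page_pool pools[POOL_NSTATES];

    static void __init pools_init(void)
    {
            int i;

            for (i = 0; i < POOL_NSTATES; i++) {
                    spin_lock_init(&pools[i].lock);
                    INIT_LIST_HEAD(&pools[i].pages);
                    pools[i].npages = 0;
            }
    }

    /* Take one page from the pool, falling back to the page allocator. */
    static struct page *pool_alloc_page(enum pool_caching_state cs, gfp_t gfp)
    {
            struct page_pool *pool = &pools[cs];
            struct page *p = NULL;

            spin_lock(&pool->lock);
            if (!list_empty(&pool->pages)) {
                    p = list_first_entry(&pool->pages, struct page, lru);
                    list_del(&p->lru);
                    pool->npages--;
            }
            spin_unlock(&pool->lock);
            if (p)
                    return p;   /* caching state set when it was pooled */

            p = alloc_page(gfp);
            if (p && cs == POOL_WC) /* assumes a lowmem page */
                    set_memory_wc((unsigned long)page_address(p), 1);
            return p;
    }

    /* Give a page back to its pool instead of freeing it. */
    static void pool_free_page(struct page *p, enum pool_caching_state cs)
    {
            struct page_pool *pool = &pools[cs];

            spin_lock(&pool->lock);
            list_add_tail(&p->lru, &pool->pages);
            pool->npages++;
            spin_unlock(&pool->lock);
    }

The point of the pool is that the caching transition (and its TLB/cache
flush) is paid once when a page enters the pool, not on every allocation.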
> Another optimisation I made at the time was around the populate call
> (not sure if this is still what happens):
>
> Allocate a 64K local BO for DMA object.
> Write into the first 5 pages from userspace - get WB pages.
> Bind to GART, swap those 5 pages to WC + flush.
> Then populate the rest with WC pages from the list.
>
> Granted I think allocating WC in the first place from the pool might
> work just as well since most of the DMA buffers are write only.
>
> Dave.
>
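To make the four quoted steps concrete, here is a rough sketch of the
bind-time caching transition they describe. struct dma_bo_pages and
bind_transition_to_wc() are invented for illustration; only
set_memory_wc() is a real x86 helper, and the sketch assumes lowmem
pages so page_address() is valid.

    #include <linux/mm.h>
    #include <asm/cacheflush.h>  /* set_memory_wc() */

    /* Hypothetical container for a DMA buffer's backing pages. */
    struct dma_bo_pages {
            struct page **pages;
            unsigned long num_pages;
            unsigned long num_wb;   /* leading pages still write-back */
    };

    /*
     * At GART-bind time, flip the pages userspace already wrote (WB)
     * to WC; the rest of the buffer can then be populated directly
     * with WC pages from a pool, so they never need a transition.
     */
    static int bind_transition_to_wc(struct dma_bo_pages *bo)
    {
            unsigned long i;
            int ret;

            for (i = 0; i < bo->num_wb; i++) {
                    ret = set_memory_wc(
                            (unsigned long)page_address(bo->pages[i]), 1);
                    if (ret)
                            return ret;
            }
            bo->num_wb = 0;
            return 0;
    }

In the quoted example only 5 of 16 pages were ever WB, so the per-page
transition cost is confined to the pages userspace actually touched.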
I think it's better to fix userspace to not allocate as many buffers per
frame as it does now, rather than keeping a pool of wb pages. I removed
the pool because memory gets tight on my 64M box; we would need to
compute the number of pages we keep based on available memory. Also I
think it's OK to assume that plain page allocation is fast enough.
I am reworking the patch with Thomas's latest comments and will post a
new one after a bit of testing.
Cheers,
Jerome
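As a rough illustration of the sizing idea mentioned above (computing the
pool cap from system memory instead of using a fixed count): si_meminfo()
is a real kernel helper, but pool_max_pages() and the 1/64 fraction are
placeholders, not anything from the actual patch.

    #include <linux/kernel.h>
    #include <linux/mm.h>

    /*
     * Cap the number of pooled pages at a small fraction of total RAM
     * so a small box is not squeezed.  The 1/64 divisor is arbitrary.
     */
    static unsigned long pool_max_pages(void)
    {
            struct sysinfo si;

            si_meminfo(&si);
            /* in-kernel, si.totalram is a page count
             * (si.mem_unit == PAGE_SIZE) */
            return si.totalram / 64;
    }

On the 64M box mentioned above, with 4K pages this works out to 16384
total pages and a cap of 256 pooled pages, i.e. about 1 MB held back.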
Thread overview: 29+ messages
2009-06-25 12:01 TTM page pool allocator Jerome Glisse
2009-06-25 15:53 ` Thomas Hellström
2009-07-21 17:34 ` Jerome Glisse
2009-07-21 18:00 ` Jerome Glisse
2009-07-21 19:22 ` Jerome Glisse
2009-07-22 8:37 ` Thomas Hellström
2009-07-28 16:48 ` ttm_mem_global Jerome Glisse
2009-07-28 18:55 ` ttm_mem_global Thomas Hellström
2009-07-29 8:59 ` ttm_mem_global Jerome Glisse
2009-07-29 9:39 ` ttm_mem_global Thomas Hellström
2009-07-29 13:04 ` ttm_mem_global Jerome Glisse
2009-07-22 13:16 ` TTM page pool allocator Michel Dänzer
2009-07-22 13:31 ` Jerome Glisse
2009-07-22 19:13 ` Thomas Hellström
2009-07-22 22:35 ` Jerome Glisse
2009-07-22 23:24 ` Keith Whitwell
2009-07-22 23:27 ` Dave Airlie
2009-07-22 8:27 ` Thomas Hellström
2009-07-22 12:12 ` Jerome Glisse
2009-07-22 19:10 ` Thomas Hellström
2009-06-26 0:00 ` Dave Airlie
2009-06-26 6:31 ` Thomas Hellström
2009-06-26 7:33 ` Jerome Glisse
2009-06-26 7:31 ` Jerome Glisse [this message]
2009-06-26 7:38 ` Dave Airlie
2009-06-26 13:59 ` Jerome Glisse
2009-06-29 21:12 ` Thomas Hellström
2009-07-09 6:06 ` Dave Airlie
2009-07-09 8:48 ` Michel Dänzer