From: Tom Cooksey <thomas.cooksey@trolltech.com>
To: Bill Gatliff <bgat@billgatliff.com>
Cc: Robert Schwebel <r.schwebel@pengutronix.de>,
linux-embedded mailing list <linux-embedded@vger.kernel.org>
Subject: Re: Getting physical addresses of mmap'd pages from userspace
Date: Mon, 13 Oct 2008 16:45:31 +0200
Message-ID: <200810131645.32805.thomas.cooksey@trolltech.com>
In-Reply-To: <48F34386.8040205@billgatliff.com>
On Monday 13 October 2008 14:48:06 Bill Gatliff wrote:
> Tom Cooksey wrote:
> > On Friday 10 October 2008 21:12:13 Robert Schwebel wrote:
> >> On Fri, Oct 10, 2008 at 06:15:05PM +0200, Tom Cooksey wrote:
> >>> Is there any way to get the physical address of mlock()'d memory from
> >>> userspace?
> >> What do you want to do? I don't really see a reason to do such strange
> >> things any more these days.
> >
> > It's quite an annoying problem actually. Basically I have binary drivers for Imagination Technologies' PowerVR graphics hardware. We have tried very hard to get the source code for these drivers and have failed. Eventually ImgTec allowed us to sign an NDA to get the headers for one of their user-space libraries. This library allows us to direct the graphics hardware to render to a specific physical memory region. The problem is that there's no way to find out the physical addresses we need to pass to the graphics hardware (via the user-space library). Although the library allows us to allocate memory, what I want to do is then blit that memory in a different process. So a client process renders into an off-screen buffer and the server process blits that buffer to the framebuffer. By allowing the server process to do the blit rather than the client process, we can get semi-transparent GL windows.
> >
> > The synchronisation we can do, it's the memory allocation I'm struggling with. If we ask the library to allocate the memory for us, we don't get the physical address to pass to the server process. Instead, we need to allocate memory ourselves and pass the physical address to the library. But like I say, I can't find a way to get the physical address from the kernel.
> >
> > I realise getting round closed-source drivers isn't exactly encouraged, but sadly, we really have no choice. :-(
>
> Not that this would help, but what if the blitting process was working with a
> shared memory area with the, er, main process? Could the allocation be done in
> the library, then?
>
> Reading your post carefully, it sounds like either the library in question is
> designed to run in kernel space (where you can get the physical address), isn't
> a Linux library, or there's some more code somewhere that can do the address
> translation--- you said that "This library allows us to direct the graphics
> hardware to render to a specific physical memory region". How are other users
> of the library getting the parameters that the library needs?
The other user is more closed-source OpenGL library code. The problem is that
the drivers are designed to blit from the client process, where the library
does the allocation and the blits (which means opaque-only blits). If we want
transparent windows or fancy window-composition effects we have to blit from a
central server process and have clients render into off-screen buffers.
Also, you're right - the library seems not to be Linux-specific but targeted at
other OSes (which ones I can't say without breaking the NDA I had to sign!).
> What about writing a hacky little module that just does virtual-to-physical
> translation, and returns the result?
>
> Ugh, I think I need to shower. :)
Yup, I think that's what I'm going to end up having to do. Even if I can get
the physical address, I need some way to flush the CPU cache to RAM before I
ask the GPU to blit. I doubt there's any way to do that from userspace. :-( I'm
not even sure of the kernel API to do it. Any ideas?
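For what it's worth, a "hacky little module" along those lines would need to do two things: pin the user page, and let the DMA API do the cache maintenance before the GPU touches it. The shape might be roughly the following - hedged heavily: get_user_pages(), page_to_phys() and dma_map_page() all exist in mainline, but their exact signatures have shifted across kernel versions, and error handling and unpinning are elided, so treat this as a sketch rather than buildable code:

```c
/* Sketch only: resolve one user virtual address to a physical address
 * and perform the cache writeback a GPU blit would need. Signatures
 * vary by kernel version; put_page()/unmap on completion is elided. */
#include <linux/mm.h>
#include <linux/dma-mapping.h>

static int resolve_and_flush(struct device *dev, unsigned long uaddr,
                             phys_addr_t *phys_out)
{
    struct page *page;

    /* Pin the page so it cannot be swapped or migrated while the
     * GPU is using it. */
    if (get_user_pages(uaddr & PAGE_MASK, 1, FOLL_WRITE, &page) != 1)
        return -EFAULT;

    *phys_out = page_to_phys(page) | (uaddr & ~PAGE_MASK);

    /* dma_map_page() performs the CPU cache writeback/invalidate
     * required before a non-coherent device reads the page; on ARM
     * this bottoms out in the arch cache-flush primitives. */
    dma_map_page(dev, page, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
    return 0;
}
```

The portable answer to the cache-flush question is the DMA mapping API (dma_map_page/dma_sync_single_for_device) rather than calling arch-specific flush routines directly.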
Cheers,
Tom