From: kennethadammiller@gmail.com (Kenneth Adam Miller)
To: kernelnewbies@lists.kernelnewbies.org
Subject: UIO Devices and user processes
Date: Tue, 6 Oct 2015 10:46:49 -0400
Message-ID: <CAK7rcp9Rxp8cF8DkH8zYXY54Ckb6aZMyQ7UA9Uh2kZQBx57htA@mail.gmail.com>
In-Reply-To: <CAK7rcp8iDQHqwaZfgrZuyy2ng7XeNu4-5CJKT-At+MUCO9gONw@mail.gmail.com>
Let me restate the overall original question more precisely:
I want a designated userland process to use only a specific, hard-coded
physical region of memory for its heap. A UIO driver is the means by which
I've been trying to achieve this.
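
A minimal sketch of such a prologue, assuming a hypothetical /dev/uio0 whose
first memory map is the dedicated region (with UIO, the mmap offset
N * page_size selects map N, and each map's size is exported under
/sys/class/uio/uioX/maps/mapN/size):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* The UIO core exports the map size as a hex string in sysfs. */
    size_t size = 0;
    FILE *f = fopen("/sys/class/uio/uio0/maps/map0/size", "r");
    if (!f || fscanf(f, "%zx", &size) != 1)
        return 1;
    fclose(f);

    int fd = open("/dev/uio0", O_RDWR);
    if (fd < 0)
        return 1;

    /* Offset 0 * page_size selects the device's first memory map. */
    void *region = mmap(NULL, size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0 * sysconf(_SC_PAGESIZE));
    if (region == MAP_FAILED)
        return 1;

    /* 'region' now covers the fixed physical area; a custom allocator
     * would have to carve the process's heap out of this block. */
    return 0;
}
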
On Tue, Oct 6, 2015 at 10:41 AM, Kenneth Adam Miller <kennethadammiller@gmail.com> wrote:
>
> On Tue, Oct 6, 2015 at 10:32 AM, Yann Droneaud <ydroneaud@opteya.com>
> wrote:
>
>> On Tuesday, 6 October 2015 at 10:13 -0400, Kenneth Adam Miller wrote:
>> >
>> >
>> > On Tue, Oct 6, 2015 at 9:58 AM, Yann Droneaud <ydroneaud@opteya.com>
>> > wrote:
>> > > On Tuesday, 6 October 2015 at 09:26 -0400, Kenneth Adam Miller wrote:
>> > >
>> > > > Does anybody know about assigning a process a region of physical
>> > > > memory to use for its malloc and free? I'd like the process to
>> > > > call into a UIO driver with an ioctl, and from then on get all of
>> > > > its memory from a specific region.
>> > > >
>> > >
>> > > You mean CONFIG_UIO_DMEM_GENIRQ (drivers/uio/uio_dmem_genirq.c)?
>> > >
>> > > See:
>> > > http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=0a0c3b5a24bd802b1ebbf99e0b01296647b8199b
>> > > http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=b533a83008c3fb4983c1213276790cacd39b518f
>> > > https://www.kernel.org/doc/htmldocs/uio-howto/using-uio_dmem_genirq.html
>> > >
>> > >
>> > Well, I don't think that does exactly what I would like, although I've
>> > got it on my machine and have been compiling it and learning from it.
>> > Here's my understanding of how mmap works:
>> >
>> > mmap() is called from userland and maps a region of memory of a
>> > certain size according to the parameters given to it; on success its
>> > return value is the address at which the requested block starts (I'm
>> > ignoring the failure case for brevity). The userland process now has
>> > only a pointer to a region of space, as if it had allocated something
>> > with new or malloc. Further calls to new or malloc don't mean that the
>> > pointers returned will reside within the newly mmap'd chunk; they are
>> > just separate regions also mapped into the process.
>> >
>>
>> You have to write your own custom allocator using the mmap()'ed memory
>> you retrieved from UIO.
>>
>
> I know about C++'s placement new, but I'd prefer not to have to write my
> userland code that way; I want the userland code to remain agnostic of
> where its memory comes from. I just want to put a small prologue in
> main() and have the rest of the program oblivious to the change.
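
A minimal sketch of one way to keep the application agnostic (not from the
thread; the device name, arena size, and alignment are assumptions): an
LD_PRELOAD shim that serves malloc/free/calloc/realloc from a single bump
arena mapped from a hypothetical /dev/uio0. free() never reclaims, and
allocations simply fail once the region is exhausted.

#include <fcntl.h>
#include <stddef.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define ARENA_SIZE (16UL << 20)          /* assumed size of the UIO map */

static unsigned char *arena;
static size_t arena_off;

static void arena_init(void)
{
    int fd = open("/dev/uio0", O_RDWR);  /* hypothetical device node */
    void *p = (fd < 0) ? MAP_FAILED
                       : mmap(NULL, ARENA_SIZE, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, 0);
    if (p == MAP_FAILED)
        _exit(1);                        /* no dedicated region: give up */
    arena = p;
}

void *malloc(size_t size)
{
    if (!arena)
        arena_init();                    /* lazy, single-threaded sketch */
    size = (size + 15) & ~(size_t)15;    /* keep 16-byte alignment */
    if (arena_off + size > ARENA_SIZE)
        return NULL;                     /* the hard memory limit */
    void *p = arena + arena_off;
    arena_off += size;
    return p;
}

void free(void *ptr)
{
    (void)ptr;                           /* bump allocator: never reclaims */
}

void *calloc(size_t nmemb, size_t size)
{
    void *p = malloc(nmemb * size);
    if (p)
        memset(p, 0, nmemb * size);
    return p;
}

void *realloc(void *ptr, size_t size)
{
    void *p = malloc(size);              /* old size untracked in this sketch */
    if (p && ptr)
        memcpy(p, ptr, size);
    return p;
}

Built with something like "gcc -shared -fPIC -o uioheap.so uioheap.c" and
run as "LD_PRELOAD=./uioheap.so ./app", the unmodified application would
then draw its heap from the mapped region.
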
>
>
>>
>> > What I would like is a region of memory such that, once it is mapped
>> > into a process, further calls to new/malloc return pointers that
>> > reside within this chunk. Calls to new/malloc and delete/free would
>> > only update the process's internal bookkeeping, which is fine.
>> >
>> > Is that wrong? Or does mmap already do the latter?
>>
>> It's likely wrong. glibc's malloc() uses brk() and mmap() to allocate
>> anonymous pages. Tricking this implementation into using another means
>> of retrieving memory is left to the reader.
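
One concrete (and historical) way to attempt that trick, sketched under the
assumption of a glibc old enough to still export the sbrk-style __morecore
hook (it has since been removed from glibc): point __morecore at the
UIO-mapped region and turn off malloc's internal mmap() path with
mallopt(M_MMAP_MAX, 0). uio_region is assumed to come from an earlier mmap
of the device, and the size is illustrative.

#include <malloc.h>
#include <stddef.h>

#define UIO_REGION_SIZE (16UL << 20)     /* assumed size of the mapped region */

static char *uio_region;                 /* set from the earlier UIO mmap */
static size_t brk_off;

/* sbrk-like contract: return the previous "break", then move it. */
static void *uio_morecore(ptrdiff_t increment)
{
    if ((increment > 0 && brk_off + (size_t)increment > UIO_REGION_SIZE) ||
        (increment < 0 && (size_t)(-increment) > brk_off))
        return NULL;                     /* glibc treats NULL as failure */
    void *prev = uio_region + brk_off;
    brk_off += increment;
    return prev;
}

/* Call early in main(), before the first allocation. */
void use_uio_heap(void *region)
{
    uio_region = region;
    mallopt(M_MMAP_MAX, 0);              /* keep large requests on __morecore */
    __morecore = uio_morecore;
}
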
>>
>>
>
>
>
>> Anyway, are you sure you want random calls to malloc() (from glibc
>> itself or any other linked-in libraries) to eat UIO-allocated buffers?
>> I don't think so: such physically contiguous, cache-coherent buffers
>> are premium resources; you don't want to hand them out gratuitously.
>>
>>
> Yes - we have a hard limit on memory for our processes; if they try to
> use more than what we mmap to them, they die, and we're more than fine
> with that. In fact, that's part of our use case and model: we plan to
> run just 5 or so processes on our behemoth machine with gigabytes of
> memory. So the buffers aren't that premium to us.
>
>
>> Regards.
>>
>> --
>> Yann Droneaud
>> OPTEYA
>>
>>
>>
>>
>