public inbox for linux-kernel@vger.kernel.org
 help / color / mirror / Atom feed
* Alloc and lock down large amounts of memory
@ 2002-08-16 14:38 Bhavana Nagendra
  2002-08-18 12:09 ` Gilad Ben-Yossef
  0 siblings, 1 reply; 16+ messages in thread
From: Bhavana Nagendra @ 2002-08-16 14:38 UTC (permalink / raw)
  To: linux-kernel

Hi,

I have a few questions with regard to alloc'ing and locking down memory.
An example would be very useful.  Please CC me on any responses.

1. Is there a mechanism to lock down large amounts of memory (>128M, up to
256M)?
    Can 256M be allocated using vmalloc, and if so, is it swappable?
2. Is it possible for a user process and the kernel to access the same shared
memory?
3. Can shared memory have visibility across processes, i.e. can process A
access memory that was allocated by process B?
4. When a process exits will it cause a close to occur on the device?

Thanks,
Bhavana


^ permalink raw reply	[flat|nested] 16+ messages in thread
* RE: Alloc and lock down large amounts of memory
@ 2002-08-20 14:51 Bhavana Nagendra
  2002-08-21  5:27 ` Gilad Ben-Yossef
  0 siblings, 1 reply; 16+ messages in thread
From: Bhavana Nagendra @ 2002-08-20 14:51 UTC (permalink / raw)
  To: Gilad Ben-Yossef, Bhavana Nagendra; +Cc: linux-kernel


> OK. *AFAIK* most drivers that use DMA request the memory allocation with
> GFP_DMA so that the allocators (whichever they choose) will give them
> DMAable memory from the DMA memory zone. Memory from that zone is fixed
> (not pageable). You can then map that memory into a process's address
> space according to demand (and Linux Device Drivers, second edition, has
> a good example of doing that).
>
OK, understood.  
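[Editor's note: a rough 2.4-era sketch of the driver-side allocation Gilad describes. This is not compilable outside a kernel module; the function name and order value are illustrative, not from the thread.]

```c
/* Sketch: allocate DMA-capable, non-pageable pages from ZONE_DMA.
 * 2.4-era kernel API; error handling trimmed for brevity. */
#include <linux/mm.h>
#include <linux/errno.h>

static unsigned long dma_buf;

static int grab_dma_pages(unsigned int order)
{
	/* GFP_DMA asks the allocator for pages from the DMA zone;
	 * pages handed out here are never paged out. */
	dma_buf = __get_free_pages(GFP_KERNEL | GFP_DMA, order);
	if (!dma_buf)
		return -ENOMEM;	/* large orders fail easily */
	return 0;
}
```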
 
> Now, if I understood you correctly, you want to allocate 256M of memory
> and perform DMA into it? If so, you *cannot* use memory allocated using
> vmalloc, because the memory it supplies is virtual; that is, it is not
> contiguous in physical memory but might be scattered all over the place,
> and I doubt that whatever device you're DMAing from can handle that.
 
I was thinking a scatter-gather page table would convert the virtual
addresses into physical addresses that DMA can use.

> Also, I think you don't want this memory to be swappable, because if it
> is swapped out and then back in, it might very well end up at a completely
> different address in physical memory from where it was, and again, I
> don't think the device that does DMA will be able to handle that - all
> the ones I know require physical addresses (well, actually bus addresses,
> but for the sake of argument let's ignore that for a second).
> 
> In short, I don't think you need what you think you want... :-)

Gee, how did you guess?  :-) I was mistaken about who does the big 128M
alloc.  The 128M will actually be malloc'ed in user space and filled up
with data in user space. This memory will then have to be mapped into
kernel space and DMA performed. Before DMA is performed, the memory
obviously needs to be locked down.  How does this play out with respect
to the DMA memory zone and the GFP_DMA flag, specifically pinning down
memory?

Thanks Gilad!
 
> > Does the VM_RESERVED flag lock down the memory so that it doesn't get
> > paged out during DMA?
> 
> AFAIK the VM_RESERVED flag will cause kswapd to ignore the page
> completely - no paging in or out at all.
> 
> > 
> > >
> > >
> > > >     Can 256M be allocated using vmalloc, if so is it swappable?
> > >
> > > It can be allocated via vmalloc, and AFAIK it is not swappable by
> > > default. This doesn't sound like a very good idea, though.
> > 
> > Is there a good way to allocate large sums of memory in Linux?  It
> > doesn't have to be vmalloc, but I don't think kmalloc or
> > get_free_pages will work for this purpose.  I looked into
> > get_free_pages, but the largest order is 9, which results in 512 pages.
> 
> > Does the memory allocated by vmalloc have visibility across processes?
> 
> See my previous answer regarding why you don't want vmalloc'ed memory
> at all, visibility aside.
> 
> > I didn't mean shared memory.   If several processes open a given
> > device, under normal conditions the data structure stays till the last
> > close, at which time a release is done.   This depends on the usage or
> > minor number count.  Can there be a case where the device exits before
> > the processes close?   In which case the processes will be left
> > hanging.    How is the close handled if the driver is killed?
> 
> Very simple - you can't unload the device until the ref count says all
> the users (processes in this case) have closed it.
> 
> Gilad.
> 
> -- 
> Gilad Ben-Yossef <gilad@benyossef.com>
> http://benyossef.com
> 
> "Money talks, bullshit walks and GNU awks."
>   -- Shachar "Sun" Shemesh, debt collector for the GNU/Yakuza
> 
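[Editor's note: on the VM_RESERVED point quoted above - drivers of that era typically set the flag in their mmap method so kswapd leaves the mapping alone. A hypothetical 2.4-era sketch; the driver name and omitted remapping step are assumptions, not from the thread.]

```c
/* Sketch: mark a driver mapping so kswapd skips it entirely (2.4-era). */
#include <linux/mm.h>
#include <linux/fs.h>

static int mydrv_mmap(struct file *filp, struct vm_area_struct *vma)
{
	vma->vm_flags |= VM_RESERVED;	/* no paging in or out of this region */
	/* ... remap the driver's buffer into vma here ... */
	return 0;
}
```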

^ permalink raw reply	[flat|nested] 16+ messages in thread
* RE: Alloc and lock down large amounts of memory
@ 2002-08-20 20:08 Bhavana Nagendra
  2002-08-20 20:47 ` Richard B. Johnson
                   ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: Bhavana Nagendra @ 2002-08-20 20:08 UTC (permalink / raw)
  To: Mike Galbraith, Bhavana Nagendra, Gilad Ben-Yossef; +Cc: linux-kernel

> 
> Curiosity:  why do you want to do device DMA buffer allocation from
> userland?

I need 256M of memory for a graphics operation.  It's a requirement; I
can't change it. There will be other reasonably sized allocs in kernel
space; this is a special case that will be done from userland. As
discussed earlier in this thread, there's no good way of alloc()ing
and pinning that much in DMA memory space, is there?

Gilad, I looked at mm/memory.c, and map_user_kiobuf() lets me map user
memory into kernel memory and pin it down.  A scatter-gather mapping
(say, pci_map_sg()) will create a seemingly contiguous buffer for DMA
purposes.  Does that sound right to you?
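[Editor's note: the plan in this message might be sketched as below, using the 2.4-era kiobuf API. Heavily abridged; the function name is hypothetical, error/cleanup paths are omitted, and the scatterlist field layout varied across 2.4 revisions, so check against mm/memory.c. Also note pci_map_sg() only makes the buffer look contiguous when an IOMMU is present; otherwise the device must walk the sg entries.]

```c
/* Sketch: pin a user buffer with a kiobuf, then hand its pages to
 * pci_map_sg() for DMA.  2.4-era API, error paths omitted. */
#include <linux/iobuf.h>
#include <linux/pci.h>

static int pin_and_map(struct pci_dev *pdev, unsigned long uaddr,
		       size_t len, struct scatterlist *sg)
{
	struct kiobuf *iobuf;
	int i, err;

	err = alloc_kiovec(1, &iobuf);
	if (err)
		return err;

	/* Faults in and pins the user pages behind uaddr. */
	err = map_user_kiobuf(READ, iobuf, uaddr, len);
	if (err)
		return err;

	/* Build one sg entry per pinned page. */
	for (i = 0; i < iobuf->nr_pages; i++) {
		sg[i].page = iobuf->maplist[i];
		sg[i].offset = 0;
		sg[i].length = PAGE_SIZE;
	}

	/* Returns the number of DMA segments actually used. */
	return pci_map_sg(pdev, sg, iobuf->nr_pages, PCI_DMA_FROMDEVICE);
}
```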

Bhavana

^ permalink raw reply	[flat|nested] 16+ messages in thread
[parent not found: <23B25974812ED411B48200D0B774071701248C6A@exchusa03.intense3d.com>]

end of thread, other threads:[~2002-08-21 14:35 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2002-08-16 14:38 Alloc and lock down large amounts of memory Bhavana Nagendra
2002-08-18 12:09 ` Gilad Ben-Yossef
2002-08-18 12:18   ` Alan Cox
2002-08-18 22:45     ` Gilad Ben-Yossef
2002-08-18 12:19   ` Gilad Ben-Yossef
2002-08-19 12:06   ` Bhavana Nagendra
2002-08-20  5:38     ` Gilad Ben-Yossef
  -- strict thread matches above, loose matches on Subject: below --
2002-08-20 14:51 Bhavana Nagendra
2002-08-21  5:27 ` Gilad Ben-Yossef
2002-08-20 20:08 Bhavana Nagendra
2002-08-20 20:47 ` Richard B. Johnson
     [not found] ` <Pine.LNX.3.95.1020820163301.27264A-100000@chaos.analogic.com>
2002-08-21  4:43   ` Mike Galbraith
2002-08-21  5:29 ` Gilad Ben-Yossef
     [not found] <23B25974812ED411B48200D0B774071701248C6A@exchusa03.intense3d.com>
2002-08-21  4:31 ` Mike Galbraith
2002-08-21 13:29   ` Roland Kuhn
2002-08-21 14:35     ` Mike Galbraith

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox