linux-mm.kvack.org archive mirror
* steering allocations to particular parts of memory
@ 2012-09-07 18:27 Larry Bassel
  2012-09-11  9:34 ` Mel Gorman
  0 siblings, 1 reply; 7+ messages in thread
From: Larry Bassel @ 2012-09-07 18:27 UTC (permalink / raw)
  To: mgorman; +Cc: dan.magenheimer, linux-mm

I am looking for a way to steer allocations (these may be
by either userspace or the kernel) to or away from particular
ranges of memory. The reason for this is that some parts of
memory are different from others (i.e. some memory may be
faster/slower). For instance there may be 500M of "fast"
memory and 1500M of "slower" memory on a 2G platform.

At the memory mini-summit last week, it was mentioned
that the Super-H architecture was using NUMA for this
purpose, which was considered to be a very bad thing
to do -- we have ported NUMA to ARM here (as an experiment)
and agree that NUMA doesn't work well for solving this problem.

After the NUMA discussion, I spoke briefly to you and asked
you what a good approach would be. You thought that something
based on transcendent memory (which I am somewhat familiar
with, having built something based upon it which can be used either
as contiguous memory or as clean cache) might work, but
you didn't supply any details.

At the time, you asked me to email you about this and copy
Dan and the linux-mm mailing list, where hopefully you or Dan
might be able to explain how this would work.

Thanks.

Larry Bassel

-- 
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: steering allocations to particular parts of memory
  2012-09-07 18:27 steering allocations to particular parts of memory Larry Bassel
@ 2012-09-11  9:34 ` Mel Gorman
  2012-09-12 21:28   ` Larry Bassel
  0 siblings, 1 reply; 7+ messages in thread
From: Mel Gorman @ 2012-09-11  9:34 UTC (permalink / raw)
  To: Larry Bassel; +Cc: dan.magenheimer, linux-mm

On Fri, Sep 07, 2012 at 11:27:15AM -0700, Larry Bassel wrote:
> I am looking for a way to steer allocations (these may be
> by either userspace or the kernel) to or away from particular
> ranges of memory. The reason for this is that some parts of
> memory are different from others (i.e. some memory may be
> faster/slower). For instance there may be 500M of "fast"
> memory and 1500M of "slower" memory on a 2G platform.
> 

Hi Larry,

> At the memory mini-summit last week, it was mentioned
> that the Super-H architecture was using NUMA for this
> purpose, which was considered to be a very bad thing
> to do -- we have ported NUMA to ARM here (as an experiment)
> and agree that NUMA doesn't work well for solving this problem.
> 

Yes, I remember the discussion and regret it had to be cut short.

NUMA is almost always considered to be the first solution to this type
of problem but as you say it's considered to be a "very bad thing to do".
It's convenient in one sense because you get data structures that track all
the pages for you and create the management structures. It's bad because
page allocation uses these slow nodes when the fast nodes are full which
is a very poor placement policy. Similarly pages from the slow node are
reclaimed based on memory pressure. It comes down to luck whether the
optimal pages are in the slow node or not. You can try wedging your own
placement policy on the side but it won't be pretty.

> After the NUMA discussion, I spoke briefly to you and asked
> you what a good approach would be. You thought that something
> based on transcendent memory (which I am somewhat familiar
> with, having built something based upon it which can be used either
> as contiguous memory or as clean cache) might work, but
> you didn't supply any details.
> 

I was running out the door to catch a bus unfortunately. It was a somewhat
off-the-cuff remark that tmem might help you, and what I was really
interested in was what tmem used as a placement policy. All I was really
sure of was that a plain NUMA node is a bad idea. Unfortunately I have not
sat down to properly design a solution for this that would satisfy all
interested parties, so take all this with a big grain of salt.

The reason why tmem (http://lwn.net/Articles/340080/) came to mind is that
it addresses a similar class of problem to yours. Very broadly speaking
it was described as memory of an "unknown and dynamically variable size,
is addressable only indirectly by the kernel, can be configured either
as persistent or as "ephemeral" (meaning it will be around for awhile,
but might disappear without warning), and is still fast enough to be
synchronously accessible".

This is not an exact fit obviously. The slow memory node (slowmem) is
fixed size and is directly accessible. The core idea might still be
useful to you though. I'm actually not familiar with tmem but it would
be worth investigating whether you can use the same API to decide whether
pages should migrate to/from slowmem and when to simply discard pages
from slowmem.

A possible variation would be to have cleancache and similar mechanisms
use slowmem as a backend.
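
To make that concrete, a very rough sketch of such a backend follows
(the cleancache_ops layout changed between kernel versions, and every
slowmem_* helper below is made up):

#include <linux/cleancache.h>

/* Keep clean page-cache pages in slowmem instead of discarding them. */
static int slowmem_cc_init_fs(size_t pagesize)
{
	return slowmem_create_pool(pagesize);	/* hypothetical; returns a pool id */
}

static void slowmem_cc_put_page(int pool, struct cleancache_filekey key,
				pgoff_t index, struct page *page)
{
	/* "ephemeral" store: slowmem is free to drop this page later */
	slowmem_store_page(pool, key.u.ino, index, page);
}

static int slowmem_cc_get_page(int pool, struct cleancache_filekey key,
			       pgoff_t index, struct page *page)
{
	/* copy back out of slowmem; 0 on success, -1 if not present */
	return slowmem_load_page(pool, key.u.ino, index, page);
}

static void slowmem_cc_invalidate_page(int pool,
				       struct cleancache_filekey key,
				       pgoff_t index)
{
	slowmem_drop_page(pool, key.u.ino, index);
}

static struct cleancache_ops slowmem_cc_ops = {
	.init_fs	 = slowmem_cc_init_fs,
	.put_page	 = slowmem_cc_put_page,
	.get_page	 = slowmem_cc_get_page,
	.invalidate_page = slowmem_cc_invalidate_page,
	/* invalidate_inode/invalidate_fs omitted for brevity */
};
/* registered at init time via cleancache_register_ops(&slowmem_cc_ops) */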

A third variation is for people considering creating RAM-like devices
that are backed by some sort of fast storage. They would be interested
in an almost identical API to the one you need.

Note that none of this actually stops you using a pgdat structure to
represent slowmem and to have the struct pages created for you. This could
be core helper code that allocates a pgdat structure and initialises all
the pages but does not create a kswapd thread, link it to zonelists etc.
Ideally there would be a placement policy API (maybe similar to tmem's)
that can be shared with slowmem, cleancache, whatever you are
implementing, and potentially tmem if it gets revived.

In my simple mind the final solution to cover most or all of these use
cases would look something like this ASCII scribble.


             movement trigger
        KSM? kswapd hook? faults?
                    |
               placement policy
               notification API
                    |
          |------------------|
          |                  |
        placement         placement
        policy            policy                       faulting, IO
          |                  |                              |
          |------------------|                              |
                   |                                        |
       API to move pages RAM<->backing,         get_user_pages like API
           discard pages                        page for userspace access
                   |                                        |
                   |----------------------------------------|
                   |
      Interface to make it look like RAM
      Create struct pages, partial pgdat,
       no kswapd, not linked to zonelist
                   |
   ------------------------------
   |               |            |
slowmem      block device     tmem
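
In (entirely invented) C, the middle layers of that scribble might
reduce to something like this; none of these types or functions exist:

/* backend: slowmem, a block device, tmem, ... */
struct slow_backend_ops {
	int (*move_in)(struct page *page);	/* RAM -> backing */
	int (*move_out)(struct page *page);	/* backing -> RAM */
	void (*discard)(struct page *page);	/* drop from backing */
};

/* placement policy: decides which pages move, and when */
struct placement_policy {
	/* invoked from the movement triggers (kswapd hook, faults, ...) */
	void (*page_accessed)(struct page *page);
	void (*memory_pressure)(int priority);
	const struct slow_backend_ops *backend;
};

/* helper that builds the partial pgdat: struct pages created,
 * no kswapd, not linked to the zonelists */
int slow_region_register(unsigned long start_pfn, unsigned long nr_pages,
			 struct placement_policy *policy);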

Hope this clarifies my position a little but people like Dan who have
focused on this problem in the past may have a much better idea.

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: steering allocations to particular parts of memory
  2012-09-11  9:34 ` Mel Gorman
@ 2012-09-12 21:28   ` Larry Bassel
  2012-09-13  8:34     ` Mel Gorman
  0 siblings, 1 reply; 7+ messages in thread
From: Larry Bassel @ 2012-09-12 21:28 UTC (permalink / raw)
  To: Mel Gorman; +Cc: Larry Bassel, dan.magenheimer, linux-mm

On 11 Sep 12 10:34, Mel Gorman wrote:
> On Fri, Sep 07, 2012 at 11:27:15AM -0700, Larry Bassel wrote:
> > I am looking for a way to steer allocations (these may be
> > by either userspace or the kernel) to or away from particular
> > ranges of memory. The reason for this is that some parts of
> > memory are different from others (i.e. some memory may be
> > faster/slower). For instance there may be 500M of "fast"
> > memory and 1500M of "slower" memory on a 2G platform.
> > 
> 
> Hi Larry,
> 
> > At the memory mini-summit last week, it was mentioned
> > that the Super-H architecture was using NUMA for this
> > purpose, which was considered to be a very bad thing
> > to do -- we have ported NUMA to ARM here (as an experiment)
> > and agree that NUMA doesn't work well for solving this problem.
> > 
> 
> Yes, I remember the discussion and regret it had to be cut short.
> 
> NUMA is almost always considered to be the first solution to this type
> of problem but as you say it's considered to be a "very bad thing to do".
> It's convenient in one sense because you get data structures that track all
> the pages for you and create the management structures. It's bad because
> page allocation uses these slow nodes when the fast nodes are full which
> is a very poor placement policy. Similarly pages from the slow node are
> reclaimed based on memory pressure. It comes down to luck whether the
> optimal pages are in the slow node or not. You can try wedging your own
> placement policy on the side but it won't be pretty.

It appears that I was too vague about this. Both userspace and
the kernel (mostly drivers) need to be able to specify, either explicitly
or implicitly (using defaults if no explicit memory type is mentioned),
what sort of memory is desired and what to do if this type is not
available (either due to an actual lack of such memory or because
a low watermark would be violated, etc.): fall back to
another type of memory, or get an out-of-memory error.
(More sophisticated alternatives would be to trigger
some sort of migration or even eviction in these cases.)
This seems similar to a simplified version of memory policies,
unless I'm missing something.
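
Purely as a hypothetical illustration (none of this exists), the sort
of interface I am imagining is along these lines:

/* Hypothetical only -- an allocator that takes a memory type plus a
 * policy for what to do when that type is unavailable. */
enum memtype {
	MEMTYPE_DEFAULT,
	MEMTYPE_FAST,
	MEMTYPE_SLOW,
};

enum memtype_fallback {
	MEMTYPE_FALLBACK_OTHER,   /* fall back to another memory type */
	MEMTYPE_FALLBACK_FAIL,    /* return failure/OOM instead */
	MEMTYPE_FALLBACK_MIGRATE, /* try migration/eviction to make room */
};

struct page *alloc_pages_memtype(gfp_t gfp_mask, unsigned int order,
				 enum memtype type,
				 enum memtype_fallback fallback);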

Admittedly, most drivers and user processes will not explicitly ask
for a certain type of memory.

We also would like to be able to create lowmem or highmem
from any type of memory.

The above makes me wonder if something that keeps nodes and zones
and some sort of simple memory policy, but throws out the rest of NUMA
(such as bindings of memory to CPUs, cpusets, etc.), might be useful,
since node-aware allocators already exist
(though after the memory mini-summit I have doubts about this as well).

[snip]

> Hope this clarifies my position a little but people like Dan who have
> focused on this problem in the past may have a much better idea.

Thanks.

> 
> -- 
> Mel Gorman
> SUSE Labs

Larry

-- 
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: steering allocations to particular parts of memory
  2012-09-12 21:28   ` Larry Bassel
@ 2012-09-13  8:34     ` Mel Gorman
  2012-09-17 19:40       ` Dan Magenheimer
  0 siblings, 1 reply; 7+ messages in thread
From: Mel Gorman @ 2012-09-13  8:34 UTC (permalink / raw)
  To: Larry Bassel; +Cc: dan.magenheimer, linux-mm

On Wed, Sep 12, 2012 at 02:28:29PM -0700, Larry Bassel wrote:
> On 11 Sep 12 10:34, Mel Gorman wrote:
> > On Fri, Sep 07, 2012 at 11:27:15AM -0700, Larry Bassel wrote:
> > > I am looking for a way to steer allocations (these may be
> > > by either userspace or the kernel) to or away from particular
> > > ranges of memory. The reason for this is that some parts of
> > > memory are different from others (i.e. some memory may be
> > > faster/slower). For instance there may be 500M of "fast"
> > > memory and 1500M of "slower" memory on a 2G platform.
> > > 
> > 
> > Hi Larry,
> > 
> > > At the memory mini-summit last week, it was mentioned
> > > that the Super-H architecture was using NUMA for this
> > > purpose, which was considered to be a very bad thing
> > > to do -- we have ported NUMA to ARM here (as an experiment)
> > > and agree that NUMA doesn't work well for solving this problem.
> > > 
> > 
> > Yes, I remember the discussion and regret it had to be cut short.
> > 
> > NUMA is almost always considered to be the first solution to this type
> > of problem but as you say it's considered to be a "very bad thing to do".
> > It's convenient in one sense because you get data structures that track all
> > the pages for you and create the management structures. It's bad because
> > page allocation uses these slow nodes when the fast nodes are full which
> > is a very poor placement policy. Similarly pages from the slow node are
> > reclaimed based on memory pressure. It comes down to luck whether the
> > optimal pages are in the slow node or not. You can try wedging your own
> > placement policy on the side but it won't be pretty.
> 
> It appears that I was too vague about this. Both userspace and
> the kernel (mostly drivers) need to be able to specify, either explicitly
> or implicitly (using defaults if no explicit memory type is mentioned),

This pushes responsibility for placement policy out to the edge. While it
will work to some extent, it'll depend heavily on the applications getting
the placement policy right. If a mistake is made then potentially
every one of these applications and drivers will need to be fixed, although
I would expect that you'd create a new allocator API and hopefully only
have to fix it there if the policies were suitably fine-grained. To me
this type of solution is less than ideal as the drivers and applications
may not really know if the memory is "hot" or not.

> what sort of memory is desired and what to do if this type is not
> available (either due to an actual lack of such memory or because
> a low watermark would be violated, etc.): fall back to
> another type of memory, or get an out-of-memory error.
> (More sophisticated alternatives would be to trigger
> some sort of migration or even eviction in these cases.)
> This seems similar to a simplified version of memory policies,
> unless I'm missing something.
> 

I do not think it's a simplified version of memory policies but it is
certainly similar to memory policies.

> Admittedly, most drivers and user processes will not explicitly ask
> for a certain type of memory.
> 

This is what I expect. It means that your solution might work for Super-H
but it will not work for any of the other use cases where applications
will be expected to work without modification. I guess it would be fine
if one was building an appliance where they knew exactly what was going
to be running and how it behaved, but it's not exactly a general solution.

> We also would like to be able to create lowmem or highmem
> from any type of memory.
> 

You may be able to hack something into the architecture layer that abuses
the memory model and remaps some pages into lowmem.

> The above makes me wonder if something that keeps nodes and zones
> and some sort of simple memory policy, but throws out the rest of NUMA
> (such as bindings of memory to CPUs, cpusets, etc.), might be useful,
> since node-aware allocators already exist
> (though after the memory mini-summit I have doubts about this as well).
> 

You can just ignore the cpusets, CPU bindings and all the rest of it
already. It is already possible to use memory policies to only allocate
from a specific node (although it is not currently possible, from
userspace at least, to restrict allocations to a zone).
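
For example, something like the following is possible today with plain
mbind(2) (assuming, purely for illustration, that the slow memory shows
up as node 1 and you link with -lnuma):

#include <numaif.h>
#include <sys/mman.h>

/* Allocate anonymous memory that may only come from node 1. */
static void *alloc_on_node1(size_t len)
{
	unsigned long nodemask = 1UL << 1;	/* node 1 only */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return NULL;

	/* MPOL_BIND: allocations fail rather than spill to other nodes */
	if (mbind(p, len, MPOL_BIND, &nodemask,
		  sizeof(nodemask) * 8, 0) != 0) {
		munmap(p, len);
		return NULL;
	}
	return p;
}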

I just fear that solutions that push responsibility out to drivers and
applications will end up being very hacky, rarely used, and unsuitable
for the other use cases where application modification is not an option.

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 7+ messages in thread

* RE: steering allocations to particular parts of memory
  2012-09-13  8:34     ` Mel Gorman
@ 2012-09-17 19:40       ` Dan Magenheimer
  2012-09-18  8:32         ` Mel Gorman
  2012-09-21 17:44         ` Larry Bassel
  0 siblings, 2 replies; 7+ messages in thread
From: Dan Magenheimer @ 2012-09-17 19:40 UTC (permalink / raw)
  To: Mel Gorman, Larry Bassel; +Cc: linux-mm, Konrad Wilk

Hi Larry --

Sorry I missed seeing you and missed this discussion at Linuxcon!

> based on transcendent memory (which I am somewhat familiar
> with, having built something based upon it which can be used either
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> as contiguous memory or as clean cache) might work, but

That reminds me... I never saw this code posted on linux-mm
or lkml or anywhere else.  Since this is another interesting
use of tmem/cleancache/frontswap, it might be good to get
your work into the kernel or at least into some other public
tree.  Is your code post-able? (re original thread:
http://www.spinics.net/lists/linux-mm/msg24785.html )

> At the memory mini-summit last week, it was mentioned
> that the Super-H architecture was using NUMA for this
> purpose, which was considered to be a very bad thing
> to do -- we have ported NUMA to ARM here (as an experiment)
> and agree that NUMA doesn't work well for solving this problem.

If there are any notes/slides/threads with more detail
on this discussion (why NUMA doesn't work well), I'd be
interested in a pointer...

> I am looking for a way to steer allocations (these may be
> by either userspace or the kernel) to or away from particular
> ranges of memory. The reason for this is that some parts of
> memory are different from others (i.e. some memory may be
> faster/slower). For instance there may be 500M of "fast"
> memory and 1500M of "slower" memory on a 2G platform.

In the kernel's current uses of tmem (frontswap and cleancache),
there's no way to proactively steer the allocation.  The
kernel effectively subdivides pages into two priority
classes and lower priority pages end up in cleancache
rather than being reclaimed, and frontswap rather than
on a swap disk.

A brand new in-kernel interface to tmem code to explicitly
allocate "slow memory" is certainly possible, though I
haven't given it much thought.   Depending on how "slow"
is slow, it may make sense for the memory to only be used
for tmem pages rather than for user/kernel-directly-accessible
RAM.
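
To sketch what I mean (purely hypothetical, nothing like this exists
in the kernel today):

/* Explicit put/get of pages into a tmem-managed slow-memory pool,
 * modeled loosely on the tmem ops. All names are invented. */
struct tmem_slowmem_handle {
	u32 pool_id;
	u64 object;	/* e.g. an inode number or allocation id */
	u32 index;	/* page index within the object */
};

/* copy a page into slow memory; may fail if the pool is full */
int tmem_slowmem_put(struct tmem_slowmem_handle *h, struct page *page);

/* copy a page back out; -ENOENT if the pool dropped it */
int tmem_slowmem_get(struct tmem_slowmem_handle *h, struct page *page);

/* give up the slow copy */
void tmem_slowmem_flush(struct tmem_slowmem_handle *h);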

> This pushes responsibility for placement policy out to the edge. While it
> will work to some extent, it'll depend heavily on the applications getting
> the placement policy right. If a mistake is made then potentially
> every one of these applications and drivers will need to be fixed, although
> I would expect that you'd create a new allocator API and hopefully only
> have to fix it there if the policies were suitably fine-grained. To me
> this type of solution is less than ideal as the drivers and applications
> may not really know if the memory is "hot" or not.

I'd have to agree with Mel on this.  There are certainly a number
of enterprise apps that subvert kernel policies and entirely
manage their own memory.  I'm not sure there would be much value
to kernel participation (or using tmem) if this is what you ultimately
need to do.

> I do not think it's a simplified version of memory policies but it is
> certainly similar to memory policies.
> 
> > Admittedly, most drivers and user processes will not explicitly ask
> > for a certain type of memory.
> 
> This is what I expect. It means that your solution might work for Super-H
> but it will not work for any of the other use cases where applications
> will be expected to work without modification. I guess it would be fine
> if one was building an appliance where they knew exactly what was going
> to be running and how it behaved, but it's not exactly a general solution.
> 
> > We also would like to be able to create lowmem or highmem
> > from any type of memory.
> 
> You may be able to hack something into the architecture layer that abuses
> the memory model and remaps some pages into lowmem.
> 
> > The above makes me wonder if something that keeps nodes and zones
> > and some sort of simple memory policy, but throws out the rest of NUMA
> > (such as bindings of memory to CPUs, cpusets, etc.), might be useful,
> > since node-aware allocators already exist
> > (though after the memory mini-summit I have doubts about this as well).
> 
> You can just ignore the cpusets, CPU bindings and all the rest of it
> already. It is already possible to use memory policies to only allocate
> from a specific node (although it is not currently possible, from
> userspace at least, to restrict allocations to a zone).
> 
> I just fear that solutions that push responsibility out to drivers and
> applications will end up being very hacky, rarely used, and unsuitable
> for the other use cases where application modification is not an option.

I agree with Mel on all of these comments.

Dan

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: steering allocations to particular parts of memory
  2012-09-17 19:40       ` Dan Magenheimer
@ 2012-09-18  8:32         ` Mel Gorman
  2012-09-21 17:44         ` Larry Bassel
  1 sibling, 0 replies; 7+ messages in thread
From: Mel Gorman @ 2012-09-18  8:32 UTC (permalink / raw)
  To: Dan Magenheimer; +Cc: Larry Bassel, linux-mm, Konrad Wilk

On Mon, Sep 17, 2012 at 12:40:58PM -0700, Dan Magenheimer wrote:
> Hi Larry --
> 
> Sorry I missed seeing you and missed this discussion at Linuxcon!
> 
> > <SNIP>
> > At the memory mini-summit last week, it was mentioned
> > that the Super-H architecture was using NUMA for this
> > purpose, which was considered to be a very bad thing
> > to do -- we have ported NUMA to ARM here (as an experiment)
> > and agree that NUMA doesn't work well for solving this problem.
> 
> If there are any notes/slides/threads with more detail
> on this discussion (why NUMA doesn't work well), I'd be
> interested in a pointer...
> 

It was a tangent to an unrelated discussion so there were no slides.
LWN.net has an excellent summary of what happened at the meeting in general
but this particular topic was not discussed in detail. The short summary
of why NUMA was bad was in my mail when I said this "It's bad because
page allocation uses these slow nodes when the fast nodes are full which
is a very poor placement policy. Similarly pages from the slow node are
reclaimed based on memory pressure. It comes down to luck whether the
optimal pages are in the slow node or not."

> > I am looking for a way to steer allocations (these may be
> > by either userspace or the kernel) to or away from particular
> > ranges of memory. The reason for this is that some parts of
> > memory are different from others (i.e. some memory may be
> > faster/slower). For instance there may be 500M of "fast"
> > memory and 1500M of "slower" memory on a 2G platform.
> 
> In the kernel's current uses of tmem (frontswap and cleancache),
> there's no way to proactively steer the allocation.  The
> kernel effectively subdivides pages into two priority
> classes and lower priority pages end up in cleancache
> rather than being reclaimed, and frontswap rather than
> on a swap disk.
> 

In the case of frontswap, a reclaim-driven placement policy makes a lot of
sense. To some extent, it does for cleancache as well. It is not necessarily
the best placement policy for slowmem if the data being placed there
simply has slow access requirements but is otherwise quite large. I'm
not exactly sure, but I expect the policy has worse control over when a
page exits the cache, either back to main memory or to be discarded.

Still, it's a far better policy than plain NUMA placement and would be a
sensible starting point. If an alternative placement policy was proposed
the changelog should include why a reclaim-driven policy was not
preferred.

> A brand new in-kernel interface to tmem code to explicitly
> allocate "slow memory" is certainly possible, though I
> haven't given it much thought.   Depending on how "slow"
> is slow, it may make sense for the memory to only be used
> for tmem pages rather than for user/kernel-directly-accessible
> RAM.
> 

There is a risk as well that each new placement policy would need a
different API so tmem is not necessarily the best interface. This is why
I tried to describe a different layering. Of course, I don't have any
code or a proper design.

> > This pushes responsibility for placement policy out to the edge. While it
> > will work to some extent, it'll depend heavily on the applications getting
> > the placement policy right. If a mistake is made then potentially
> > every one of these applications and drivers will need to be fixed, although
> > I would expect that you'd create a new allocator API and hopefully only
> > have to fix it there if the policies were suitably fine-grained. To me
> > this type of solution is less than ideal as the drivers and applications
> > may not really know if the memory is "hot" or not.
> 
> I'd have to agree with Mel on this.  There are certainly a number
> of enterprise apps that subvert kernel policies and entirely
> manage their own memory. 

Indeed. In the diagram I posted there was a part that created an "Interface
to make it look like RAM". For such an app, one might decide to just expose
that as a character device and let the application mmap it.
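
As a rough illustration of that (assuming the slow memory occupies a
fixed physical range; the SLOWMEM_* constants and the device name are
made up), the driver side could be little more than:

#include <linux/fs.h>
#include <linux/miscdevice.h>
#include <linux/mm.h>
#include <linux/module.h>

#define SLOWMEM_BASE_PFN 0x80000UL	/* made-up platform constant */
#define SLOWMEM_SIZE	(1500UL << 20)	/* 1500M of slow memory */

static int slowmem_mmap(struct file *file, struct vm_area_struct *vma)
{
	unsigned long pages = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;

	if (vma->vm_pgoff + pages > (SLOWMEM_SIZE >> PAGE_SHIFT))
		return -EINVAL;

	/* map the slow physical range straight into the process */
	return remap_pfn_range(vma, vma->vm_start,
			       SLOWMEM_BASE_PFN + vma->vm_pgoff,
			       pages << PAGE_SHIFT, vma->vm_page_prot);
}

static const struct file_operations slowmem_fops = {
	.owner = THIS_MODULE,
	.mmap  = slowmem_mmap,
};

static struct miscdevice slowmem_dev = {
	.minor = MISC_DYNAMIC_MINOR,
	.name  = "slowmem",
	.fops  = &slowmem_fops,
};

static int __init slowmem_init(void)
{
	return misc_register(&slowmem_dev);
}
module_init(slowmem_init);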

> I'm not sure there would be much value
> to kernel participation (or using tmem) if this is what you ultimately
> need to do.
> 

Which might indicate that tmem is not the interface they are looking
for. However, if someone was to implement a general solution I expect
they would borrow heavily from tmem and at the very least, tmem should
be able to reuse any core code.

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: steering allocations to particular parts of memory
  2012-09-17 19:40       ` Dan Magenheimer
  2012-09-18  8:32         ` Mel Gorman
@ 2012-09-21 17:44         ` Larry Bassel
  1 sibling, 0 replies; 7+ messages in thread
From: Larry Bassel @ 2012-09-21 17:44 UTC (permalink / raw)
  To: Dan Magenheimer; +Cc: Mel Gorman, Larry Bassel, linux-mm, Konrad Wilk

On 17 Sep 12 12:40, Dan Magenheimer wrote:
> Hi Larry --
> 
> Sorry I missed seeing you and missed this discussion at Linuxcon!
> 
> > based on transcendent memory (which I am somewhat familiar
> > with, having built something based upon it which can be used either
>         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > as contiguous memory or as clean cache) might work, but
> 
> That reminds me... I never saw this code posted on linux-mm
> or lkml or anywhere else.  Since this is another interesting
> use of tmem/cleancache/frontswap, it might be good to get
> your work into the kernel or at least into some other public
> tree.  Is your code post-able? (re original thread:
> http://www.spinics.net/lists/linux-mm/msg24785.html )

This was done on a 3.0 base (the tmem/zcache was from 3.1) a while back.

Management decided to put this project on the back burner for
several reasons:

1) although some benchmarks improved, very large file system
   writes suffered performance degradation (measured with lmdd);
2) it appeared that supporting FAT (or other filesystems where
   blocksize != pagesize) would be difficult;
3) in many use cases we couldn't fill the carved-out FMEM regions
   with enough cleancache (so memory was still being "wasted").

In addition, there was some functionality we hadn't yet implemented
(mainly supporting non-compressed FMEM), and the code would need to
be ported forward to our 3.4 source base.

Therefore I don't believe I have any relevant code to post
(unless the project is revived and ported to a current source base).

Larry

-- 
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2012-09-21 17:44 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-09-07 18:27 steering allocations to particular parts of memory Larry Bassel
2012-09-11  9:34 ` Mel Gorman
2012-09-12 21:28   ` Larry Bassel
2012-09-13  8:34     ` Mel Gorman
2012-09-17 19:40       ` Dan Magenheimer
2012-09-18  8:32         ` Mel Gorman
2012-09-21 17:44         ` Larry Bassel
