* Page host virtual assist patches.
@ 2006-04-24 12:34 Martin Schwidefsky
2006-04-25 1:01 ` Andrew Morton
0 siblings, 1 reply; 23+ messages in thread
From: Martin Schwidefsky @ 2006-04-24 12:34 UTC (permalink / raw)
To: linux-mm, akpm, frankeh, rhim
Third version of the page host virtual assist patches. The code has
been reduced in size, and (hopefully) the last races have been fixed.
The basic idea of host virtual assist (hva) is to give a host system
which virtualizes the memory of its guest systems on a per page basis
usage information for the guest pages. The host can then use this
information to optimize the management of guest pages, in particular
the paging. This optimizations can be used for unused (free) guest
pages, for clean page cache pages, and for clean swap cache pages.
The content of free pages can be replace with zeroes and the content
of clean page cache / swap cache pages can be reloaded by the guest
from the backing store.
There are 8 patches that implement hva:
1) Hva state changes for free pages.
2) Hva state changes for page cache pages.
3) Hva state changes for swap cache pages.
4) Keep mlocked pages in stable state.
5) Add support for writable page table entries.
6) Optimization for minor faults.
7) Discarded page list.
8) s390 architecture support for hva.
>From my point of view the patches have reached a state where they
can be considered for wider propagation. Unfortunatly I did not
get any feedback for the prior two versions of the patches, neither
negative nor positive.
I'm currently running -rc1-mm3 with the patches enabled on my s390
test systems and on my thinkpad (without CONFIG_PAGE_HVA). It works
as advertised on s390 and for i386 I could not find any negative
effects. The only noticable changes for i386 is that a bit of code
has moved out of try_to_unmap_one to the callers of the function
to make it usable for hva as well (see patch #02 page_hva_unmap_all
for details). This increases the size of the kernel image by a few
bytes.
Any chance to get the patches included into the -mm tree?
--
blue skies,
Martin.
Martin Schwidefsky
Linux for zSeries Development & Services
IBM Deutschland Entwicklung GmbH
"Reality continues to ruin my life." - Calvin.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: Page host virtual assist patches.
2006-04-24 12:34 Page host virtual assist patches Martin Schwidefsky
@ 2006-04-25 1:01 ` Andrew Morton
2006-04-25 7:19 ` Nick Piggin
2006-04-25 8:10 ` Martin Schwidefsky
0 siblings, 2 replies; 23+ messages in thread
From: Andrew Morton @ 2006-04-25 1:01 UTC (permalink / raw)
To: Martin Schwidefsky; +Cc: linux-mm, frankeh, rhim
Martin Schwidefsky <schwidefsky@de.ibm.com> wrote:
>
> The basic idea of host virtual assist (hva) is to give a host system
> which virtualizes the memory of its guest systems on a per page basis
> usage information for the guest pages. The host can then use this
> information to optimize the management of guest pages, in particular
> the paging. This optimizations can be used for unused (free) guest
> pages, for clean page cache pages, and for clean swap cache pages.
This is pretty significant stuff. It sounds like something which needs to
be worked through with other possible users - UML, Xen, vware, etc.
How come the reclaim has to be done in the host? I'd have thought that a
much simpler approach would be to perform a host->guest upcall saying
either "try to free up this many pages" or "free this page" or "free this
vector of pages"?
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: Page host virtual assist patches.
2006-04-25 1:01 ` Andrew Morton
@ 2006-04-25 7:19 ` Nick Piggin
2006-04-25 8:31 ` Martin Schwidefsky
2006-04-25 8:10 ` Martin Schwidefsky
1 sibling, 1 reply; 23+ messages in thread
From: Nick Piggin @ 2006-04-25 7:19 UTC (permalink / raw)
To: Andrew Morton; +Cc: Martin Schwidefsky, linux-mm, frankeh, rhim
Andrew Morton wrote:
> Martin Schwidefsky <schwidefsky@de.ibm.com> wrote:
>
>> The basic idea of host virtual assist (hva) is to give a host system
>> which virtualizes the memory of its guest systems on a per page basis
>> usage information for the guest pages. The host can then use this
>> information to optimize the management of guest pages, in particular
>> the paging. This optimizations can be used for unused (free) guest
>> pages, for clean page cache pages, and for clean swap cache pages.
>
>
> This is pretty significant stuff. It sounds like something which needs to
> be worked through with other possible users - UML, Xen, vware, etc.
>
> How come the reclaim has to be done in the host? I'd have thought that a
> much simpler approach would be to perform a host->guest upcall saying
> either "try to free up this many pages" or "free this page" or "free this
> vector of pages"?
Definitely. The current patches seem like just an extra layer to do
everything we can already -- reclaim unused pages and populate them
again when they get touched.
And complex they are. Having the core VM have to know about all this
weird stuff seems... not good.
--
SUSE Labs, Novell Inc.
Send instant messages to your online friends http://au.messenger.yahoo.com
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: Page host virtual assist patches.
2006-04-25 1:01 ` Andrew Morton
2006-04-25 7:19 ` Nick Piggin
@ 2006-04-25 8:10 ` Martin Schwidefsky
2006-04-25 8:26 ` Nick Piggin
2006-04-25 8:30 ` Andrew Morton
1 sibling, 2 replies; 23+ messages in thread
From: Martin Schwidefsky @ 2006-04-25 8:10 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-mm, frankeh, rhim
On Mon, 2006-04-24 at 18:01 -0700, Andrew Morton wrote:
> Martin Schwidefsky <schwidefsky@de.ibm.com> wrote:
> >
> > The basic idea of host virtual assist (hva) is to give a host system
> > which virtualizes the memory of its guest systems on a per page basis
> > usage information for the guest pages. The host can then use this
> > information to optimize the management of guest pages, in particular
> > the paging. This optimizations can be used for unused (free) guest
> > pages, for clean page cache pages, and for clean swap cache pages.
>
> This is pretty significant stuff. It sounds like something which needs to
> be worked through with other possible users - UML, Xen, vware, etc.
>
> How come the reclaim has to be done in the host? I'd have thought that a
> much simpler approach would be to perform a host->guest upcall saying
> either "try to free up this many pages" or "free this page" or "free this
> vector of pages"?
Because calling into the guest is too slow. You need to schedule a cpu,
the code that does the allocation needs to run, which might need other
pages, etc. The beauty of the scheme is that the host can immediately
remove a page that is mark as volatile or unused. No i/o, no scheduling,
nothing. Consider what that does to the latency of the hosts memory
allocation. Even if the percentage of discardable pages is small, lets
say 25% of the guests memory, the host will quickly find reusable
memory. If the vmscan of the host attempts to evict 100 pages, on
average it will start i/o for 75 of them, the other 25 are immediately
free for reuse.
--
blue skies,
Martin.
Martin Schwidefsky
Linux for zSeries Development & Services
IBM Deutschland Entwicklung GmbH
"Reality continues to ruin my life." - Calvin.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: Page host virtual assist patches.
2006-04-25 8:10 ` Martin Schwidefsky
@ 2006-04-25 8:26 ` Nick Piggin
2006-04-25 10:36 ` Martin Schwidefsky
2006-04-25 8:30 ` Andrew Morton
1 sibling, 1 reply; 23+ messages in thread
From: Nick Piggin @ 2006-04-25 8:26 UTC (permalink / raw)
To: schwidefsky; +Cc: Andrew Morton, linux-mm, frankeh, rhim
Martin Schwidefsky wrote:
> On Mon, 2006-04-24 at 18:01 -0700, Andrew Morton wrote:
>
>>Martin Schwidefsky <schwidefsky@de.ibm.com> wrote:
>>
>>> The basic idea of host virtual assist (hva) is to give a host system
>>> which virtualizes the memory of its guest systems on a per page basis
>>> usage information for the guest pages. The host can then use this
>>> information to optimize the management of guest pages, in particular
>>> the paging. This optimizations can be used for unused (free) guest
>>> pages, for clean page cache pages, and for clean swap cache pages.
>>
>>This is pretty significant stuff. It sounds like something which needs to
>>be worked through with other possible users - UML, Xen, vware, etc.
>>
>>How come the reclaim has to be done in the host? I'd have thought that a
>>much simpler approach would be to perform a host->guest upcall saying
>>either "try to free up this many pages" or "free this page" or "free this
>>vector of pages"?
>
>
> Because calling into the guest is too slow. You need to schedule a cpu,
> the code that does the allocation needs to run, which might need other
> pages, etc. The beauty of the scheme is that the host can immediately
> remove a page that is mark as volatile or unused. No i/o, no scheduling,
> nothing. Consider what that does to the latency of the hosts memory
> allocation. Even if the percentage of discardable pages is small, lets
> say 25% of the guests memory, the host will quickly find reusable
> memory. If the vmscan of the host attempts to evict 100 pages, on
> average it will start i/o for 75 of them, the other 25 are immediately
> free for reuse.
>
I don't think there is any beauty in this scheme, to be honest.
I don't see why calling into the host is bad - won't it be able to
make better reclaim decisions? If starting IO is the wrong thing to
do under a hypervisor, why is it the right thing to do on bare metal?
As for latency of host's memory allocation, it should attempt to
keep some buffer of memory free.
--
SUSE Labs, Novell Inc.
Send instant messages to your online friends http://au.messenger.yahoo.com
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: Page host virtual assist patches.
2006-04-25 8:10 ` Martin Schwidefsky
2006-04-25 8:26 ` Nick Piggin
@ 2006-04-25 8:30 ` Andrew Morton
2006-04-25 10:43 ` Martin Schwidefsky
1 sibling, 1 reply; 23+ messages in thread
From: Andrew Morton @ 2006-04-25 8:30 UTC (permalink / raw)
To: schwidefsky; +Cc: linux-mm, frankeh, rhim
Martin Schwidefsky <schwidefsky@de.ibm.com> wrote:
>
> On Mon, 2006-04-24 at 18:01 -0700, Andrew Morton wrote:
> > Martin Schwidefsky <schwidefsky@de.ibm.com> wrote:
> > >
> > > The basic idea of host virtual assist (hva) is to give a host system
> > > which virtualizes the memory of its guest systems on a per page basis
> > > usage information for the guest pages. The host can then use this
> > > information to optimize the management of guest pages, in particular
> > > the paging. This optimizations can be used for unused (free) guest
> > > pages, for clean page cache pages, and for clean swap cache pages.
> >
> > This is pretty significant stuff. It sounds like something which needs to
> > be worked through with other possible users - UML, Xen, vware, etc.
> >
> > How come the reclaim has to be done in the host? I'd have thought that a
> > much simpler approach would be to perform a host->guest upcall saying
> > either "try to free up this many pages" or "free this page" or "free this
> > vector of pages"?
>
> Because calling into the guest is too slow.
So speed it up ;)
> You need to schedule a cpu,
> the code that does the allocation needs to run, which might need other
> pages, etc. The beauty of the scheme is that the host can immediately
> remove a page that is mark as volatile or unused. No i/o, no scheduling,
> nothing. Consider what that does to the latency of the hosts memory
> allocation. Even if the percentage of discardable pages is small, lets
> say 25% of the guests memory, the host will quickly find reusable
> memory. If the vmscan of the host attempts to evict 100 pages, on
> average it will start i/o for 75 of them, the other 25 are immediately
> free for reuse.
Batching can do wonders. What's the expected/typical memory footprint of a
guest versus the machine's total physical memory?
And what's the typical total size of a guest?
Because a 100-page chunk sounds an awfully small work unit for a guest, let
alone for the host.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: Page host virtual assist patches.
2006-04-25 7:19 ` Nick Piggin
@ 2006-04-25 8:31 ` Martin Schwidefsky
2006-04-25 8:37 ` Andrew Morton
2006-04-25 10:04 ` Nick Piggin
0 siblings, 2 replies; 23+ messages in thread
From: Martin Schwidefsky @ 2006-04-25 8:31 UTC (permalink / raw)
To: Nick Piggin; +Cc: Andrew Morton, linux-mm, frankeh, rhim
On Tue, 2006-04-25 at 17:19 +1000, Nick Piggin wrote:
> Andrew Morton wrote:
> > Martin Schwidefsky <schwidefsky@de.ibm.com> wrote:
> >
> >> The basic idea of host virtual assist (hva) is to give a host system
> >> which virtualizes the memory of its guest systems on a per page basis
> >> usage information for the guest pages. The host can then use this
> >> information to optimize the management of guest pages, in particular
> >> the paging. This optimizations can be used for unused (free) guest
> >> pages, for clean page cache pages, and for clean swap cache pages.
> >
> >
> > This is pretty significant stuff. It sounds like something which needs to
> > be worked through with other possible users - UML, Xen, vware, etc.
> >
> > How come the reclaim has to be done in the host? I'd have thought that a
> > much simpler approach would be to perform a host->guest upcall saying
> > either "try to free up this many pages" or "free this page" or "free this
> > vector of pages"?
>
> Definitely. The current patches seem like just an extra layer to do
> everything we can already -- reclaim unused pages and populate them
> again when they get touched.
>
> And complex they are. Having the core VM have to know about all this
> weird stuff seems... not good.
The point here is WHO does the reclaim. Sure we can do the reclaim in
the guest but it is the host that has the memory pressure. To call into
the guest is not a good idea, if you have an idle guest you generally
increase the memory pressure because some of the guests pages might have
been swapped which are needed if the guest has to do the reclaim.
--
blue skies,
Martin.
Martin Schwidefsky
Linux for zSeries Development & Services
IBM Deutschland Entwicklung GmbH
"Reality continues to ruin my life." - Calvin.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: Page host virtual assist patches.
2006-04-25 8:31 ` Martin Schwidefsky
@ 2006-04-25 8:37 ` Andrew Morton
2006-04-25 10:44 ` Martin Schwidefsky
2006-04-25 10:04 ` Nick Piggin
1 sibling, 1 reply; 23+ messages in thread
From: Andrew Morton @ 2006-04-25 8:37 UTC (permalink / raw)
To: schwidefsky; +Cc: nickpiggin, linux-mm, frankeh, rhim
Martin Schwidefsky <schwidefsky@de.ibm.com> wrote:
>
> > Definitely. The current patches seem like just an extra layer to do
> > everything we can already -- reclaim unused pages and populate them
> > again when they get touched.
> >
> > And complex they are. Having the core VM have to know about all this
> > weird stuff seems... not good.
>
> The point here is WHO does the reclaim. Sure we can do the reclaim in
> the guest but it is the host that has the memory pressure. To call into
> the guest is not a good idea, if you have an idle guest you generally
> increase the memory pressure because some of the guests pages might have
> been swapped which are needed if the guest has to do the reclaim.
Cannot the guests employ text sharing?
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: Page host virtual assist patches.
2006-04-25 8:31 ` Martin Schwidefsky
2006-04-25 8:37 ` Andrew Morton
@ 2006-04-25 10:04 ` Nick Piggin
2006-04-25 11:28 ` Martin Schwidefsky
1 sibling, 1 reply; 23+ messages in thread
From: Nick Piggin @ 2006-04-25 10:04 UTC (permalink / raw)
To: schwidefsky; +Cc: Andrew Morton, linux-mm, frankeh, rhim
Martin Schwidefsky wrote:
> The point here is WHO does the reclaim. Sure we can do the reclaim in
> the guest but it is the host that has the memory pressure. To call into
By logic, if the host has memory pressure, and the guest is running on
the host, doesn't the guest have memory pressure? (Assuming you want to
reclaim guest pages, which you do because that is what your patches are
effectively doing anyway).
If the guest isn't under memory pressure (it has been allocated a fixed
amount of memory, and hasn't exceeded it), then you just don't call in.
Nor should you be employing this virtual assist reclaim on them.
> the guest is not a good idea, if you have an idle guest you generally
> increase the memory pressure because some of the guests pages might have
> been swapped which are needed if the guest has to do the reclaim.
It might be a win in heavy swapping conditions to get your hypervisor's
tentacles into the guests' core VM, I could believe that. Doesn't mean
it is a good idea in our purpose OS.
How badly did the simple approach fare?
--
SUSE Labs, Novell Inc.
Send instant messages to your online friends http://au.messenger.yahoo.com
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: Page host virtual assist patches.
2006-04-25 8:26 ` Nick Piggin
@ 2006-04-25 10:36 ` Martin Schwidefsky
2006-04-25 10:51 ` Nick Piggin
0 siblings, 1 reply; 23+ messages in thread
From: Martin Schwidefsky @ 2006-04-25 10:36 UTC (permalink / raw)
To: Nick Piggin; +Cc: Andrew Morton, linux-mm, frankeh, rhim
On Tue, 2006-04-25 at 18:26 +1000, Nick Piggin wrote:
> > Because calling into the guest is too slow. You need to schedule a cpu,
> > the code that does the allocation needs to run, which might need other
> > pages, etc. The beauty of the scheme is that the host can immediately
> > remove a page that is mark as volatile or unused. No i/o, no scheduling,
> > nothing. Consider what that does to the latency of the hosts memory
> > allocation. Even if the percentage of discardable pages is small, lets
> > say 25% of the guests memory, the host will quickly find reusable
> > memory. If the vmscan of the host attempts to evict 100 pages, on
> > average it will start i/o for 75 of them, the other 25 are immediately
> > free for reuse.
> >
>
> I don't think there is any beauty in this scheme, to be honest.
Beauty lies in the eye of the beholder. From my point of view there is
benefit to the method.
> I don't see why calling into the host is bad - won't it be able to
> make better reclaim decisions? If starting IO is the wrong thing to
> do under a hypervisor, why is it the right thing to do on bare metal?
First some assumptions about the environment. We are talking about a
paging hypervisor that runs several hundreds of guest Linux images. The
memory is overcommited, the sum of the guest memory sizes is larger than
the host memory by a factor of 2-3. Usually a large percentage of the
guests memory is paged out by the hypervisor.
Both the host and the guest follow an LRU strategy. That means that the
host will pick the oldest page from the idlest guest. Almost the same
would happen if you call into the idlest guest to let the guest free its
oldest page. But the catch is that the guest will touch a lot of page
doing its vmscan operation, if that causes a single additional host i/o
because a guest page needs to be retrieved from the host swap device,
you are already in negative territory.
> As for latency of host's memory allocation, it should attempt to
> keep some buffer of memory free.
It does attempt to keep some memory free. But lets say 1000 guest images
generate a lot of memory pressure. You will run out of memory, and
anything that speeds up the host reclaim will improve the situation. And
the method allows to reduce the number of i/o that the host needs to do.
Consider an old, volatile page that is picked for eviction. Without hva
the host will write it to the paging device. If the guest touches the
page again the host has to read it back to memory again. Two host i/o's.
If the host discards the page, the guest will get a discard fault when
it tries to reaccess the page. The guest will read the page from its
backing device. One guest i/o. Seems like a good deal to me..
--
blue skies,
Martin.
Martin Schwidefsky
Linux for zSeries Development & Services
IBM Deutschland Entwicklung GmbH
"Reality continues to ruin my life." - Calvin.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: Page host virtual assist patches.
2006-04-25 8:30 ` Andrew Morton
@ 2006-04-25 10:43 ` Martin Schwidefsky
0 siblings, 0 replies; 23+ messages in thread
From: Martin Schwidefsky @ 2006-04-25 10:43 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-mm, frankeh, rhim
On Tue, 2006-04-25 at 01:30 -0700, Andrew Morton wrote:
> > > This is pretty significant stuff. It sounds like something which needs to
> > > be worked through with other possible users - UML, Xen, vware, etc.
> > >
> > > How come the reclaim has to be done in the host? I'd have thought that a
> > > much simpler approach would be to perform a host->guest upcall saying
> > > either "try to free up this many pages" or "free this page" or "free this
> > > vector of pages"?
> >
> > Because calling into the guest is too slow.
>
> So speed it up ;)
We did.. the other way round by adding the ESSA :-)
> > You need to schedule a cpu,
> > the code that does the allocation needs to run, which might need other
> > pages, etc. The beauty of the scheme is that the host can immediately
> > remove a page that is mark as volatile or unused. No i/o, no scheduling,
> > nothing. Consider what that does to the latency of the hosts memory
> > allocation. Even if the percentage of discardable pages is small, lets
> > say 25% of the guests memory, the host will quickly find reusable
> > memory. If the vmscan of the host attempts to evict 100 pages, on
> > average it will start i/o for 75 of them, the other 25 are immediately
> > free for reuse.
>
> Batching can do wonders. What's the expected/typical memory footprint of a
> guest versus the machine's total physical memory?
Yes, batching will speed up the calls for one particular guest. Trouble
is that we are not talking about freeing 1000 pages from 1 guest. We
have the problem to free 1 page from 1000 guests.
> And what's the typical total size of a guest?
>
> Because a 100-page chunk sounds an awfully small work unit for a guest, let
> alone for the host.
The typical memory size of the guests depends on the workload it runs. A
typical memory size would be something like 256MB. The real catch is the
amount of memory overcommitment. And 100 pages sound about right if you
have 1000 guests.
--
blue skies,
Martin.
Martin Schwidefsky
Linux for zSeries Development & Services
IBM Deutschland Entwicklung GmbH
"Reality continues to ruin my life." - Calvin.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: Page host virtual assist patches.
2006-04-25 8:37 ` Andrew Morton
@ 2006-04-25 10:44 ` Martin Schwidefsky
2006-04-25 16:29 ` Andrew Morton
0 siblings, 1 reply; 23+ messages in thread
From: Martin Schwidefsky @ 2006-04-25 10:44 UTC (permalink / raw)
To: Andrew Morton; +Cc: nickpiggin, linux-mm, frankeh, rhim
On Tue, 2006-04-25 at 01:37 -0700, Andrew Morton wrote:
> > The point here is WHO does the reclaim. Sure we can do the reclaim in
> > the guest but it is the host that has the memory pressure. To call into
> > the guest is not a good idea, if you have an idle guest you generally
> > increase the memory pressure because some of the guests pages might have
> > been swapped which are needed if the guest has to do the reclaim.
>
> Cannot the guests employ text sharing?
Yes we can. We even had some patches for sharing the kernel text between
virtual machines. But the kernel text is only a small part of the memory
that gets accessed for a vmscan operation.
--
blue skies,
Martin.
Martin Schwidefsky
Linux for zSeries Development & Services
IBM Deutschland Entwicklung GmbH
"Reality continues to ruin my life." - Calvin.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: Page host virtual assist patches.
2006-04-25 10:36 ` Martin Schwidefsky
@ 2006-04-25 10:51 ` Nick Piggin
2006-04-25 12:18 ` Martin Schwidefsky
0 siblings, 1 reply; 23+ messages in thread
From: Nick Piggin @ 2006-04-25 10:51 UTC (permalink / raw)
To: schwidefsky; +Cc: Andrew Morton, linux-mm, frankeh, rhim
Martin Schwidefsky wrote:
> On Tue, 2006-04-25 at 18:26 +1000, Nick Piggin wrote:
>>I don't think there is any beauty in this scheme, to be honest.
>
>
> Beauty lies in the eye of the beholder. From my point of view there is
> benefit to the method.
That's 'cause you have an s390.
>
>
>>I don't see why calling into the host is bad - won't it be able to
>>make better reclaim decisions? If starting IO is the wrong thing to
>>do under a hypervisor, why is it the right thing to do on bare metal?
>
>
> First some assumptions about the environment. We are talking about a
> paging hypervisor that runs several hundreds of guest Linux images. The
> memory is overcommited, the sum of the guest memory sizes is larger than
> the host memory by a factor of 2-3. Usually a large percentage of the
> guests memory is paged out by the hypervisor.
>
> Both the host and the guest follow an LRU strategy. That means that the
> host will pick the oldest page from the idlest guest. Almost the same
> would happen if you call into the idlest guest to let the guest free its
> oldest page. But the catch is that the guest will touch a lot of page
> doing its vmscan operation, if that causes a single additional host i/o
> because a guest page needs to be retrieved from the host swap device,
> you are already in negative territory.
Why would most guest memory be paged out if the host reclaims by first
asking guests to reclaim, *then* paging them out?
I can understand that you observe most guest memory to be paged out
under pressure with the present scheme, but the dynamics will completely
change I think... You'll be left with shrunk guests, which you could
then mark as unreclaimable, stop asking them to reclaim, then page the
rest of their memory out from the host.
> It does attempt to keep some memory free. But lets say 1000 guest images
> generate a lot of memory pressure. You will run out of memory, and
> anything that speeds up the host reclaim will improve the situation. And
I believe that, and I'm sure there are lots of really invasive things you
could do to make it even faster...
--
SUSE Labs, Novell Inc.
Send instant messages to your online friends http://au.messenger.yahoo.com
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: Page host virtual assist patches.
2006-04-25 10:04 ` Nick Piggin
@ 2006-04-25 11:28 ` Martin Schwidefsky
2006-04-25 12:13 ` Nick Piggin
2006-04-27 20:55 ` jschopp
0 siblings, 2 replies; 23+ messages in thread
From: Martin Schwidefsky @ 2006-04-25 11:28 UTC (permalink / raw)
To: Nick Piggin; +Cc: Andrew Morton, linux-mm, frankeh, rhim
On Tue, 2006-04-25 at 20:04 +1000, Nick Piggin wrote:
> Martin Schwidefsky wrote:
>
> > The point here is WHO does the reclaim. Sure we can do the reclaim in
> > the guest but it is the host that has the memory pressure. To call into
>
> By logic, if the host has memory pressure, and the guest is running on
> the host, doesn't the guest have memory pressure? (Assuming you want to
> reclaim guest pages, which you do because that is what your patches are
> effectively doing anyway).
The memory pressure of the host is generated by the guests. But the
guest that has to give up memory in general is NOT the guest that is
currently running. And no, the running guest system does not have memory
pressure since its "real" memory is virtualized by the host. The guest
simply accesses the virtual page frames. If the host has paged them, the
host gets the exception and has to deal with it.
> If the guest isn't under memory pressure (it has been allocated a fixed
> amount of memory, and hasn't exceeded it), then you just don't call in.
> Nor should you be employing this virtual assist reclaim on them.
The guests have a fixed host-virtual memory size. They do not have a
fixed host-physical memory size. And for an idle guest we will reclaim
old pages from it until nothing remains of the guest in memory.
> > the guest is not a good idea, if you have an idle guest you generally
> > increase the memory pressure because some of the guests pages might have
> > been swapped which are needed if the guest has to do the reclaim.
>
> It might be a win in heavy swapping conditions to get your hypervisor's
> tentacles into the guests' core VM, I could believe that. Doesn't mean
> it is a good idea in our purpose OS.
Yes, we do heavy swapping in the hypervisor. For a purpose OS it is not
a good idea but then done set CONFIG_PAGE_HVA and all the hva code turns
into nops.
> How badly did the simple approach fare?
Which simple approach do you mean? The guest ballooner? That works
reasonably well for a small number of guests. If you keep adding guests
the overhead for the guest calls increases. Ultimately we believe that a
combination of the ballooner method and the new hva method will yield
the best results.
--
blue skies,
Martin.
Martin Schwidefsky
Linux for zSeries Development & Services
IBM Deutschland Entwicklung GmbH
"Reality continues to ruin my life." - Calvin.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: Page host virtual assist patches.
2006-04-25 11:28 ` Martin Schwidefsky
@ 2006-04-25 12:13 ` Nick Piggin
2006-04-25 14:15 ` Martin Schwidefsky
2006-04-27 20:55 ` jschopp
1 sibling, 1 reply; 23+ messages in thread
From: Nick Piggin @ 2006-04-25 12:13 UTC (permalink / raw)
To: schwidefsky; +Cc: Andrew Morton, linux-mm, frankeh, rhim
Martin Schwidefsky wrote:
> On Tue, 2006-04-25 at 20:04 +1000, Nick Piggin wrote:
>
>>Martin Schwidefsky wrote:
>>
>>
>>>The point here is WHO does the reclaim. Sure we can do the reclaim in
>>>the guest but it is the host that has the memory pressure. To call into
>>
>>By logic, if the host has memory pressure, and the guest is running on
>>the host, doesn't the guest have memory pressure? (Assuming you want to
>>reclaim guest pages, which you do because that is what your patches are
>>effectively doing anyway).
>
>
> The memory pressure of the host is generated by the guests. But the
> guest that has to give up memory in general is NOT the guest that is
> currently running. And no, the running guest system does not have memory
> pressure since its "real" memory is virtualized by the host. The guest
> simply accesses the virtual page frames. If the host has paged them, the
> host gets the exception and has to deal with it.
>
>
>>If the guest isn't under memory pressure (it has been allocated a fixed
>>amount of memory, and hasn't exceeded it), then you just don't call in.
>>Nor should you be employing this virtual assist reclaim on them.
>
>
> The guests have a fixed host-virtual memory size. They do not have a
> fixed host-physical memory size.
That's just arguing semantics now. You are advocating to involve guests
in cooperating with memory management with the host. Ergo, if there is
memory pressure in the host then it is not a "layering violation" to ask
guests to reclaim memory as if they were under memory pressure too.
No more a violation than having the host reclaim the guest's memory from
under it.
>>>the guest is not a good idea, if you have an idle guest you generally
>>>increase the memory pressure because some of the guests pages might have
>>>been swapped which are needed if the guest has to do the reclaim.
>>
>>It might be a win in heavy swapping conditions to get your hypervisor's
>>tentacles into the guests' core VM, I could believe that. Doesn't mean
>>it is a good idea in our purpose OS.
>
>
> Yes, we do heavy swapping in the hypervisor. For a purpose OS it is not
> a good idea but then done set CONFIG_PAGE_HVA and all the hva code turns
> into nops.
But anybody who modifies or tries to understand the code and races etc
involved has to know about all this stuff. That is my problem with it.
I'm not worried about the overhead at all, because I presume you have
made it zero for the !CONFIG_PAGE_HVA case.
>>How badly did the simple approach fare?
>
>
> Which simple approach do you mean? The guest ballooner? That works
> reasonably well for a small number of guests. If you keep adding guests
> the overhead for the guest calls increases. Ultimately we believe that a
> combination of the ballooner method and the new hva method will yield
> the best results.
Yes, that simple approach (presumably the guest ballooner allocates
memory from the guest and frees it to the host or something similar).
I'd be interested to see numbers from real workloads...
I don't think the hva method is reasonable as it is. Let's see if we
can improve host->guest driven reclaiming first.
--
SUSE Labs, Novell Inc.
Send instant messages to your online friends http://au.messenger.yahoo.com
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: Page host virtual assist patches.
2006-04-25 10:51 ` Nick Piggin
@ 2006-04-25 12:18 ` Martin Schwidefsky
0 siblings, 0 replies; 23+ messages in thread
From: Martin Schwidefsky @ 2006-04-25 12:18 UTC (permalink / raw)
To: Nick Piggin; +Cc: Andrew Morton, linux-mm, frankeh, rhim
On Tue, 2006-04-25 at 20:51 +1000, Nick Piggin wrote:
> > Beauty lies in the eye of the beholder. From my point of view there is
> > benefit to the method.
>
> That's 'cause you have an s390.
And everbody else do not have to use the code. It configuratable.
> > First some assumptions about the environment. We are talking about a
> > paging hypervisor that runs several hundreds of guest Linux images. The
> > memory is overcommited, the sum of the guest memory sizes is larger than
> > the host memory by a factor of 2-3. Usually a large percentage of the
> > guests memory is paged out by the hypervisor.
> >
> > Both the host and the guest follow an LRU strategy. That means that the
> > host will pick the oldest page from the idlest guest. Almost the same
> > would happen if you call into the idlest guest to let the guest free its
> > oldest page. But the catch is that the guest will touch a lot of page
> > doing its vmscan operation, if that causes a single additional host i/o
> > because a guest page needs to be retrieved from the host swap device,
> > you are already in negative territory.
>
> Why would most guest memory be paged out if the host reclaims by first
> asking guests to reclaim, *then* paging them out?
Because memory for guests running under z/VM is overcommitted. Even with
the ballooner that reduces the guest memory size to the >guests< working
set size, the host will still do paging on the remaining guest pages.
> I can understand that you observe most guest memory to be paged out
> under pressure with the present scheme, but the dynamics will completely
> change I think... You'll be left with shrunk guests, which you could
> then mark as unreclaimable, stop asking them to reclaim, then page the
> rest of their memory out from the host.
Yes I think that this works. With 5 guest images. With 1000 images? I
doubt it, the overhead just adds up.
> > It does attempt to keep some memory free. But lets say 1000 guest images
> > generate a lot of memory pressure. You will run out of memory, and
> > anything that speeds up the host reclaim will improve the situation. And
>
> I believe that, and I'm sure there are lots of really invasive things you
> could do to make it even faster...
With enough images you have a lot of dynamics in the shift of memory
between guests. With the ballooner you can do the low-frequency shifts
to get the guests roughly to their working set size. The high-frequency
shifts between guests are better done with hva.
--
blue skies,
Martin.
Martin Schwidefsky
Linux for zSeries Development & Services
IBM Deutschland Entwicklung GmbH
"Reality continues to ruin my life." - Calvin.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: Page host virtual assist patches.
2006-04-25 12:13 ` Nick Piggin
@ 2006-04-25 14:15 ` Martin Schwidefsky
2006-04-26 1:13 ` Nick Piggin
0 siblings, 1 reply; 23+ messages in thread
From: Martin Schwidefsky @ 2006-04-25 14:15 UTC (permalink / raw)
To: Nick Piggin; +Cc: Andrew Morton, linux-mm, frankeh, rhim
On Tue, 2006-04-25 at 22:13 +1000, Nick Piggin wrote:
> >>If the guest isn't under memory pressure (it has been allocated a fixed
> >>amount of memory, and hasn't exceeded it), then you just don't call in.
> >>Nor should you be employing this virtual assist reclaim on them.
> >
> >
> > The guests have a fixed host-virtual memory size. They do not have a
> > fixed host-physical memory size.
>
> That's just arguing semantics now. You are advocating to involve guests
> in cooperating with memory management with the host. Ergo, if there is
> memory pressure in the host then it is not a "layering violation" to ask
> guests to reclaim memory as if they were under memory pressure too.
>
> No more a violation than having the host reclaim the guest's memory from
> under it.
I wouldn't call it a violation. But yes both ways of doing achieve the
same result. One of the guest pages is reclaimed. The million dollar
question is which way is faster.
> > Yes, we do heavy swapping in the hypervisor. For a purpose OS it is not
> > a good idea but then done set CONFIG_PAGE_HVA and all the hva code turns
> > into nops.
>
> But anybody who modifies or tries to understand the code and races etc
> involved has to know about all this stuff. That is my problem with it.
Oh, yes, I perfectly understand this. The code is rather complex.
> I'm not worried about the overhead at all, because I presume you have
> made it zero for the !CONFIG_PAGE_HVA case.
Yes, we made sure of that.
> > Which simple approach do you mean? The guest ballooner? That works
> > reasonably well for a small number of guests. If you keep adding guests
> > the overhead for the guest calls increases. Ultimately we believe that a
> > combination of the ballooner method and the new hva method will yield
> > the best results.
>
> Yes, that simple approach (presumably the guest ballooner allocates
> memory from the guest and frees it to the host or something similar).
> I'd be interested to see numbers from real workloads...
>
> I don't think the hva method is reasonable as it is. Let's see if we
> can improve host->guest driven reclaiming first.
So you believe that the host->guest driven relaiming can be improved to
a point where hva is superfluous. I do not believe that. Lets agree to
disagree here. Any findings in the hva code itself?
Anyway, thanks for you insights.
--
blue skies,
Martin.
Martin Schwidefsky
Linux for zSeries Development & Services
IBM Deutschland Entwicklung GmbH
"Reality continues to ruin my life." - Calvin.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: Page host virtual assist patches.
2006-04-25 10:44 ` Martin Schwidefsky
@ 2006-04-25 16:29 ` Andrew Morton
2006-04-25 17:04 ` Martin Schwidefsky
0 siblings, 1 reply; 23+ messages in thread
From: Andrew Morton @ 2006-04-25 16:29 UTC (permalink / raw)
To: schwidefsky; +Cc: nickpiggin, linux-mm, frankeh, rhim
Martin Schwidefsky <schwidefsky@de.ibm.com> wrote:
>
> On Tue, 2006-04-25 at 01:37 -0700, Andrew Morton wrote:
> > > The point here is WHO does the reclaim. Sure we can do the reclaim in
> > > the guest but it is the host that has the memory pressure. To call into
> > > the guest is not a good idea, if you have an idle guest you generally
> > > increase the memory pressure because some of the guests pages might have
> > > been swapped which are needed if the guest has to do the reclaim.
> >
> > Cannot the guests employ text sharing?
>
> Yes we can. We even had some patches for sharing the kernel text between
> virtual machines. But the kernel text is only a small part of the memory
> that gets accessed for a vmscan operation.
>
And the bulk of the rest will be accesses to mem_map[]. I guess the hva
patches still require that each guests's mem_map[] be in host memory, but
not necessarily in guest memory?
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: Page host virtual assist patches.
2006-04-25 16:29 ` Andrew Morton
@ 2006-04-25 17:04 ` Martin Schwidefsky
0 siblings, 0 replies; 23+ messages in thread
From: Martin Schwidefsky @ 2006-04-25 17:04 UTC (permalink / raw)
To: Andrew Morton; +Cc: nickpiggin, linux-mm, frankeh, rhim
On Tue, 2006-04-25 at 09:29 -0700, Andrew Morton wrote:
> > Yes we can. We even had some patches for sharing the kernel text between
> > virtual machines. But the kernel text is only a small part of the memory
> > that gets accessed for a vmscan operation.
> >
>
> And the bulk of the rest will be accesses to mem_map[]. I guess the hva
> patches still require that each guests's mem_map[] be in host memory, but
> not necessarily in guest memory?
The host does not need the mem_map information. The state information of
the guest pages is passed to the host by use of an instruction. It is
stored in the host page table for the guest (well actually in the PGSTE
which is the virtualization extension of the page table entry). The host
does not need any data from the guest memory for the decision to discard
a page, only the state information. Just like the linux kernel does not
need any user space data to swap a user page except for the page
content, with the difference that the host can discard the page based on
the state information.
--
blue skies,
Martin.
Martin Schwidefsky
Linux for zSeries Development & Services
IBM Deutschland Entwicklung GmbH
"Reality continues to ruin my life." - Calvin.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: Page host virtual assist patches.
2006-04-25 14:15 ` Martin Schwidefsky
@ 2006-04-26 1:13 ` Nick Piggin
2006-04-26 7:39 ` Martin Schwidefsky
0 siblings, 1 reply; 23+ messages in thread
From: Nick Piggin @ 2006-04-26 1:13 UTC (permalink / raw)
To: schwidefsky; +Cc: Andrew Morton, linux-mm, frankeh, rhim
Martin Schwidefsky wrote:
>On Tue, 2006-04-25 at 22:13 +1000, Nick Piggin wrote:
>
>>Yes, that simple approach (presumably the guest ballooner allocates
>>memory from the guest and frees it to the host or something similar).
>>I'd be interested to see numbers from real workloads...
>>
>>I don't think the hva method is reasonable as it is. Let's see if we
>>can improve host->guest driven reclaiming first.
>>
>
>So you believe that the host->guest driven relaiming can be improved to
>a point where hva is superfluous. I do not believe that. Lets agree to
>
I'm not sure that it would ever be quite as fast, but I hope it
could be improved to the point that it is adequate. Yes.
>disagree here. Any findings in the hva code itself?
>
OK, we'll agree to disagree for now :)
I did start looking at the code but as you can see I only reviewed
patch 1 before getting sidetracked. I'll try to find some more time
to look at in the next few days.
Nick
--
Send instant messages to your online friends http://au.messenger.yahoo.com
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: Page host virtual assist patches.
2006-04-26 1:13 ` Nick Piggin
@ 2006-04-26 7:39 ` Martin Schwidefsky
2006-04-26 12:03 ` Hubertus Franke
0 siblings, 1 reply; 23+ messages in thread
From: Martin Schwidefsky @ 2006-04-26 7:39 UTC (permalink / raw)
To: Nick Piggin; +Cc: Andrew Morton, linux-mm, frankeh, rhim
On Wed, 2006-04-26 at 11:13 +1000, Nick Piggin wrote:
> OK, we'll agree to disagree for now :)
>
> I did start looking at the code but as you can see I only reviewed
> patch 1 before getting sidetracked. I'll try to find some more time
> to look at in the next few days.
Thanks Nick, that would be greatly appreciated. The code is hard to
understand, it's memory races squared. Races of the hypervisor actions
against races in the Linux mm. Lovely. It took use quite a while to get
that beast working, on z/VM, Linux and the millicode.
--
blue skies,
Martin.
Martin Schwidefsky
Linux for zSeries Development & Services
IBM Deutschland Entwicklung GmbH
"Reality continues to ruin my life." - Calvin.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: Page host virtual assist patches.
2006-04-26 7:39 ` Martin Schwidefsky
@ 2006-04-26 12:03 ` Hubertus Franke
0 siblings, 0 replies; 23+ messages in thread
From: Hubertus Franke @ 2006-04-26 12:03 UTC (permalink / raw)
To: schwidefsky; +Cc: Nick Piggin, Andrew Morton, linux-mm, rhim
Martin Schwidefsky wrote:
> On Wed, 2006-04-26 at 11:13 +1000, Nick Piggin wrote:
>
>>OK, we'll agree to disagree for now :)
>>
>>I did start looking at the code but as you can see I only reviewed
>>patch 1 before getting sidetracked. I'll try to find some more time
>>to look at in the next few days.
>
>
> Thanks Nick, that would be greatly appreciated. The code is hard to
> understand, it's memory races squared. Races of the hypervisor actions
> against races in the Linux mm. Lovely. It took use quite a while to get
> that beast working, on z/VM, Linux and the millicode.
>
Martin, one thing that should be pointed out that despite these race
conditions, the principle concept is rather clean.
It's like putting a lock at the right place, you got to know what is
protected.
If the documentation is not clear, then lets change it.
As I see, you have not included the Documentation part into the latest
patch submission. I think doing that will help.
Kernel writers should understand when they need to make the page stable
when they should attempt to make it volatile, when the system does it for them
due to the page_cache_release.
In most cases, those functions are burried in the lower level functions already.
It just gets a bit hairy with LRU races.
Nick, your feedback on what is not clear, would help us properly address that in
the documentation.
As for code impact, I consider this very similar to the KMAP interface.
There's not need it for 64-bit architectures, but the interface is clean and
optimized away by the compiler. The same holds true here, as Martin pointed
out there is no change in the code when disabled (one exception, not on the critical
part).
-- Hubertus
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: Page host virtual assist patches.
2006-04-25 11:28 ` Martin Schwidefsky
2006-04-25 12:13 ` Nick Piggin
@ 2006-04-27 20:55 ` jschopp
1 sibling, 0 replies; 23+ messages in thread
From: jschopp @ 2006-04-27 20:55 UTC (permalink / raw)
To: schwidefsky; +Cc: Nick Piggin, Andrew Morton, linux-mm, frankeh, rhim
> Which simple approach do you mean? The guest ballooner? That works
> reasonably well for a small number of guests. If you keep adding guests
> the overhead for the guest calls increases. Ultimately we believe that a
> combination of the ballooner method and the new hva method will yield
> the best results.
Don't forget memory hotplug in your combination mix. Your ballooner fragments the hell out
of your memory, and your hva method requires some work to keep the state updated. Memory
hotplug on the other hand suffers from neither of those problems.
That said, I rather like the hva method. It's quite clever.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 23+ messages in thread
end of thread, other threads:[~2006-04-27 20:56 UTC | newest]
Thread overview: 23+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2006-04-24 12:34 Page host virtual assist patches Martin Schwidefsky
2006-04-25 1:01 ` Andrew Morton
2006-04-25 7:19 ` Nick Piggin
2006-04-25 8:31 ` Martin Schwidefsky
2006-04-25 8:37 ` Andrew Morton
2006-04-25 10:44 ` Martin Schwidefsky
2006-04-25 16:29 ` Andrew Morton
2006-04-25 17:04 ` Martin Schwidefsky
2006-04-25 10:04 ` Nick Piggin
2006-04-25 11:28 ` Martin Schwidefsky
2006-04-25 12:13 ` Nick Piggin
2006-04-25 14:15 ` Martin Schwidefsky
2006-04-26 1:13 ` Nick Piggin
2006-04-26 7:39 ` Martin Schwidefsky
2006-04-26 12:03 ` Hubertus Franke
2006-04-27 20:55 ` jschopp
2006-04-25 8:10 ` Martin Schwidefsky
2006-04-25 8:26 ` Nick Piggin
2006-04-25 10:36 ` Martin Schwidefsky
2006-04-25 10:51 ` Nick Piggin
2006-04-25 12:18 ` Martin Schwidefsky
2006-04-25 8:30 ` Andrew Morton
2006-04-25 10:43 ` Martin Schwidefsky
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).