linux-next.vger.kernel.org archive mirror
* linux-next: manual merge of the nommu tree
@ 2009-01-08  4:30 Stephen Rothwell
  2009-01-08  7:20 ` Matt Mackall
  2009-01-08 12:46 ` David Howells
  0 siblings, 2 replies; 7+ messages in thread
From: Stephen Rothwell @ 2009-01-08  4:30 UTC (permalink / raw)
  To: David Howells; +Cc: linux-next, Matt Mackall

Hi David,

Today's linux-next merge of the nommu tree got a conflict in
mm/tiny-shmem.c between commit 853ac43ab194f5051b27a55060215d696dc9480d
("shmem: unify regular and tiny shmem") from Linus' tree and commit
d10f9907ba3626261f45dbb498867f441f06c486 ("shmem: remove unused
shmem_get_unmapped_area") from the nommu tree.

The former commit removed the file, so I have removed it as well.
-- 
Cheers,
Stephen Rothwell                    sfr@canb.auug.org.au
http://www.canb.auug.org.au/~sfr/

* linux-next: manual merge of the nommu tree
@ 2008-12-11  7:43 Stephen Rothwell
  0 siblings, 0 replies; 7+ messages in thread
From: Stephen Rothwell @ 2008-12-11  7:43 UTC (permalink / raw)
  To: David Howells; +Cc: linux-next, Vegard Nossum, Pekka Enberg, Ingo Molnar

Hi David,

Today's linux-next merge of the nommu tree got a conflict in
kernel/fork.c between commit e6df1035b1b488cafde1e69f1a25f2706c3ac1f7
("kmemcheck: add mm functions") from the kmemcheck tree and commit
f65466230e8afd45f716e5b836711ce270f45105 ("NOMMU: Make VMAs per MM as for
MMU-mode linux") from the nommu tree.

I fixed it up as best I could (see below) and can carry the fix as
necessary (though it would be good if you guys could figure out a better
solution).
-- 
Cheers,
Stephen Rothwell                    sfr@canb.auug.org.au
http://www.canb.auug.org.au/~sfr/

diff --cc kernel/fork.c
index 446167a,d7c5b42..0000000
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@@ -1459,23 -1451,21 +1459,21 @@@ void __init proc_caches_init(void
  {
  	sighand_cachep = kmem_cache_create("sighand_cache",
  			sizeof(struct sighand_struct), 0,
 -			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_DESTROY_BY_RCU,
 -			sighand_ctor);
 +			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_DESTROY_BY_RCU|
 +			SLAB_NOTRACK, sighand_ctor);
  	signal_cachep = kmem_cache_create("signal_cache",
  			sizeof(struct signal_struct), 0,
 -			SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL);
 +			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_NOTRACK, NULL);
  	files_cachep = kmem_cache_create("files_cache",
  			sizeof(struct files_struct), 0,
 -			SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL);
 +			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_NOTRACK, NULL);
  	fs_cachep = kmem_cache_create("fs_cache",
  			sizeof(struct fs_struct), 0,
 -			SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL);
 +			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_NOTRACK, NULL);
- 	vm_area_cachep = kmem_cache_create("vm_area_struct",
- 			sizeof(struct vm_area_struct), 0,
- 			SLAB_PANIC|SLAB_NOTRACK, NULL);
  	mm_cachep = kmem_cache_create("mm_struct",
  			sizeof(struct mm_struct), ARCH_MIN_MMSTRUCT_ALIGN,
 -			SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL);
 +			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_NOTRACK, NULL);
+ 	mmap_init();
  }
  
  /*
diff --git a/mm/mmap.c b/mm/mmap.c
index 6e5fc98..d85193e 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2478,5 +2478,5 @@ void __init mmap_init(void)
 {
 	vm_area_cachep = kmem_cache_create("vm_area_struct",
 			sizeof(struct vm_area_struct), 0,
-			SLAB_PANIC, NULL);
+			SLAB_PANIC|SLAB_NOTRACK, NULL);
 }
diff --git a/mm/nommu.c b/mm/nommu.c
index 61b7f7a..efb3d01 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -447,10 +447,10 @@ void __init mmap_init(void)
 {
 	vm_region_jar = kmem_cache_create("vm_region_jar",
 					  sizeof(struct vm_region), 0,
-					  SLAB_PANIC, NULL);
+					  SLAB_PANIC|SLAB_NOTRACK, NULL);
 	vm_area_cachep = kmem_cache_create("vm_area_struct",
 					   sizeof(struct vm_area_struct), 0,
-					   SLAB_PANIC, NULL);
+					   SLAB_PANIC|SLAB_NOTRACK, NULL);
 }
 
 /*

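For reference, every hunk above is a kmem_cache_create() call: it creates a
named slab cache, the kmemcheck tree adds SLAB_NOTRACK to the behaviour flags
so kmemcheck skips those caches, and the nommu tree moves the vm_area_struct
cache creation into mmap_init().  A minimal sketch of the call shape the
fix-up preserves (the struct and function names here are made up for
illustration, not taken from either tree):

	#include <linux/init.h>
	#include <linux/slab.h>

	struct example_obj {
		int field;
	};

	static struct kmem_cache *example_cachep;

	void __init example_caches_init(void)
	{
		/* name, object size, alignment, behaviour flags, constructor */
		example_cachep = kmem_cache_create("example_cache",
				sizeof(struct example_obj), 0,
				SLAB_PANIC | SLAB_NOTRACK, NULL);
		/* SLAB_PANIC: panic if cache creation fails, so no NULL
		 * check is needed; SLAB_NOTRACK: tell kmemcheck not to
		 * track allocations from this cache. */
	}
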
* linux-next: manual merge of the nommu tree
@ 2008-12-11  7:25 Stephen Rothwell
  2008-12-11  7:42 ` Paul Mundt
  0 siblings, 1 reply; 7+ messages in thread
From: Stephen Rothwell @ 2008-12-11  7:25 UTC (permalink / raw)
  To: David Howells; +Cc: linux-next, Christoph Lameter, Paul Mundt, Pekka Enberg

Hi David,

Today's linux-next merge of the nommu tree got a conflict in
Documentation/sysctl/vm.txt between commit
cb8fc7a88a0069ebdab220180bf9b45e568f0ba9 ("slub: Trigger defragmentation
from memory reclaim") from the slab tree and commit
1a5d96d0151ce2ec77bf08498751fe8d9365c95f ("NOMMU: Make mmap allocation
page trimming behaviour configurable.") from the nommu tree.

Just overlapping additions.  I fixed it up (see below) and can carry the
fix as necessary.
-- 
Cheers,
Stephen Rothwell                    sfr@canb.auug.org.au
http://www.canb.auug.org.au/~sfr/

diff --cc Documentation/sysctl/vm.txt
index 5e7329a,e9a5c28..0000000
--- a/Documentation/sysctl/vm.txt
+++ b/Documentation/sysctl/vm.txt
@@@ -38,7 -38,7 +38,8 @@@ Currently, these files are in /proc/sys
  - numa_zonelist_order
  - nr_hugepages
  - nr_overcommit_hugepages
 +- slab_defrag_limit
+ - nr_trim_pages		(only if CONFIG_MMU=n)
  
  ==============================================================
  
@@@ -351,11 -351,17 +352,27 @@@ See Documentation/vm/hugetlbpage.tx
  
  ==============================================================
  
 +slab_defrag_limit
 +
 +Determines the frequency of calls from reclaim into slab defragmentation.
 +Slab defrag reclaims objects from sparsely populated slab pages.
 +The default is 1000. Increase if slab defragmentation occurs
 +too frequently. Decrease if more slab defragmentation passes
 +are needed. The slabinfo tool can report on the frequency of the callbacks.
 +
++==============================================================
++
+ nr_trim_pages
+ 
+ This is available only on NOMMU kernels.
+ 
+ This value adjusts the excess page trimming behaviour of power-of-2 aligned
+ NOMMU mmap allocations.
+ 
+ A value of 0 disables trimming of allocations entirely, while a value of 1
+ trims excess pages aggressively. Any value >= 1 acts as the watermark where
+ trimming of allocations is initiated.
+ 
+ The default value is 1.
+ 
+ See Documentation/nommu-mmap.txt for more information.

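As a concrete illustration of the sysctl documented above (a hypothetical
userspace sketch, not part of the patch; it assumes a NOMMU kernel, where
/proc/sys/vm/nr_trim_pages exists):

	#include <stdio.h>

	int main(void)
	{
		/* A value of 0 disables trimming of excess pages entirely;
		 * any value >= 1 is the watermark at which trimming starts. */
		FILE *f = fopen("/proc/sys/vm/nr_trim_pages", "w");
		if (!f) {
			perror("/proc/sys/vm/nr_trim_pages");
			return 1;
		}
		fputs("0\n", f);
		fclose(f);
		return 0;
	}
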