* SLUB patches in mm
@ 2008-01-30  4:25 Christoph Lameter
  2008-01-30 23:32 ` Andrew Morton
  0 siblings, 1 reply; 6+ messages in thread
From: Christoph Lameter @ 2008-01-30  4:25 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, penberg, Matthew Wilcox

We still have not settled whether, and by how much, the performance 
improvement patches help. The cycle measurements only go so far. I have 
found some minor regressions and would like to hold most of the 
performance patches for now. It seems that Intel has an environment in 
which more detailed performance tests could be run on individual patches.

Some of them would also work much better with upcoming patchsets 
(cpu_alloc, for example) and may not be needed at all if we go via 
cpu_alloc first.

Most of the performance patches are only small-scale improvements 
(0.5 - 2%). Tests like tbench typically run in a pretty unstable 
environment (it seems that recompiling the kernel with some unrelated 
patches can cause larger changes than these patches do), and I really 
do not want to merge patches that needlessly complicate the allocator 
or cause slight regressions.


slub-move-count_partial.patch
slub-rename-numa-defrag_ratio-to-remote_node_defrag_ratio.patch
slub-consolidate-add_partial-and-add_partial_tail-to-one-function.patch

Merge (The consolidate-add-partial patch seems to improve speed by 1-2%. 
       It was intended as a cleanup only, but it has a similar effect to 
       the hackbench fix: it changes the handling of partial slabs 
       slightly and allows slabs to gather more objects before being 
       used for allocations again. From that I think we can conclude 
       that work on the partial list handling could yield some 
       performance gains.)
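
For reference, the consolidation amounts to folding the head and tail 
variants into a single helper that takes a flag. A minimal sketch of 
what such a function can look like (field names follow the mm/slub.c of 
that era; this is an illustration, not the patch itself):

/*
 * Sketch: one add_partial() with a tail flag instead of separate
 * add_partial()/add_partial_tail() helpers. Queueing at the tail lets
 * a slab sit on the partial list longer and gather more free objects
 * before it is picked for allocations again.
 */
static void add_partial(struct kmem_cache_node *n,
                        struct page *page, int tail)
{
        spin_lock(&n->list_lock);
        n->nr_partial++;
        if (tail)
                list_add_tail(&page->lru, &n->partial);
        else
                list_add(&page->lru, &n->partial);
        spin_unlock(&n->list_lock);
}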

slub-use-non-atomic-bit-unlock.patch

Do not merge. Surprisingly, removing the atomic operation on unlock 
seems to cause slight regressions in tbench. I guess it influences the 
speed with which a cacheline drops out of the cpu caches. It does 
improve performance if a single thread is running.
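
For reference, the change this patch plays with is roughly the 
difference between bit_spin_unlock() and its non-atomic 
__bit_spin_unlock() counterpart on the slab lock bit. A minimal sketch 
of the locking helpers with the non-atomic unlock, assuming the lock is 
the PG_locked bit in page->flags as in the slub.c of that era (an 
illustration, not the actual patch):

/*
 * Sketch only: the lock side is unchanged, the unlock side uses the
 * non-atomic helper, so no locked read-modify-write cycle hits the
 * cacheline on release.
 */
static __always_inline void slab_lock(struct page *page)
{
        bit_spin_lock(PG_locked, &page->flags);
}

static __always_inline void slab_unlock(struct page *page)
{
        __bit_spin_unlock(PG_locked, &page->flags);
}

Which variant leaves the cacheline in the better state for the next 
acquirer is exactly the effect the tbench numbers seem to be sensitive 
to.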


slub-fix-coding-style-violations.patch
slub-fix-coding-style-violations-checkpatch-fixes.patch

Merge (obviously)


slub-noinline-some-functions-to-avoid-them-being-folded-into-alloc-free.patch
slub-move-kmem_cache_node-determination-into-add_full-and-add_partial.patch

Do not merge


slub-move-kmem_cache_node-determination-into-add_full-and-add_partial-slub-workaround-for-lockdep-confusion.patch

Merge (this is just a lockdep fix)


slub-avoid-checking-for-a-valid-object-before-zeroing-on-the-fast-path.patch
slub-__slab_alloc-exit-path-consolidation.patch
slub-provide-unique-end-marker-for-each-slab.patch
slub-provide-unique-end-marker-for-each-slab-fix.patch
slub-avoid-referencing-kmem_cache-structure-in-__slab_alloc.patch
slub-optional-fast-path-using-cmpxchg_local.patch
slub-do-our-own-locking-via-slab_lock-and-slab_unlock.patch
slub-do-our-own-locking-via-slab_lock-and-slab_unlock-checkpatch-fixes.patch
slub-do-our-own-locking-via-slab_lock-and-slab_unlock-fix.patch
slub-restructure-slab-alloc.patch

Do not merge. The cmpxchg_local work still requires preemption 
disable/enable without cpu_alloc, and Intel's tests so far do not show 
a convincing gain. The do-our-own-locking series also removes the 
atomic unlock operation, causing trouble similar to 
slub-use-non-atomic-bit-unlock.patch
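
To make the preemption point concrete: without cpu_alloc the fast path 
has to look up the per-cpu kmem_cache_cpu structure and stay on that 
CPU for the duration of the cmpxchg_local() loop, so the lock-free 
operation still ends up bracketed by preempt_disable()/preempt_enable(). 
A rough sketch (simplified; get_cpu_slab() and the freelist layout 
follow the slub.c of that era, and the slow-path fallback is omitted):

/*
 * Sketch of a cmpxchg_local() allocation fast path. The
 * preempt_disable()/preempt_enable() pair is the cost referred to
 * above: cmpxchg_local() is only safe and cheap because it is
 * CPU-local, so we must not migrate while we use this CPU's
 * kmem_cache_cpu.
 */
static void *slab_alloc_fastpath(struct kmem_cache *s)
{
        struct kmem_cache_cpu *c;
        void **object;

        preempt_disable();
        c = get_cpu_slab(s, smp_processor_id());
        do {
                object = c->freelist;
                if (unlikely(!object)) {
                        preempt_enable();
                        return NULL;    /* real code falls back to the slow path */
                }
        } while (cmpxchg_local(&c->freelist, object,
                               object[c->offset]) != object);
        preempt_enable();
        return object;
}

With cpu_alloc the per-cpu lookup and the pinning could be folded 
together, which is why this work may read much better on top of that 
series.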


slub-comment-kmem_cache_cpu-structure.patch

Merge


I have sorted the patches and put them into a git tree on 
git.kernel.org.


Patches to be merged for 2.6.25:

git://git.kernel.org/pub/scm/linux/kernel/git/christoph/vm.git slub-2.6.25


Performance patches on hold for testing:

git://git.kernel.org/pub/scm/linux/kernel/git/christoph/vm.git performance


* Re: SLUB patches in mm
  2008-01-30  4:25 SLUB patches in mm Christoph Lameter
@ 2008-01-30 23:32 ` Andrew Morton
  2008-01-30 23:50   ` Christoph Lameter
  0 siblings, 1 reply; 6+ messages in thread
From: Andrew Morton @ 2008-01-30 23:32 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: linux-mm, penberg, matthew

On Tue, 29 Jan 2008 20:25:15 -0800 (PST)
Christoph Lameter <clameter@sgi.com> wrote:

> We still have not settled whether, and by how much, the performance 
> improvement patches help. The cycle measurements only go so far. I have 
> found some minor regressions and would like to hold most of the 
> performance patches for now. It seems that Intel has an environment in 
> which more detailed performance tests could be run on individual patches.
> 
> Some of them would also work much better with upcoming patchsets 
> (cpu_alloc, for example) and may not be needed at all if we go via 
> cpu_alloc first.
> 
> Most of the performance patches are only small-scale improvements 
> (0.5 - 2%). Tests like tbench typically run in a pretty unstable 
> environment (it seems that recompiling the kernel with some unrelated 
> patches can cause larger changes than these patches do), and I really 
> do not want to merge patches that needlessly complicate the allocator 
> or cause slight regressions.
> 
> 
> slub-move-count_partial.patch
> slub-rename-numa-defrag_ratio-to-remote_node_defrag_ratio.patch
> slub-consolidate-add_partial-and-add_partial_tail-to-one-function.patch
> 
> Merge (The consolidate-add-partial patch seems to improve speed by 1-2%. 
>        It was intended as a cleanup only, but it has a similar effect to 
>        the hackbench fix: it changes the handling of partial slabs 
>        slightly and allows slabs to gather more objects before being 
>        used for allocations again. From that I think we can conclude 
>        that work on the partial list handling could yield some 
>        performance gains.)
> 
> slub-use-non-atomic-bit-unlock.patch
> 
> Do not merge. Surprisingly, removing the atomic operation on unlock 
> seems to cause slight regressions in tbench. I guess it influences the 
> speed with which a cacheline drops out of the cpu caches. It does 
> improve performance if a single thread is running.
> 
> 
> slub-fix-coding-style-violations.patch
> slub-fix-coding-style-violations-checkpatch-fixes.patch
> 
> Merge (obviously)
> 
> 
> slub-noinline-some-functions-to-avoid-them-being-folded-into-alloc-free.patch
> slub-move-kmem_cache_node-determination-into-add_full-and-add_partial.patch
> 
> Do not merge
> 
> 
> slub-move-kmem_cache_node-determination-into-add_full-and-add_partial-slub-workaround-for-lockdep-confusion.patch
> 
> Merge (this is just a lockdep fix)
> 
> 
> slub-avoid-checking-for-a-valid-object-before-zeroing-on-the-fast-path.patch
> slub-__slab_alloc-exit-path-consolidation.patch
> slub-provide-unique-end-marker-for-each-slab.patch
> slub-provide-unique-end-marker-for-each-slab-fix.patch
> slub-avoid-referencing-kmem_cache-structure-in-__slab_alloc.patch
> slub-optional-fast-path-using-cmpxchg_local.patch
> slub-do-our-own-locking-via-slab_lock-and-slab_unlock.patch
> slub-do-our-own-locking-via-slab_lock-and-slab_unlock-checkpatch-fixes.patch
> slub-do-our-own-locking-via-slab_lock-and-slab_unlock-fix.patch
> slub-restructure-slab-alloc.patch
> 
> Do not merge. The cmpxchg_local work still requires preemption 
> disable/enable without cpu_alloc, and Intel's tests so far do not show 
> a convincing gain. The do-our-own-locking series also removes the 
> atomic unlock operation, causing trouble similar to 
> slub-use-non-atomic-bit-unlock.patch
> 
> 
> slub-comment-kmem_cache_cpu-structure.patch
> 
> Merge
> 
> 
> I have sorted the patches and put them into a git tree on 
> git.kernel.org.
> 
> 
> Patches to be merged for 2.6.25:
> 
> git://git.kernel.org/pub/scm/linux/kernel/git/christoph/vm.git slub-2.6.25
> 
> 
> Performance patches on hold for testing:
> 
> git://git.kernel.org/pub/scm/linux/kernel/git/christoph/vm.git performance

I'm inclined to just drop every patch which you've mentioned, let you merge
slub-2.6.25 into Linus's tree and then add git-slub.patch to the -mm
lineup.  OK?


* Re: SLUB patches in mm
  2008-01-30 23:32 ` Andrew Morton
@ 2008-01-30 23:50   ` Christoph Lameter
  2008-01-31  0:44     ` Andrew Morton
  0 siblings, 1 reply; 6+ messages in thread
From: Christoph Lameter @ 2008-01-30 23:50 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-mm, penberg, matthew

On Wed, 30 Jan 2008, Andrew Morton wrote:

> I'm inclined to just drop every patch which you've mentioned, let you merge
> slub-2.6.25 into Linus's tree and then add git-slub.patch to the -mm
> lineup.  OK?

Ok.


* Re: SLUB patches in mm
  2008-01-30 23:50   ` Christoph Lameter
@ 2008-01-31  0:44     ` Andrew Morton
  2008-02-05  6:27       ` Christoph Lameter
  0 siblings, 1 reply; 6+ messages in thread
From: Andrew Morton @ 2008-01-31  0:44 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: linux-mm, penberg, matthew

On Wed, 30 Jan 2008 15:50:08 -0800 (PST)
Christoph Lameter <clameter@sgi.com> wrote:

> On Wed, 30 Jan 2008, Andrew Morton wrote:
> 
> > I'm inclined to just drop every patch which you've mentioned, let you merge
> > slub-2.6.25 into Linus's tree and then add git-slub.patch to the -mm
> > lineup.  OK?
> 
> Ok.

The way I'll do this is to hang onto all the slub patches I have. 
Once those patches reappear in -mm (via you->mainline or via 
git-slub->mm), I'll drop them. That way I get to detect lost patches.

So please send me the git URL when it suits you.


* Re: SLUB patches in mm
  2008-01-31  0:44     ` Andrew Morton
@ 2008-02-05  6:27       ` Christoph Lameter
  2008-02-05  6:42         ` Andrew Morton
  0 siblings, 1 reply; 6+ messages in thread
From: Christoph Lameter @ 2008-02-05  6:27 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-mm, penberg, matthew

On Wed, 30 Jan 2008, Andrew Morton wrote:

> So please send me the git URL when it suits you.

Git URL / branch is:

git://git.kernel.org/pub/scm/linux/kernel/git/christoph/vm.git slub-mm

The current content is the basic cmpxchg framework that will be needed 
later for the cpu_alloc/cpu_ops work, plus the statistics code.
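
For context, the statistics side of the branch boils down to a small 
per-cpu array of event counters that the allocator bumps at interesting 
points and exposes through sysfs; roughly like the sketch below (item 
names abridged, the real enum is longer, and the stat[] array is 
assumed to sit in struct kmem_cache_cpu under CONFIG_SLUB_STATS):

/* Abridged sketch of the statistics mechanism. */
enum stat_item {
        ALLOC_FASTPATH,         /* allocation served from the cpu slab */
        ALLOC_SLOWPATH,         /* allocation had to refill the cpu slab */
        FREE_FASTPATH,          /* free went to the cpu slab */
        FREE_SLOWPATH,          /* free took the slow path */
        NR_SLUB_STAT_ITEMS
};

static inline void stat(struct kmem_cache_cpu *c, enum stat_item si)
{
#ifdef CONFIG_SLUB_STATS
        c->stat[si]++;          /* per-cpu counter, read out via sysfs */
#endif
}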


The following changes since commit 9ef9dc69d4167276c04590d67ee55de8380bc1ad:
  Linus Torvalds (1):
        Merge branch 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm

are available in the git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/christoph/vm.git slub-mm

Christoph Lameter (3):
      SLUB: Use unique end pointer for each slab page.
      SLUB: Alternate fast paths using cmpxchg_local
      SLUB: Support for statistics to help analyze allocator behavior

 Documentation/vm/slabinfo.c |  149 +++++++++++++++++++++++--
 arch/x86/Kconfig            |    4 +
 include/linux/mm_types.h    |    5 +-
 include/linux/slub_def.h    |   23 ++++
 lib/Kconfig.debug           |   11 ++
 mm/slub.c                   |  257 +++++++++++++++++++++++++++++++++++++------
 6 files changed, 405 insertions(+), 44 deletions(-)



* Re: SLUB patches in mm
  2008-02-05  6:27       ` Christoph Lameter
@ 2008-02-05  6:42         ` Andrew Morton
  0 siblings, 0 replies; 6+ messages in thread
From: Andrew Morton @ 2008-02-05  6:42 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: linux-mm, penberg, matthew

On Mon, 4 Feb 2008 22:27:47 -0800 (PST) Christoph Lameter <clameter@sgi.com> wrote:

> > So please send me the git URL when it suits you.
> 
> Git URL / branch is:
> 
> git://git.kernel.org/pub/scm/linux/kernel/git/christoph/vm.git slub-mm

added, thanks.

I discovered that I was still pulling
git+ssh://master.kernel.org/pub/scm/linux/kernel/git/christoph/slab.git.

No longer ;)

