public inbox for linux-omap@vger.kernel.org
From: epsi@gmx.de
To: Siarhei Siamashka <siarhei.siamashka@nokia.com>
Cc: linux-omap@vger.kernel.org
Subject: Re: Memory performance  / Cache problem
Date: Tue, 13 Oct 2009 11:13:33 +0200	[thread overview]
Message-ID: <20091013091333.141260@gmx.net> (raw)
In-Reply-To: <200910121135.15191.siarhei.siamashka@nokia.com>

The L2 cache is enabled and running.
I don't know whether it can be configured or misconfigured somehow.

I just checked the output of the 2.6.22 kernel and got these lines (which I don't have in newer kernels):

CPU0: D VIPT write-through cache
CPU0: cache: 768 bytes, associativity 1, 8 byte lines, 64 sets
Built 1 zonelists.  Total pages: 32512

I am wondering what this is. My first thought was the L1 cache, but it's too small.

The benchmark is running on the same hardware, same U-Boot, same rootfs; only the kernel is different.


> On Monday 12 October 2009 10:54:09 ext epsi@gmx.de wrote:
> > I found the memory performance of newer kernels to be quite poor on an
> > EVM-Omap3 board. The old 2.6.22 kernel from TI's PSP runs 2-6 times
> > faster.
> >
> > Possible reasons:
> > - a problem in the kernel config (I used omap3_evm_defconfig)
> > - problem in kernel
> > - the kernel expects some settings from U-Boot which are not made there
> >
> > I have tried the 2.6.29rc3 (from TI's PSP) and the 2.6.31 from git-tree.
> > Both behave quite similarly:
> >
> > Transport in MByte/s:
> >   memcpy =   204.073, loop4 =   183.212, loop1 =    81.693, rand =     4.534
> >
> > while the 2.6.22 kernel:
> >   memcpy =   453.932, loop4 =   469.934, loop1 =   125.031, rand =    29.631
> >
> > Can someone give me help or can at least confirm that?
> 
> The numbers from the 2.6.22 kernel look much better than anything I have
> ever seen with OMAP3.
> 
> How are you doing the benchmarking? Is the source buffer properly initialized?
> 
> The point is that if you just happen to allocate a large buffer without
> initializing it, all of its memory pages may end up referencing a single
> zero page in physical memory. In this case reading from the buffer will
> in fact be perfectly cached in the L1 cache, and memcpy will look fast.
> 
> If that is not the case, investigating how to boost memory performance
> in the latest kernels is certainly very interesting.
> 
> -- 
> Best regards,
> Siarhei Siamashka




Thread overview: 11+ messages
2009-10-12  7:54 Memory performance / Cache problem epsi
2009-10-12  8:09 ` Dasgupta, Romit
2009-10-12  8:35 ` Siarhei Siamashka
2009-10-13  9:13   ` epsi [this message]
  -- strict thread matches above, loose matches on Subject: below --
2009-10-12  8:38 epsi
2009-10-12  9:07 ` Dasgupta, Romit
2009-10-12  9:51   ` Dasgupta, Romit
2009-10-12  9:12 ` Premi, Sanjeev
2009-10-14 13:59 ` Woodruff, Richard
2009-10-14 14:48   ` epsi
2009-10-14 17:37     ` Siarhei Siamashka
2009-10-14 17:46       ` Woodruff, Richard
2009-10-15 10:20       ` epsi
