From: epsi@gmx.de
To: "Woodruff, Richard" <r-woodruff2@ti.com>,
premi@ti.com, linux-omap@vger.kernel.org
Subject: Re: RE: RE: RE: Memory performance / Cache problem
Date: Wed, 14 Oct 2009 19:23:14 +0200 [thread overview]
Message-ID: <20091014172314.130150@gmx.net> (raw)
In-Reply-To: <13B9B4C6EF24D648824FF11BE8967162039B235EFA@dlee02.ent.ti.com>
> > Mem clock is 166MHz in both cases. I don't know whether there are
> > differences in cycle access and timing, but the mem clock is fine.
>
> How did you physically verify this?
The scope shows 166MHz, and the kernel messages about the clock frequency are the same in both kernels.
> > Following Siarhei's hint of initializing the buffers (around 1.2 MByte
> > each), I get different results in the .22 kernel:
> > malloc alone:
> >   memcpy = 473.764, loop4 = 448.430, loop1 = 102.770, rand = 29.641
> > calloc alone:
> >   memcpy = 405.947, loop4 = 361.550, loop1 = 95.441, rand = 21.853
> > malloc+memset:
> >   memcpy = 239.294, loop4 = 188.617, loop1 = 80.871, rand = 4.726
> >
> > In the .31 kernel all three measurements are at about the same
> > (unfortunately low) level as malloc+memset in .22.
>
> Yes, aligned buffers can make a difference, but probably more so for small
> copies. Of course you must touch the memory or mprotect() it so it's
> faulted in, but indications are you have done this.
Hmm, alignment (to an address) is already done by malloc. You probably mean something different; I don't understand the distinction. To me, malloc+memset should be equivalent to calloc.
I'll send you the benchmark code, if you like.
> > I used a standard memcpy (I think this is glibc and hence not NEON-based)?
> > To be NEON-based I guess it has to be recompiled?
>
> The version of glibc in use can make a difference. CodeSourcery's 2009
> release added PLDs to mem operations, which can give a good benefit. It
> might be that you have an optimized library in one case and a
> non-optimized one in the other.
In both kernels I used the same rootfs (via NFS). Indeed I used CS2009q1 and its libs, but we are talking about a factor of 2..6. This must be something more serious.
What is your feeling? Is the .22 kernel doing something strange, or are the newer kernels just slower than they have to be?
It would be interesting to see results on other OMAP3 boards with both old and new kernels.
Best regards
Steffen
Thread overview: 13+ messages
2009-10-12 8:38 Memory performance / Cache problem epsi
2009-10-12 9:07 ` Dasgupta, Romit
2009-10-12 9:51 ` Dasgupta, Romit
2009-10-12 9:12 ` Premi, Sanjeev
2009-10-13 8:16 ` epsi
2009-10-14 13:59 ` Woodruff, Richard
2009-10-14 14:48 ` epsi
2009-10-14 15:25 ` Woodruff, Richard
2009-10-14 17:23 ` epsi [this message]
2009-10-14 17:36 ` Woodruff, Richard
2009-10-14 17:37 ` Siarhei Siamashka
2009-10-14 17:46 ` Woodruff, Richard
2009-10-15 10:20 ` epsi