From: Misbah khan <misbah_khan@engineer.com>
To: linuxppc-embedded@ozlabs.org
Subject: Re: floating point support in the driver.
Date: Tue, 5 Aug 2008 02:49:17 -0700 (PDT)
Message-ID: <18827857.post@talk.nabble.com>
In-Reply-To: <48969805.40904@ovro.caltech.edu>
Hi David,
Thank you for your reply.
I am running the algorithm on an OMAP processor (ARM core), and I tried the
same on an i.MX processor, where it takes 1.7 times longer than on the OMAP.
It is true that the algorithm performs vector operations that are blowing
out the cache.
But the question is: how do we lock the cache, and how should we implement
that in the driver?
Example code or a document would be helpful in this regard.
--- Misbah <><
David Hawkins-3 wrote:
>
>
> Hi Misbah,
>
> I would recommend you look at your floating-point code again
> and benchmark each section. You should be able to estimate
> the number of clock cycles required to complete an operation
> and then check that against your measurements.
>
> Depending on whether your algorithm is processing intensive
> or data movement intensive, you may find that the big time
> waster is moving data on or off chip, or perhaps it's a large
> vector operation that is blowing out the cache. If you
> do find that, then on some processors you can lock the
> cache, so your algorithm would require a custom driver
> that steals part of the cache from the OS, but the floating point
> code would not run in the kernel; it would run on data
> stored in the stolen cache area. You can lock both instructions
> and data in the cache; e.g. an FFT routine can be locked in
> the instruction cache while FFT data sits in the data cache.
> I'm not sure how easy this is to do under Linux though.
>
> Here's an example of the level of detail you can get
> down to when benchmarking code:
>
> http://www.ovro.caltech.edu/~dwh/correlator/pdf/dsp_programming.pdf
>
> The FFT routine used on this processor made use of both
> the instruction and data cache (on-chip SRAM) on the
> DSP.
>
> This code is being re-developed to run on an MPC8349EA PowerPC
> with FPU. I did some initial testing to confirm that the
> FPU operates as per the data sheet, and will eventually get
> around to more complete testing.
>
> Which processor were you running your code on, and what
> frequency were you operating the processor at? How does
> the algorithm timing compare when run on other processors,
> e.g. your desktop or laptop machine?
>
> Cheers,
> Dave
> _______________________________________________
> Linuxppc-embedded mailing list
> Linuxppc-embedded@ozlabs.org
> https://ozlabs.org/mailman/listinfo/linuxppc-embedded
>
>
--
View this message in context: http://www.nabble.com/floating-point-support-in-the-driver.-tp18772109p18827857.html
Sent from the linuxppc-embedded mailing list archive at Nabble.com.
Thread overview: 9+ messages
2008-08-01 10:57 floating point support in the driver Misbah khan
2008-08-01 11:32 ` Laurent Pinchart
2008-08-01 12:00 ` Misbah khan
2008-08-01 15:54 ` M. Warner Losh
2008-08-04 5:23 ` Misbah khan
2008-08-04 5:33 ` M. Warner Losh
2008-08-04 5:47 ` David Hawkins
2008-08-05 9:49 ` Misbah khan [this message]
2008-08-05 16:53 ` David Hawkins