public inbox for linux-omap@vger.kernel.org
* Benchmarking: POP flash vs. MMC?
@ 2009-04-03 21:24 david.hagood
  2009-04-04  0:57 ` Russ Dill
  0 siblings, 1 reply; 15+ messages in thread
From: david.hagood @ 2009-04-03 21:24 UTC (permalink / raw)
  To: linux-omap

Has anybody done any benchmarking on POP flash read/write speeds (using
JFFS2) vs. MMC/SDHC card access (using ext[234])? Or, for that matter, has
anybody played around with the execute-in-place (XIP) function
of JFFS2 to see whether there is any overall speed advantage (due to reduced
pressure on RAM) vs. loading into RAM?



^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: Benchmarking: POP flash vs. MMC?
  2009-04-03 21:24 Benchmarking: POP flash vs. MMC? david.hagood
@ 2009-04-04  0:57 ` Russ Dill
  2009-04-04  2:52   ` David Hagood
  0 siblings, 1 reply; 15+ messages in thread
From: Russ Dill @ 2009-04-04  0:57 UTC (permalink / raw)
  To: david.hagood; +Cc: linux-omap

On Fri, Apr 3, 2009 at 2:24 PM,  <david.hagood@gmail.com> wrote:
> Has anybody done any benchmarking on POP flash read/write speeds (using
> JFFS2) vs. MMC/SDHC card access (using ext[234])? Or for that matter, has
> anybody done any playing around with using the execute in place function
> of JFFS2 to see if there is any over-all speed advantage (due to reduced
> pressure on the RAM) vs. loading into RAM?
>

UBIFS on POP NAND
Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
beagleboard    300M  2333  86 19899  45 10420  59  2289  95 17719  93  2060  93
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  5310  94  7931  96  4439  87  4734  85 +++++ +++  3082  95
beagleboard,300M,2333,86,19899,45,10420,59,2289,95,17719,93,2060.1,93,16,5310,94,7931,96,4439,87,4734,85,+++++,+++,3082,95

ext3 (noatime,async,data=ordered) on MMC (4GB class 6)
Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
beagleboard    300M  2026  81  4553  11  4281   6  2502  92 16079  14  1672  23
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  7135  95 +++++ +++ 12173  95  5040  67 +++++ +++ 11766  96
beagleboard,300M,2026,81,4553,11,4281,6,2502,92,16079,14,1672.3,23,16,7135,95,+++++,+++,12173,95,5040,67,+++++,+++,11766,96

ext3 (noatime,async,data=writeback) on MMC (4GB class 6)
Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
beagleboard    300M  2343  85 19370  40 10937  61  2312  95 19176  94  3122  93
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  5194  93  7863  95  4511  87  4970  89 +++++ +++  3076  95
beagleboard,300M,2343,85,19370,40,10937,61,2312,95,19176,94,3122.3,93,16,5194,93,7863,95,4511,87,4970,89,+++++,+++,3076,95
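
The trailing comma-separated line after each run is the benchmark's machine-readable summary (the headers match bonnie++ 1.03c output). Field 5 is sequential block-write throughput in K/sec and field 11 is sequential block-read, so a quick sketch comparing the UBIFS and ext3 (data=ordered) runs:

```shell
# bonnie++ CSV summaries copied from the runs above.
ubifs="beagleboard,300M,2333,86,19899,45,10420,59,2289,95,17719,93,2060.1,93,16,5310,94,7931,96,4439,87,4734,85,+++++,+++,3082,95"
ext3="beagleboard,300M,2026,81,4553,11,4281,6,2502,92,16079,14,1672.3,23,16,7135,95,+++++,+++,12173,95,5040,67,+++++,+++,11766,96"

# Field 5 = sequential block write K/sec, field 11 = sequential block read K/sec.
echo "$ubifs" | awk -F, '{ printf "UBIFS on NAND:         write %5d K/s, read %5d K/s\n", $5, $11 }'
echo "$ext3"  | awk -F, '{ printf "ext3 (ordered) on MMC: write %5d K/s, read %5d K/s\n", $5, $11 }'
```

By this summary the block-write gap is large while block reads are close.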

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: Benchmarking: POP flash vs. MMC?
  2009-04-04  0:57 ` Russ Dill
@ 2009-04-04  2:52   ` David Hagood
  2009-04-04  4:16     ` Russ Dill
  2009-04-06  7:41     ` Artem Bityutskiy
  0 siblings, 2 replies; 15+ messages in thread
From: David Hagood @ 2009-04-04  2:52 UTC (permalink / raw)
  To: Russ Dill; +Cc: linux-omap

Well, that's not what I would have expected. I would have thought reads on POP would have been faster than that, and cheaper; the SD being the same speed but using less CPU is surprising.


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: Benchmarking: POP flash vs. MMC?
  2009-04-04  2:52   ` David Hagood
@ 2009-04-04  4:16     ` Russ Dill
  2009-04-05 22:53       ` David Brownell
  2009-04-06  7:41     ` Artem Bityutskiy
  1 sibling, 1 reply; 15+ messages in thread
From: Russ Dill @ 2009-04-04  4:16 UTC (permalink / raw)
  To: David Hagood; +Cc: linux-omap

On Fri, Apr 3, 2009 at 7:52 PM, David Hagood <david.hagood@gmail.com> wrote:
> Well, that's not what I would have expected - I would have thought reads on POP would have been faster than that, and cheaper - the SD being the same speed but less CPU is surprising.

The POP flash may not have DMA capability. Also, the POP flash contents
are compressed and ECC-corrected, so it's a lot of extra work for the
CPU.

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: Benchmarking: POP flash vs. MMC?
  2009-04-04  4:16     ` Russ Dill
@ 2009-04-05 22:53       ` David Brownell
  0 siblings, 0 replies; 15+ messages in thread
From: David Brownell @ 2009-04-05 22:53 UTC (permalink / raw)
  To: Russ Dill; +Cc: David Hagood, linux-omap

On Friday 03 April 2009, Russ Dill wrote:
> On Fri, Apr 3, 2009 at 7:52 PM, David Hagood <david.hagood@gmail.com> wrote:
> > Well, that's not what I would have expected - I would have thought
> > reads on POP would have been faster than that, and cheaper - the SD
> > being the same speed but less CPU is surprising.  
> 
> The POP flash may not have a DMA ability.

ISTR it does, but it's not used.  There are other speedups possible;
the NAND driver in the OMAP tree is pretty simplistic.  It doesn't
use the "Prefetch and Write-Posting Engine" (we don't know whether that
really helps, though) or the more aggressive ECC support (4-bit and
8-bit modes both work).

Of course, using DMA in the MTD stack can be a bit problematic in
general: the stack assumes that DMA is not used ... to the extent of
not guaranteeing to provide DMA-mappable I/O buffers.

That said, it's hard to achieve speedups through DMA for such small
blocks of memory ... an issue which is not unique to NAND.
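
The small-transfer problem can be put in rough numbers. A back-of-the-envelope sketch (the setup cost and bandwidths are illustrative assumptions, not measurements): DMA pays a fixed setup/completion cost per transfer, PIO does not, so there is a break-even size below which PIO wins.

```shell
# Solve: setup + size/bw_dma = size/bw_pio  for the break-even size.
awk 'BEGIN {
  setup_s = 50e-6          # assumed DMA setup+completion cost, seconds
  bw_dma  = 100e6          # assumed DMA bandwidth, bytes/s
  bw_pio  = 40e6           # assumed PIO bandwidth, bytes/s
  breakeven = setup_s / (1.0/bw_pio - 1.0/bw_dma)
  printf "break-even transfer size: %.0f bytes\n", breakeven
}'
```

With these assumed numbers the break-even is around 3333 bytes, so a 2K NAND page sits below it; different costs move the threshold, but the shape of the argument is the same.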


> Also, the POP flash contents 
> are compressed and ECC-corrected, so it's a lot of extra work for the
> CPU.

ECC is done in hardware for the native NAND stack on OMAP;
but you're right about compression.

Another point is that "managed NAND" (like MMC and eMMC, and
various SSD flavors) is there to offload some complicated
and error-prone tasks like wear-leveling.



^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: Benchmarking: POP flash vs. MMC?
  2009-04-04  2:52   ` David Hagood
  2009-04-04  4:16     ` Russ Dill
@ 2009-04-06  7:41     ` Artem Bityutskiy
  2009-04-06  8:13       ` Russ Dill
  1 sibling, 1 reply; 15+ messages in thread
From: Artem Bityutskiy @ 2009-04-06  7:41 UTC (permalink / raw)
  To: David Hagood; +Cc: Russ Dill, linux-omap

David Hagood wrote:
> Well, that's not what I would have expected - I would have thought
> reads on POP would have been faster than that, and cheaper - the SD
> being the same speed but less CPU is surprising.

1. As Russ and David said, the OneNAND driver does not really
use DMA, because the I/O is done in 2K chunks, and that is
too small a piece of data for DMA to pay off.

2. UBIFS also compresses data on the fly. You may try
disabling it and see what changes, though probably not
much, because of the way the driver writes (no DMA).
Try mounting with the 'compr=none' option, see here:

http://www.linux-mtd.infradead.org/doc/ubifs.html#L_mountopts

BTW, some compression testing results may be found here:

http://www.linux-mtd.infradead.org/misc/misc.html#L_ubifs_compr

although they are not 100% relevant for this case.

3. UBIFS provides greater data reliability. E.g., it
CRCs all data (see here:
http://www.linux-mtd.infradead.org/doc/ubifs.html#L_checksumming).
OneNAND was very reliable last time we tested it, so we
disabled _some_ CRC checking for it. Try the
'no_chk_data_crc' option to get better read speed.

4. UBIFS has a 'bulk read' feature which works well on OneNAND
(see here: http://www.linux-mtd.infradead.org/doc/ubifs.html#L_readahead).
Try enabling it as well; you should end up with faster
read speed.

5. Last but not least, UBIFS+OneNAND provides another
level of reliability compared to SD. In general, SD cards are
not very good for storing system libraries, etc. I tried to
summarize this at some point here:

http://www.linux-mtd.infradead.org/doc/ubifs.html#L_raw_vs_ftl
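
Points 2-4 above combine into a single set of UBIFS mount options. A sketch (the volume name and mountpoint are hypothetical, and the command is only echoed here, since a real mount needs root and a UBI device):

```shell
# Options from points 2-4: disable compression, skip data-CRC checks on
# read, and enable bulk read. Echoed rather than executed (needs root).
UBIFS_OPTS="compr=none,no_chk_data_crc,bulk_read"
echo mount -t ubifs -o "$UBIFS_OPTS" ubi0:rootfs /mnt/flash
```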

HTH.

-- 
Best Regards,
Artem Bityutskiy (Артём Битюцкий)
--
To unsubscribe from this list: send the line "unsubscribe linux-omap" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: Benchmarking: POP flash vs. MMC?
  2009-04-06  7:41     ` Artem Bityutskiy
@ 2009-04-06  8:13       ` Russ Dill
  2009-04-06  9:07         ` Artem Bityutskiy
  2009-04-06  9:59         ` David Brownell
  0 siblings, 2 replies; 15+ messages in thread
From: Russ Dill @ 2009-04-06  8:13 UTC (permalink / raw)
  To: Artem Bityutskiy; +Cc: David Hagood, linux-omap

On Mon, Apr 6, 2009 at 12:41 AM, Artem Bityutskiy <dedekind@yandex.ru> wrote:
> David Hagood wrote:
>>
>> Well, that's not what I would have expected - I would have thought
>> reads on POP would have been faster than that, and cheaper - the SD
>> being the same speed but less CPU is surprising.
>
> 1. As Russ and David said, OneNAND driver does not really
> use DMA, because the I/O is done in 2K chunks, and this
> is just too small piece of data for DMA.

All very relevant, but just to avoid confusion: the tests were
performed on a Beagleboard with 256MB of Micron NAND.

http://www.micron.com/products/partdetail?part=MT29C2G24MAKLAJG-6 IT

Also, it appears from looking at the openzoom git that there are some
patches to add DMA support, but I'm not sure what effect they have.

> 2. UBIFS also compresses data on-the-flight. You may try
> disabling it and see what changes, but probably not
> too much, because of the way the driver writes (no DMA).
> Try mounting with the 'compr=none' option, see here:
>
> http://www.linux-mtd.infradead.org/doc/ubifs.html#L_mountopts
>
> BTW, some compression testing results may be found here:
>
> http://www.linux-mtd.infradead.org/misc/misc.html#L_ubifs_compr
>
> although they are not 100% relevant for this case.
>
> 3. UBIFS provides you greater data reliability. E.g., it
> CRCs all data (see here
> http://www.linux-mtd.infradead.org/doc/ubifs.html#L_checksumming)
> OneNAND was very reliable last time we tested it, and we
> disable _some_ CRC checking for it. Try to use the
> 'no_chk_data_crc' and get better read speed.
>
> 4. UBIFS has 'bulk read' feature which works well on OneNAND,
> (see here: http://www.linux-mtd.infradead.org/doc/ubifs.html#L_readahead)
> Try to enable it as well. You should end up with faster
> read speed.
>
> 5. Last but not least, UBIFS+OneNAND provides just another
> level of reliability, comparing to SD. In general, SDs are
> not very good for storing system libraries, etc. I tried to
> summarize this at some point here:
>
> http://www.linux-mtd.infradead.org/doc/ubifs.html#L_raw_vs_ftl
>
> HTH.
>
> --
> Best Regards,
> Artem Bityutskiy (Артём Битюцкий)
>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: Benchmarking: POP flash vs. MMC?
  2009-04-06  8:13       ` Russ Dill
@ 2009-04-06  9:07         ` Artem Bityutskiy
  2009-04-06  9:59         ` David Brownell
  1 sibling, 0 replies; 15+ messages in thread
From: Artem Bityutskiy @ 2009-04-06  9:07 UTC (permalink / raw)
  To: Russ Dill; +Cc: David Hagood, linux-omap

Russ Dill wrote:
> On Mon, Apr 6, 2009 at 12:41 AM, Artem Bityutskiy <dedekind@yandex.ru> wrote:
>> David Hagood wrote:
>>> Well, that's not what I would have expected - I would have thought
>>> reads on POP would have been faster than that, and cheaper - the SD
>>> being the same speed but less CPU is surprising.
>> 1. As Russ and David said, OneNAND driver does not really
>> use DMA, because the I/O is done in 2K chunks, and this
>> is just too small piece of data for DMA.
> 
> All very relevant, but just to avoid confusion; the tests were
> performed on a Beagleboard with 256MB of Micron NAND.
 
Ok, sorry. The CRC part should be relevant.
The bulk_read part may also be relevant if your HW is
able to read multiple NAND pages faster than reading
them one-by-one. The compression links should be
helpful for compressor selection.

-- 
Best Regards,
Artem Bityutskiy (Артём Битюцкий)

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: Benchmarking: POP flash vs. MMC?
  2009-04-06  8:13       ` Russ Dill
  2009-04-06  9:07         ` Artem Bityutskiy
@ 2009-04-06  9:59         ` David Brownell
  2009-04-06 10:08           ` Paul Walmsley
  1 sibling, 1 reply; 15+ messages in thread
From: David Brownell @ 2009-04-06  9:59 UTC (permalink / raw)
  To: Russ Dill; +Cc: Artem Bityutskiy, David Hagood, linux-omap

On Monday 06 April 2009, Russ Dill wrote:
> Also, it appears from looking an the openzoom git, there are some
> patches to add DMA support in, but I'm not sure what effect they have.

We had asked for some benchmark data -- anything! -- to get a
handle on that and on the prefetch engine; nothing has been
forthcoming so far.


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: Benchmarking: POP flash vs. MMC?
  2009-04-06  9:59         ` David Brownell
@ 2009-04-06 10:08           ` Paul Walmsley
  2009-04-06 10:23             ` David Brownell
  2009-04-07 12:09             ` Woodruff, Richard
  0 siblings, 2 replies; 15+ messages in thread
From: Paul Walmsley @ 2009-04-06 10:08 UTC (permalink / raw)
  To: David Brownell; +Cc: Russ Dill, Artem Bityutskiy, David Hagood, linux-omap

Hi David

On Mon, 6 Apr 2009, David Brownell wrote:

> On Monday 06 April 2009, Russ Dill wrote:
> > Also, it appears from looking an the openzoom git, there are some
> > patches to add DMA support in, but I'm not sure what effect they have.
> 
> We had asked for some benchmark data -- anything! -- to get a
> handle on that, and the prefetch/etc engine; nothing forthcoming,
> so far.

We'd also have to make sure that the comparison is between the linux-omap 
kernel and the OMAPZoom kernel, rather than o-z PIO vs. o-z DMA.  The 
OMAPZoom kernel doesn't post any device register writes.  That should 
cause any driver using PIO to drag, compared to the l-o kernel.


- Paul

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: Benchmarking: POP flash vs. MMC?
  2009-04-06 10:08           ` Paul Walmsley
@ 2009-04-06 10:23             ` David Brownell
  2009-04-06 10:34               ` Paul Walmsley
  2009-04-07 12:09             ` Woodruff, Richard
  1 sibling, 1 reply; 15+ messages in thread
From: David Brownell @ 2009-04-06 10:23 UTC (permalink / raw)
  To: Paul Walmsley; +Cc: Russ Dill, Artem Bityutskiy, David Hagood, linux-omap

On Monday 06 April 2009, Paul Walmsley wrote:
> Hi David
> 
> On Mon, 6 Apr 2009, David Brownell wrote:
> 
> > On Monday 06 April 2009, Russ Dill wrote:
> > > Also, it appears from looking an the openzoom git, there are some
> > > patches to add DMA support in, but I'm not sure what effect they have.
> > 
> > We had asked for some benchmark data -- anything! -- to get a
> > handle on that, and the prefetch/etc engine; nothing forthcoming,
> > so far.
> 
> We'd also have to make sure that the comparison is between the linux-omap 
> kernel and the OMAPZoo kernel, rather than o-z PIO vs. o-z DMA.  The 
> OMAPZoom kernel doesn't post any device register writes.  That should 
> cause any driver using PIO to drag, compared to the l-o kernel.

I'd think the first thing to check is two linux-omap PIO flavors:
with vs without the prefetch/postwrite engine.  Benchmarking DMA
would be a separate issue.  Ditto comparing kernels with/without
write posting.

I've benchmarked small DMA transfers, and it's rather hard to get
them faster than io{read,write}32_rep().  The overhead of DMA mapping
and of DMA setup/teardown/completion is annoyingly high, while the
bus overheads aren't large.

Puzzle:  get a dma_copypage() to work faster than copy_page().
Or a dma_clear_page() faster than clear_page().  Not easy...

- Dave

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: Benchmarking: POP flash vs. MMC?
  2009-04-06 10:23             ` David Brownell
@ 2009-04-06 10:34               ` Paul Walmsley
  2009-04-06 10:48                 ` David Brownell
  0 siblings, 1 reply; 15+ messages in thread
From: Paul Walmsley @ 2009-04-06 10:34 UTC (permalink / raw)
  To: David Brownell; +Cc: Russ Dill, Artem Bityutskiy, David Hagood, linux-omap

On Mon, 6 Apr 2009, David Brownell wrote:

> I'd think the first thing to check is two linux-omap PIO flavors:
> with vs without the prefetch/postwrite engine.  Benchmarking DMA
> would be a separate issue.  Ditto comparing kernels with/without
> write posting.

Makes sense to me.  FIFO-less GPMC PIO should be full of fail.

> Puzzle:  get a dma_copypage() to work faster than copy_page().
> Or a dma_clear_page() faster than clear_page().  Not easy...

Doing it via the DMA engine may save power, since MPU can sleep.


- Paul

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: Benchmarking: POP flash vs. MMC?
  2009-04-06 10:34               ` Paul Walmsley
@ 2009-04-06 10:48                 ` David Brownell
  2009-04-07 17:22                   ` Tony Lindgren
  0 siblings, 1 reply; 15+ messages in thread
From: David Brownell @ 2009-04-06 10:48 UTC (permalink / raw)
  To: Paul Walmsley; +Cc: Russ Dill, Artem Bityutskiy, David Hagood, linux-omap

On Monday 06 April 2009, Paul Walmsley wrote:
> > Puzzle:  get a dma_copypage() to work faster than copy_page().
> > Or a dma_clear_page() faster than clear_page().  Not easy...
> 
> Doing it via the DMA engine may save power, since MPU can sleep.

But the CPU overhead of calling the DMA engine can exceed
that of the memcpy()/memset() ... ;)

Another concern is cache impact.  In some cases, having the
dirty data in dcache is a big win.  With DMA, the cache will
have been purged.

It'd be nice to see DMA versions of this stuff winning;
all I'm saying is that such wins are hard to achieve.

- Dave




^ permalink raw reply	[flat|nested] 15+ messages in thread

* RE: Benchmarking: POP flash vs. MMC?
  2009-04-06 10:08           ` Paul Walmsley
  2009-04-06 10:23             ` David Brownell
@ 2009-04-07 12:09             ` Woodruff, Richard
  1 sibling, 0 replies; 15+ messages in thread
From: Woodruff, Richard @ 2009-04-07 12:09 UTC (permalink / raw)
  To: Paul Walmsley, David Brownell
  Cc: Russ Dill, Artem Bityutskiy, David Hagood,
	linux-omap@vger.kernel.org

> We'd also have to make sure that the comparison is between the linux-omap
> kernel and the OMAPZoo kernel, rather than o-z PIO vs. o-z DMA.  The
> OMAPZoom kernel doesn't post any device register writes.  That should
> cause any driver using PIO to drag, compared to the l-o kernel.

There are effectively a few levels of posting. The patch stops _ARM_ PIO writes to peripheral control registers from hanging out in the interconnect; other levels are still there. The big winner on the interconnect from the ARM side is DDR operations, which still get posted at all levels. The buffering in the interconnect is only a couple of cache lines in size, so it fills up pretty fast. DMA and other initiators are not impacted, as they don't use ARM MMU attributes.

Really, if you think about it, "big" block writes may not be impacted either, as you will back up at the slower device's speed. Cache is a familiar example: if you write out 100M quickly, you will very quickly reach the point where you are bottlenecked on main memory speed (every write is a miss or cast-out at some point). The fact that you have a cache in the way doesn't matter.

It can make some difference if you're intermixing small PIOs with other work. In general benchmarks I've yet to see the hit on the system stand out against all the bigger noise. I could probably construct a case where ~10% is lost.

I recall some tests which had DMA + prefetch working on NAND.  I'll see if I can dig them up.

I was at several meetings a year back where different memory vendors came in and showed that with some tweaks they could get 2x the linux-omap flash performance. Then if you also take their device-optimized file system you get something like 5x. Hopefully at least the in-tree 2x gap is gone now that it's a year later.

Regards,
Richard W.


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: Benchmarking: POP flash vs. MMC?
  2009-04-06 10:48                 ` David Brownell
@ 2009-04-07 17:22                   ` Tony Lindgren
  0 siblings, 0 replies; 15+ messages in thread
From: Tony Lindgren @ 2009-04-07 17:22 UTC (permalink / raw)
  To: David Brownell
  Cc: Paul Walmsley, Russ Dill, Artem Bityutskiy, David Hagood,
	linux-omap

* David Brownell <david-b@pacbell.net> [090406 03:48]:
> On Monday 06 April 2009, Paul Walmsley wrote:
> > > Puzzle:  get a dma_copypage() to work faster than copy_page().
> > > Or a dma_clear_page() faster than clear_page().  Not easy...
> > 
> > Doing it via the DMA engine may save power, since MPU can sleep.
> 
> But the CPU overhead of calling the DMA engine can exceed
> that of the memcpy()/memset() ... ;)
> 
> Another concern is cache impact.  In some cases, having the
> dirty data in dcache is a big win.  With DMA, the cache will
> have been purged.
> 
> It'd be nice to see DMA versions of this stuff winning;
> all I'm saying is that such wins are hard to achieve.

If the DMA transfer can loop over the small FIFO and transfer
larger chunks of data, the setup overhead is small. Currently
at least tusb and ASoC loop the DMA over the FIFO.

This way you can let the hardware trigger huge reads or
writes through the small FIFO. To loop over the FIFO, set
dst_fi or src_fi to a _negative_ number.

Maybe OneNAND could use this too? If you can tell OneNAND to
read or write multiple FIFO loads, and there is a hardware DMA
request line, the DMA request line can tell OneNAND to reload
the FIFO.
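
Tony's reload trick can be modelled the same rough way (all numbers are illustrative assumptions): amortize one DMA setup across N FIFO reloads instead of paying it for every 2K page.

```shell
# Assumed: 50 us setup per DMA transfer, 100 bytes/us transfer rate.
# Effective time per 2K page as one setup is amortized over N reloads.
awk 'BEGIN {
  setup_us = 50
  page_us  = 2048 / 100
  for (n = 1; n <= 64; n *= 8)
    printf "reloads=%2d  us/page=%.1f\n", n, setup_us / n + page_us
}'
```

Even a modest number of reloads pushes the per-page cost close to the raw transfer time.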

Tony

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2009-04-07 17:22 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-04-03 21:24 Benchmarking: POP flash vs. MMC? david.hagood
2009-04-04  0:57 ` Russ Dill
2009-04-04  2:52   ` David Hagood
2009-04-04  4:16     ` Russ Dill
2009-04-05 22:53       ` David Brownell
2009-04-06  7:41     ` Artem Bityutskiy
2009-04-06  8:13       ` Russ Dill
2009-04-06  9:07         ` Artem Bityutskiy
2009-04-06  9:59         ` David Brownell
2009-04-06 10:08           ` Paul Walmsley
2009-04-06 10:23             ` David Brownell
2009-04-06 10:34               ` Paul Walmsley
2009-04-06 10:48                 ` David Brownell
2009-04-07 17:22                   ` Tony Lindgren
2009-04-07 12:09             ` Woodruff, Richard

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox