public inbox for u-boot@lists.denx.de
From: Patrick Wildt <patrick@blueri.se>
To: u-boot@lists.denx.de
Subject: [U-Boot] [PATCH] nvme: add more cache flushes
Date: Thu, 17 Oct 2019 08:44:00 +0200	[thread overview]
Message-ID: <20191017064400.GA33984@nox.fritz.box> (raw)
In-Reply-To: <CAEUhbmULEN28dJ0MmAqY3rr3_14OStrcyzS-z6J1+f7eSdsAhw@mail.gmail.com>

On Thu, Oct 17, 2019 at 10:55:11AM +0800, Bin Meng wrote:
> Hi Patrick,
> 
> On Wed, Oct 16, 2019 at 11:35 PM Patrick Wildt <patrick@blueri.se> wrote:
> >
> > On Wed, Oct 16, 2019 at 06:11:23PM +0800, Bin Meng wrote:
> > > On Mon, Oct 14, 2019 at 7:11 PM Patrick Wildt <patrick@blueri.se> wrote:
> > > >
> > > > On an i.MX8MQ our nvme driver doesn't completely work since we are
> > > > missing a few cache flushes.  One is the prp list, which is an extra
> > > > buffer that we need to flush before handing it to the hardware.  Also
> > > > the block read/write operations need more cache flushes on this SoC.
> > > >
> > > > Signed-off-by: Patrick Wildt <patrick@blueri.se>
> > > > ---
> > > >  drivers/nvme/nvme.c | 15 +++++++++------
> > > >  1 file changed, 9 insertions(+), 6 deletions(-)
> > > >
> > > > diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c
> > > > index 2444e0270f..69d5e3eedc 100644
> > > > --- a/drivers/nvme/nvme.c
> > > > +++ b/drivers/nvme/nvme.c
> > > > @@ -123,6 +123,9 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
> > > >         }
> > > >         *prp2 = (ulong)dev->prp_pool;
> > > >
> > > > +       flush_dcache_range((ulong)dev->prp_pool, (ulong)dev->prp_pool +
> > > > +                          dev->prp_entry_num * sizeof(u64));
> > > > +
> > > >         return 0;
> > > >  }
> > > >
> > > > @@ -717,9 +720,10 @@ static ulong nvme_blk_rw(struct udevice *udev, lbaint_t blknr,
> > > >         u16 lbas = 1 << (dev->max_transfer_shift - ns->lba_shift);
> > > >         u64 total_lbas = blkcnt;
> > > >
> > > > -       if (!read)
> > > > -               flush_dcache_range((unsigned long)buffer,
> > > > -                                  (unsigned long)buffer + total_len);
> > > > +       flush_dcache_range((unsigned long)buffer,
> > > > +                          (unsigned long)buffer + total_len);
> > >
> > > Why we need this for read?
> >
> > I'm no processor engineer, but I have read (and "seen") the following
> > on ARMs.  If I'm wrong, please correct me.
> >
> > Since the buffer is usually allocated cache-aligned on the stack,
> > it is very possible that this buffer was previously still used
> > as it's supposed to be used: as stack.  Thus, the caches can still
> > be filled, and are not yet evicted/flushed to memory.  Now it is
> > possible that between the DMA access from the device and our cache
> > invalidation, the CPU cache writes back its contents to memory,
> > overwriting whatever the NVMe just gave us.
> 
> OK, this makes sense. So if we allocate the buffer from the heap, we
> should only care about flush on write, right? Can you test this?

If you're talking about having a bounce buffer:  You'd need to flush
it once upon allocation, because that part of the heap could also have
been used earlier by someone else.
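To make the pattern concrete, here is a minimal sketch of the sequence a
driver follows around a DMA read into a cacheable buffer.  This is not
U-Boot code: flush_range(), invalidate_range() and start_dma() are
stand-ins that only record the call order, where a real driver would call
U-Boot's flush_dcache_range()/invalidate_dcache_range() and kick the
controller.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Stand-ins for the real cache maintenance ops; they just log the
 * order of operations so the sequence can be checked. */
static char op_log[64];

static void flush_range(uintptr_t start, uintptr_t end)
{
	(void)start; (void)end;
	strcat(op_log, "flush;");
}

static void invalidate_range(uintptr_t start, uintptr_t end)
{
	(void)start; (void)end;
	strcat(op_log, "inval;");
}

/* Stand-in for submitting the command and waiting for completion. */
static void start_dma(void)
{
	strcat(op_log, "dma;");
}

/*
 * DMA read into 'buffer': flush first, so no dirty cache line (e.g.
 * from earlier stack use of the same memory) is written back over the
 * device's data mid-transfer; then invalidate after completion, so no
 * stale or speculatively filled line masks what the device wrote.
 */
static void dma_read(void *buffer, size_t len)
{
	uintptr_t start = (uintptr_t)buffer;

	flush_range(start, start + len);
	start_dma();
	invalidate_range(start, start + len);
}
```

For a DMA write (host to device) only the flush before the transfer is
needed, which matches the question raised about the second hunk above.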

Best regards,
Patrick

> >
> > > > +       invalidate_dcache_range((unsigned long)buffer,
> > > > +                               (unsigned long)buffer + total_len);
> > > >
> > > >         c.rw.opcode = read ? nvme_cmd_read : nvme_cmd_write;
> > > >         c.rw.flags = 0;
> > > > @@ -755,9 +759,8 @@ static ulong nvme_blk_rw(struct udevice *udev, lbaint_t blknr,
> > > >                 buffer += lbas << ns->lba_shift;
> > > >         }
> > > >
> > > > -       if (read)
> > > > -               invalidate_dcache_range((unsigned long)buffer,
> > > > -                                       (unsigned long)buffer + total_len);
> > > > +       invalidate_dcache_range((unsigned long)buffer,
> > > > +                               (unsigned long)buffer + total_len);
> > >
> > > Why we need this for write?
> >
> > That's a good point.  After the transaction, if it was a read then
> > we need to invalidate it, as we might have speculatively read it.
> > On a write, we don't care about its contents.  I will test it w/o
> > this chunk and report back.
> >
> 
> Regards,
> Bin


Thread overview: 12+ messages
2019-10-14 11:11 [U-Boot] [PATCH] nvme: add more cache flushes Patrick Wildt
2019-10-16 10:11 ` Bin Meng
2019-10-16 15:35   ` Patrick Wildt
2019-10-16 21:22     ` [U-Boot] [PATCH] nvme: flush dcache on both r/w, and the prp list Patrick Wildt
2019-10-17  9:28       ` Bin Meng
2019-11-01 13:31       ` Tom Rini
2019-10-17  2:55     ` [U-Boot] [PATCH] nvme: add more cache flushes Bin Meng
2019-10-17  6:44       ` Patrick Wildt [this message]
2019-10-17  6:58         ` Simon Goldschmidt
2019-10-17  7:08         ` Bin Meng
2019-10-17  7:12           ` Patrick Wildt
2019-10-17 15:46             ` J. William Campbell
