From: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
To: "Ilpo Järvinen" <ilpo.jarvinen@linux.intel.com>
Cc: LKML <linux-kernel@vger.kernel.org>,
platform-driver-x86@vger.kernel.org,
Mika Westerberg <mika.westerberg@linux.intel.com>,
Hans de Goede <hdegoede@redhat.com>,
Ferry Toth <fntoth@gmail.com>
Subject: Re: [PATCH v2 1/3] platform/x86: intel_scu_ipc: Replace workaround by 32-bit IO
Date: Mon, 21 Oct 2024 13:02:44 +0300
Message-ID: <ZxYmxOPLOGol22gz@smile.fi.intel.com>
In-Reply-To: <ZxYkyC00pDzarnVU@smile.fi.intel.com>
On Mon, Oct 21, 2024 at 12:54:16PM +0300, Andy Shevchenko wrote:
> On Mon, Oct 21, 2024 at 12:49:08PM +0300, Ilpo Järvinen wrote:
> > On Mon, 21 Oct 2024, Andy Shevchenko wrote:
> > > On Mon, Oct 21, 2024 at 12:24:57PM +0300, Ilpo Järvinen wrote:
> > > > On Mon, 21 Oct 2024, Andy Shevchenko wrote:
...
> > > > > + for (nc = 0, offset = 0; nc < 4; nc++, offset += 4)
> > > > > + wbuf[nc] = ipc_data_readl(scu, offset);
> > > > > + memcpy(data, wbuf, count);
> > > >
> > > > So do we actually need to read more than
> > > > DIV_ROUND_UP(min(count, 16U), sizeof(u32))? Because that's the approach
> > > > used in intel_scu_ipc_dev_command_with_size(), which you referred to.
> > >
> > > I'm not sure I follow. We do IO for the whole (16-byte) buffer, but return
> > > only the asked-for _bytes_ to the user.
> >
> > So always reading 16 bytes is not part of the old workaround? Because it
> > has a "let's read enough" feel.
>
> Ah, now I get it! Yes, we may reduce the reads to just the needed ones.
> The idea is that we always have to perform 32-bit reads regardless of
> the amount of data we want.
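Something like this, perhaps (just a sketch against the patch context above,
i.e. scu, data, count, and wbuf; untested, and the min() argument types would
need a check):

	unsigned int needed = DIV_ROUND_UP(min(count, 16U), sizeof(u32));
	unsigned int nc, offset;

	/* Perform only as many 32-bit reads as needed to cover count bytes */
	for (nc = 0, offset = 0; nc < needed; nc++, offset += 4)
		wbuf[nc] = ipc_data_readl(scu, offset);
	memcpy(data, wbuf, count);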
Oh, looking at the code (*), it seems the original really messes up bytes
vs. 32-bit words! Since the above has been tested, let me put this on the
TODO list to clarify this mess and run another round of testing.
Does that sound good to you?
*) The mythical comment about a max of 5 items for the 20-byte buffer
(5 * sizeof(u32) == 20 bytes) is worrying, and now I know why.
--
With Best Regards,
Andy Shevchenko