From: Simon Horman <simon.horman@netronome.com>
To: Jason Wang <jasowang@redhat.com>
Cc: "Zhu, Lingshan" <lingshan.zhu@intel.com>,
mst@redhat.com, alex.williamson@redhat.com,
linux-kernel@vger.kernel.org,
virtualization@lists.linux-foundation.org, kvm@vger.kernel.org,
netdev@vger.kernel.org, dan.daly@intel.com,
cunming.liang@intel.com, tiwei.bie@intel.com,
jason.zeng@intel.com, zhiyuan.lv@intel.com
Subject: Re: [RFC 1/2] vhost: IFC VF hardware operation layer
Date: Wed, 23 Oct 2019 19:11:16 +0200 [thread overview]
Message-ID: <20191023171115.GA28355@netronome.com> (raw)
In-Reply-To: <83356b5f-e2f4-ab79-79d7-20d4850c26a9@redhat.com>
On Wed, Oct 23, 2019 at 06:36:13PM +0800, Jason Wang wrote:
>
> On 2019/10/23 下午6:13, Simon Horman wrote:
> > On Tue, Oct 22, 2019 at 09:32:36AM +0800, Jason Wang wrote:
> > > On 2019/10/22 上午12:31, Simon Horman wrote:
> > > > On Mon, Oct 21, 2019 at 05:55:33PM +0800, Zhu, Lingshan wrote:
> > > > > On 10/16/2019 5:53 PM, Simon Horman wrote:
> > > > > > Hi Zhu,
> > > > > >
> > > > > > thanks for your patch.
> > > > > >
> > > > > > On Wed, Oct 16, 2019 at 09:10:40AM +0800, Zhu Lingshan wrote:
> > > > ...
> > > >
> > > > > > > +static void ifcvf_read_dev_config(struct ifcvf_hw *hw, u64 offset,
> > > > > > > + void *dst, int length)
> > > > > > > +{
> > > > > > > + int i;
> > > > > > > + u8 *p;
> > > > > > > + u8 old_gen, new_gen;
> > > > > > > +
> > > > > > > + do {
> > > > > > > + old_gen = ioread8(&hw->common_cfg->config_generation);
> > > > > > > +
> > > > > > > + p = dst;
> > > > > > > + for (i = 0; i < length; i++)
> > > > > > > + *p++ = ioread8((u8 *)hw->dev_cfg + offset + i);
> > > > > > > +
> > > > > > > + new_gen = ioread8(&hw->common_cfg->config_generation);
> > > > > > > + } while (old_gen != new_gen);
> > > > > > Would it be wise to limit the number of iterations of the loop above?
> > > > > Thanks but I don't quite get it. This is used to make sure the function
> > > > > would get the latest config.
> > > > I am worried about the possibility that it will loop forever.
> > > > Could that happen?
> > > >
> > > > ...
> > > My understanding is that the function here is similar to virtio config
> > > generation [1]. So this can only happen with buggy hardware.
> > Ok, so this circles back to my original question.
> > Should we put a bound on the number of times the loop runs
> > or should we accept that the kernel locks up if the HW is buggy?
> >
>
> I'm not sure, and similar logic has been used by virtio-pci drivers for
> years. Considering this logic is pretty simple and should not be the only
> place that virtio hardware can lock up the kernel, we can keep it as is.
Ok, I accept that there isn't much use fixing this if it's idiomatic and
there are other places virtio hardware can lock up the kernel.
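For what it's worth, the bound discussed above could look something like the
following userspace sketch. The mock_* helpers and IFCVF_CFG_READ_RETRIES
are hypothetical stand-ins (on real hardware these would be ioread8()
accesses and a driver-chosen constant); this is only an illustration of
bounding the generation loop, not the actual driver code:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical mocks of the device config space so the loop logic can
 * run in userspace; the real code uses ioread8() on MMIO registers. */
static uint8_t mock_generation;
static uint8_t mock_dev_cfg[16];

static uint8_t mock_read_generation(void) { return mock_generation; }
static uint8_t mock_read_cfg(size_t off) { return mock_dev_cfg[off]; }

#define IFCVF_CFG_READ_RETRIES 3  /* hypothetical retry bound */

/* Returns 0 on a generation-stable read, -1 if the generation never
 * settled within the bound (i.e. suspected buggy hardware). */
static int read_dev_config_bounded(size_t offset, void *dst, int length)
{
	uint8_t old_gen, new_gen, *p;
	int i, tries = 0;

	do {
		if (tries++ >= IFCVF_CFG_READ_RETRIES)
			return -1;
		old_gen = mock_read_generation();
		p = dst;
		for (i = 0; i < length; i++)
			*p++ = mock_read_cfg(offset + i);
		new_gen = mock_read_generation();
	} while (old_gen != new_gen);

	return 0;
}
```

With a stable generation the loop exits on the first pass; a generation
that never settles makes the function fail instead of spinning forever.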
> Actually, there's no need for hardware to implement generation logic; it
> could be emulated by software or even ignored. In the new version of
> virtio-mdev, get_generation() is optional; when it is not implemented, 0 is
> simply returned by the virtio-mdev transport.
>
> Thanks
>
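The optional get_generation() described above can be sketched roughly as
follows. The struct cfg_ops and helper names here are hypothetical, not
the actual virtio-mdev API; the point is only the fallback to 0 when the
backend leaves the callback unimplemented:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical ops table: the generation callback may be left NULL. */
struct cfg_ops {
	uint32_t (*get_generation)(void);
};

/* Transport-level helper: report generation 0 when the backend does
 * not implement generation tracking, mirroring the virtio-mdev idea. */
static uint32_t cfg_get_generation(const struct cfg_ops *ops)
{
	if (ops && ops->get_generation)
		return ops->get_generation();
	return 0;
}

/* Example backend that does implement the callback. */
static uint32_t fixed_gen(void) { return 42; }
```

A constant generation of 0 is safe because readers only compare the value
before and after a config read; a value that never changes simply means
every read appears consistent.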
Thread overview: 22+ messages
2019-10-16 1:10 [RFC 0/2] Intel IFC VF driver for vdpa Zhu Lingshan
2019-10-16 1:10 ` [RFC 1/2] vhost: IFC VF hardware operation layer Zhu Lingshan
2019-10-16 9:53 ` Simon Horman
2019-10-21 9:55 ` Zhu, Lingshan
2019-10-21 16:31 ` Simon Horman
2019-10-22 1:32 ` Jason Wang
2019-10-22 6:48 ` Zhu Lingshan
2019-10-23 10:13 ` Simon Horman
2019-10-23 10:36 ` Jason Wang
2019-10-23 17:11 ` Simon Horman [this message]
2019-10-16 1:10 ` [RFC 2/2] vhost: IFC VF vdpa layer Zhu Lingshan
-- strict thread matches above, loose matches on Subject: below --
2019-10-16 1:30 [RFC 0/2] Intel IFC VF driver for vdpa Zhu Lingshan
2019-10-16 1:30 ` [RFC 1/2] vhost: IFC VF hardware operation layer Zhu Lingshan
2019-10-16 8:40 ` Jason Wang
2019-10-21 10:00 ` Zhu, Lingshan
2019-10-21 10:35 ` Jason Wang
2019-10-16 8:45 ` Jason Wang
2019-10-21 9:57 ` Zhu, Lingshan
2019-10-21 10:21 ` Jason Wang
2019-10-16 1:03 [RFC 0/2] Intel IFC VF driver for vdpa Zhu Lingshan
2019-10-16 1:03 ` [RFC 1/2] vhost: IFC VF hardware operation layer Zhu Lingshan
2019-10-16 2:04 ` Stephen Hemminger
2019-10-16 2:06 ` Stephen Hemminger
2019-10-29 7:36 ` Zhu, Lingshan