netdev.vger.kernel.org archive mirror
From: Jason Wang <jasowang@redhat.com>
To: "Zhu, Lingshan" <lingshan.zhu@intel.com>,
	mst@redhat.com, alex.williamson@redhat.com
Cc: linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, kvm@vger.kernel.org,
	netdev@vger.kernel.org, dan.daly@intel.com,
	cunming.liang@intel.com, tiwei.bie@intel.com,
	jason.zeng@intel.com, zhiyuan.lv@intel.com
Subject: Re: [RFC 1/2] vhost: IFC VF hardware operation layer
Date: Mon, 21 Oct 2019 18:21:50 +0800
Message-ID: <0fe6eb76-85a7-a1cb-5b11-8edb01dd65c7@redhat.com>
In-Reply-To: <32d4c431-24f2-f9f0-8573-268abc7bb71c@intel.com>


On 2019/10/21 5:57 PM, Zhu, Lingshan wrote:
>
> On 10/16/2019 4:45 PM, Jason Wang wrote:
>>
>> On 2019/10/16 9:30 AM, Zhu Lingshan wrote:
>>> + */
>>> +#define IFCVF_TRANSPORT_F_START 28
>>> +#define IFCVF_TRANSPORT_F_END   34
>>> +
>>> +#define IFC_SUPPORTED_FEATURES \
>>> +        ((1ULL << VIRTIO_NET_F_MAC)            | \
>>> +         (1ULL << VIRTIO_F_ANY_LAYOUT)         | \
>>> +         (1ULL << VIRTIO_F_VERSION_1)          | \
>>> +         (1ULL << VHOST_F_LOG_ALL)             | \
>>
>>
>> Let's avoid using VHOST_F_LOG_ALL; use get_mdev_features()
>> instead.
> Thanks, I will remove VHOST_F_LOG_ALL
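
For instance, something along these lines would let the mask be reported
through the get-features path once VHOST_F_LOG_ALL is dropped from it
(illustration only: ifcvf_get_hw_features() and struct ifcvf_hw are
placeholder names, not a final interface):

#include <linux/types.h>

/* Sketch: report the supported features through a get-features style
 * callback instead of hard-coding VHOST_F_LOG_ALL into the static mask.
 * The helper and struct names below are hypothetical placeholders. */
static u64 ifcvf_get_dev_features(struct ifcvf_hw *hw)
{
        /* bits advertised by the VF, clamped to what the driver handles */
        return ifcvf_get_hw_features(hw) & IFC_SUPPORTED_FEATURES;
}
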
>>
>>
>>> +         (1ULL << VIRTIO_NET_F_GUEST_ANNOUNCE) | \
>>> +         (1ULL << VIRTIO_NET_F_CTRL_VQ)        | \
>>> +         (1ULL << VIRTIO_NET_F_STATUS)         | \
>>> +         (1ULL << VIRTIO_NET_F_MRG_RXBUF)) /* not fully supported */
>>
>>
>> Why not having VIRTIO_F_IOMMU_PLATFORM and VIRTIO_F_ORDER_PLATFORM?
>
> I will add VIRTIO_F_ORDER_PLATFORM. As for VIRTIO_F_IOMMU_PLATFORM, if we
> add this bit, QEMU may enable a vIOMMU, which can cause trouble in live
> migration (LM).


QEMU has mature vIOMMU support for VFIO devices: it can shadow the guest IO 
page tables and set up the mappings through the DMA ioctls of the VFIO 
container. Did you see any issue here?
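
To make the shadowing concrete, this is roughly the ioctl that ends up
being issued on the type1 container for each mapping the vIOMMU exposes
(a userspace sketch; container_fd, iova, vaddr and size stand in for
whatever the shadow page-table walk produced):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Install one shadowed vIOMMU translation into the host IOMMU through
 * the VFIO type1 container. */
static int shadow_map_one(int container_fd, uint64_t iova,
                          void *vaddr, uint64_t size)
{
        struct vfio_iommu_type1_dma_map map = {
                .argsz = sizeof(map),
                .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
                .vaddr = (uint64_t)(uintptr_t)vaddr,
                .iova  = iova,
                .size  = size,
        };

        return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
}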

Btw, to test this quickly, you can implement set_config/get_config and 
exercise the device through the virtio-mdev/kernel virtio drivers as well.
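
Something like the following would be enough to wire those two callbacks
up (a sketch only: struct ifcvf_hw and its net_config shadow are
assumptions about the driver internals, and the signatures just mirror
the idea of config-space accessors rather than a final interface):

#include <linux/string.h>
#include <linux/virtio_net.h>

/* hw->net_config is assumed to be a shadow copy of the device's
 * virtio-net config space that the driver keeps in sync. */
static void ifcvf_get_config(struct ifcvf_hw *hw, unsigned int offset,
                             void *buf, unsigned int len)
{
        if (offset + len <= sizeof(struct virtio_net_config))
                memcpy(buf, (u8 *)&hw->net_config + offset, len);
}

static void ifcvf_set_config(struct ifcvf_hw *hw, unsigned int offset,
                             const void *buf, unsigned int len)
{
        if (offset + len <= sizeof(struct virtio_net_config))
                memcpy((u8 *)&hw->net_config + offset, buf, len);
}
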

Thanks


> (though we don't support LM in this version of the driver)
>
> Thanks,
> BR
> Zhu Lingshan
>>
>> Thanks
>>



Thread overview: 35+ messages
2019-10-16  1:30 [RFC 0/2] Intel IFC VF driver for vdpa Zhu Lingshan
2019-10-16  1:30 ` [RFC 1/2] vhost: IFC VF hardware operation layer Zhu Lingshan
2019-10-16  8:40   ` Jason Wang
2019-10-21 10:00     ` Zhu, Lingshan
2019-10-21 10:35       ` Jason Wang
2019-10-16  8:45   ` Jason Wang
2019-10-21  9:57     ` Zhu, Lingshan
2019-10-21 10:21       ` Jason Wang [this message]
2019-10-16  1:30 ` [RFC 2/2] vhost: IFC VF vdpa layer Zhu Lingshan
2019-10-16 10:19   ` Jason Wang
2019-10-18  6:36     ` Zhu, Lingshan
2019-10-21  9:53     ` Zhu, Lingshan
2019-10-21 10:19       ` Jason Wang
2019-10-22  6:53         ` Zhu Lingshan
2019-10-22 13:05           ` Jason Wang
2019-10-23  6:19             ` Zhu, Lingshan
2019-10-23  6:39               ` Jason Wang
2019-10-23  9:24           ` Zhu, Lingshan
2019-10-23  9:58             ` Jason Wang
2019-10-16  1:36 ` [RFC 0/2] Intel IFC VF driver for vdpa Zhu Lingshan
2019-10-16  8:26   ` Jason Wang
2019-10-21  7:10     ` Zhu, Lingshan
  -- strict thread matches above, loose matches on Subject: below --
2019-10-16  1:10 Zhu Lingshan
2019-10-16  1:10 ` [RFC 1/2] vhost: IFC VF hardware operation layer Zhu Lingshan
2019-10-16  9:53   ` Simon Horman
2019-10-21  9:55     ` Zhu, Lingshan
2019-10-21 16:31       ` Simon Horman
2019-10-22  1:32         ` Jason Wang
2019-10-22  6:48           ` Zhu Lingshan
2019-10-23 10:13           ` Simon Horman
2019-10-23 10:36             ` Jason Wang
2019-10-23 17:11               ` Simon Horman
2019-10-16  1:03 [RFC 0/2] Intel IFC VF driver for vdpa Zhu Lingshan
2019-10-16  1:03 ` [RFC 1/2] vhost: IFC VF hardware operation layer Zhu Lingshan
2019-10-16  2:04   ` Stephen Hemminger
2019-10-16  2:06   ` Stephen Hemminger
2019-10-29  7:36     ` Zhu, Lingshan
