From mboxrd@z Thu Jan 1 00:00:00 1970
From: Tetsuya Mukawa
Subject: Re: [RFC PATCH] vhost: Add VHOST PMD
Date: Mon, 31 Aug 2015 15:29:12 +0900
Message-ID: <55E3F438.7050103@igel.co.jp>
References: <1440732101-18704-1-git-send-email-mukawa@igel.co.jp>
 <1440732101-18704-2-git-send-email-mukawa@igel.co.jp>
Mime-Version: 1.0
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: 7bit
Cc: "ann.zhuangyanying@huawei.com", "Liu, Yuanhan"
To: "Ouyang, Changchun", "dev@dpdk.org"
List-Id: patches and discussions about DPDK

On 2015/08/31 14:14, Ouyang, Changchun wrote:
>
> +struct pmd_internal {
> +	TAILQ_ENTRY(pmd_internal) next;
> +	char *dev_name;
> +	char *iface_name;
> +	unsigned nb_rx_queues;
> +	unsigned nb_tx_queues;
> +	rte_atomic16_t xfer;
> +
> +	struct vhost_queue rx_vhost_queues[RTE_PMD_RING_MAX_RX_RINGS];
> +	struct vhost_queue tx_vhost_queues[RTE_PMD_RING_MAX_TX_RINGS];
> +};
> Need to consider how vhost multiple queues is implemented here.
> You can refer to the patch set I sent before.

Hi Ouyang,

I appreciate your comments. I will refer to your patch and fix it.
>> +
>> +TAILQ_HEAD(pmd_internal_head, pmd_internal);
>> +static struct pmd_internal_head internals_list =
>> +	TAILQ_HEAD_INITIALIZER(internals_list);
>> +
>> +static pthread_mutex_t internal_list_lock = PTHREAD_MUTEX_INITIALIZER;
>> +
>> +static struct rte_eth_link pmd_link = {
>> +	.link_speed = 10000,
>> +	.link_duplex = ETH_LINK_FULL_DUPLEX,
>> +	.link_status = 0
>> +};
>> +
>> +static uint16_t
>> +eth_vhost_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
>> +{
>> +	struct vhost_queue *r = q;
>> +	uint16_t nb_rx;
>> +
>> +	if (unlikely(r->internal == NULL))
>> +		return 0;
>> +
>> +	if (unlikely(rte_atomic16_read(&r->internal->xfer) == 0))
>> +		return 0;
>> +
>> +	rte_atomic16_set(&r->rx_executing, 1);
>> +
>> +	if (unlikely(rte_atomic16_read(&r->internal->xfer) == 0))
>> +		goto out;
>> +
>> +	nb_rx = (uint16_t)rte_vhost_dequeue_burst(r->device,
>> +			VIRTIO_TXQ, r->mb_pool, bufs, nb_bufs);
> Logically correct here,
> but it would be better to have a clearer description of why the vhost RX
> function needs to use VIRTIO_TXQ.
> It increases readability. :-)

Sure, I will add comments here, and to the TX function as well.

> +static void *vhost_driver_session(void *param __rte_unused)
> +{
> +	static struct virtio_net_device_ops *vhost_ops = NULL;
> +
> +	vhost_ops = rte_zmalloc(NULL, sizeof(*vhost_ops), 0);
> +	if (vhost_ops == NULL)
> +		rte_panic("Can't allocate memory\n");
> +
> +	/* set vhost arguments */
> +	vhost_ops->new_device = new_device;
> +	vhost_ops->destroy_device = destroy_device;
> +	if (rte_vhost_driver_callback_register(vhost_ops) < 0)
> +		rte_panic("Can't register callbacks\n");
> +
> +	/* start event handling */
> +	rte_vhost_driver_session_start();
> It should be called after rte_vhost_driver_register,
> but rte_vhost_driver_session_start is called in dev_init.
> Error here?

In the case of vhost-cuse, it seems we should call register() before calling start(). Thanks for checking. I will fix it in the next patch.

Regards,
Tetsuya