From mboxrd@z Thu Jan  1 00:00:00 1970
From: Yuanhan Liu
To: Maxime Coquelin
Cc: dev@dpdk.org, jfreiman@redhat.com, tiwei.bie@intel.com, mst@redhat.com,
	vkaplans@redhat.com, jasowang@redhat.com
Subject: Re: [PATCH 07/21] vhost: add iotlb helper functions
Date: Fri, 8 Sep 2017 17:21:21 +0800
Message-ID: <20170908092121.GH9736@yliu-home>
References: <20170831095023.21037-1-maxime.coquelin@redhat.com>
	<20170831095023.21037-8-maxime.coquelin@redhat.com>
	<20170908080855.GF9736@yliu-home>
	<8a9b486b-635a-ec70-76f9-de830ad21882@redhat.com>
	<20170908083633.GG9736@yliu-home>
	<3150540a-66ee-b686-7553-a4c988a2f18d@redhat.com>
In-Reply-To: <3150540a-66ee-b686-7553-a4c988a2f18d@redhat.com>
List-Id: DPDK patches and discussions

On Fri, Sep 08, 2017 at 10:50:49AM +0200, Maxime Coquelin wrote:
> >>>>+{
> >>>>+	struct vhost_iotlb_entry *node, *temp_node;
> >>>>+
> >>>>+	rte_rwlock_write_lock(&vq->iotlb_lock);
> >>>>+
> >>>>+	TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
> >>>>+		TAILQ_REMOVE(&vq->iotlb_list, node, next);
> >>>>+		rte_mempool_put(vq->iotlb_pool, node);
> >>>>+	}
> >>>>+
> >>>>+	rte_rwlock_write_unlock(&vq->iotlb_lock);
> >>>>+}
> >>>>+
> >>>>+void vhost_user_iotlb_cache_insert(struct vhost_virtqueue *vq, uint64_t iova,
> >>>>+				uint64_t uaddr, uint64_t size, uint8_t perm)
> >>>>+{
> >>>>+	struct vhost_iotlb_entry *node, *new_node;
> >>>>+	int ret;
> >>>>+
> >>>>+	ret = rte_mempool_get(vq->iotlb_pool, (void **)&new_node);
> >>>>+	if (ret) {
> >>>>+		RTE_LOG(ERR, VHOST_CONFIG, "IOTLB pool empty, invalidate cache\n");
> >>>
> >>>It's a cache, why not considering remove one to get space for new one?
> >>
> >>It would mean having to track every lookups not to remove hot entries,
> >>which would have an impact on performance.
> >
> >You were removing all caches, how can we do worse than that? Even a
> >random evict would be better. Or, more simply, just to remove the
> >head or the tail?
>
> I think removing head or tail could cause deadlocks.
> For example it needs to translate from 0x0 to 0x2000, with page size
> being 0x1000.
>
> If cache is full, when inserting 0x1000-0x1fff, it will remove
> 0x0-0xfff, so a miss will be sent for 0x0-0xffff and 0x1000-0x1fff will
> be remove at insert time, etc...

Okay, that means we can't simply remove the head or tail.

> Note that we really need to size the cache large enough for performance
> purpose, so having the cache full could be cause by either buggy or
> malicious guest.

I agree. But in the malicious guest case, it could lead to a DoS attack:
the guest keeps making vhost run out of cache entries, and vhost then
keeps printing the above error message.

What I suggested was to evict one entry (by some policy) to make room
for the new one we want to insert; a rough sketch is appended at the end
of this mail. Note that it shouldn't be a performance issue, IMO, since
we only do that when we run out of cache entries, which, as you said,
should not happen in normal cases.

	--yliu

> >>Moreover, the idea is to have the cache large enough, else you could
> >>face packet drops due to random cache misses.
> >>
> >>We might consider to improve it, but I consider it an optimization that
> >>could be implemented later if needed.
> >>
> >>>>+		vhost_user_iotlb_cache_remove_all(vq);
> >>>>+		ret = rte_mempool_get(vq->iotlb_pool, (void **)&new_node);
> >>>>+		if (ret) {
> >>>>+			RTE_LOG(ERR, VHOST_CONFIG, "IOTLB pool still empty, failure\n");
> >>>>+			return;
> >>>>+		}
> >>>>+	}
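
For illustration, below is a rough and untested sketch of what "evict one
entry instead of flushing everything" could look like. The helper name
vhost_user_iotlb_cache_random_evict(), the use of rte_rand() as the
victim-selection policy and rte_mempool_in_use_count() to bound the victim
index are only my assumptions to show the idea, not a concrete proposal
(it would also need rte_random.h):

/* Hypothetical helper, not part of this patch: free a single,
 * pseudo-randomly chosen cache entry so that rte_mempool_get() can
 * succeed, instead of flushing the whole cache with
 * vhost_user_iotlb_cache_remove_all().
 */
static void
vhost_user_iotlb_cache_random_evict(struct vhost_virtqueue *vq)
{
	struct vhost_iotlb_entry *node, *temp_node;
	unsigned int victim, i = 0;

	rte_rwlock_write_lock(&vq->iotlb_lock);

	/* A random victim avoids the deterministic head/tail eviction loop
	 * you described; any smarter policy could be plugged in here.
	 * The pool is exhausted when we get here, so the in-use count is
	 * non-zero.
	 */
	victim = rte_rand() % rte_mempool_in_use_count(vq->iotlb_pool);

	TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
		/* Evict the victim, or the last entry if we run off the end. */
		if (i++ == victim || temp_node == NULL) {
			TAILQ_REMOVE(&vq->iotlb_list, node, next);
			rte_mempool_put(vq->iotlb_pool, node);
			break;
		}
	}

	rte_rwlock_write_unlock(&vq->iotlb_lock);
}

The error path in vhost_user_iotlb_cache_insert() would then evict and
retry, and only give up if the retry still fails:

	ret = rte_mempool_get(vq->iotlb_pool, (void **)&new_node);
	if (ret) {
		RTE_LOG(DEBUG, VHOST_CONFIG, "IOTLB pool empty, evict one entry\n");
		vhost_user_iotlb_cache_random_evict(vq);
		ret = rte_mempool_get(vq->iotlb_pool, (void **)&new_node);
		if (ret) {
			RTE_LOG(ERR, VHOST_CONFIG, "IOTLB pool still empty, failure\n");
			return;
		}
	}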