From: Shahaf Shuler
Subject: [PATCH v3 2/6] vfio: don't fail to DMA map if memory is already mapped
Date: Tue, 5 Mar 2019 15:59:42 +0200
Message-ID: <1e8400f68a2fb1ceb07127c72f0874bb881e5d80.1551793527.git.shahafs@mellanox.com>
To: anatoly.burakov@intel.com, yskoh@mellanox.com, thomas@monjalon.net, ferruh.yigit@intel.com, nhorman@tuxdriver.com, gaetan.rivet@6wind.com
Cc: dev@dpdk.org

Currently the vfio DMA map function fails when the same memory segment
is mapped twice. This is too strict: mapping the same memory twice is
not an error. Instead, use the kernel return value to detect this state
and have the DMA map function return success.

For type1 mappings the kernel driver returns EEXIST. For spapr mappings
EBUSY is returned since kernel 4.10.

Signed-off-by: Shahaf Shuler
Acked-by: Anatoly Burakov
---
 lib/librte_eal/linuxapp/eal/eal_vfio.c | 32 ++++++++++++++++++++++++++++----
 1 file changed, 28 insertions(+), 4 deletions(-)

diff --git a/lib/librte_eal/linuxapp/eal/eal_vfio.c b/lib/librte_eal/linuxapp/eal/eal_vfio.c
index 9adbda8bb7..d0a0f9c16f 100644
--- a/lib/librte_eal/linuxapp/eal/eal_vfio.c
+++ b/lib/librte_eal/linuxapp/eal/eal_vfio.c
@@ -1264,9 +1264,21 @@ vfio_type1_dma_mem_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova,
 
 		ret = ioctl(vfio_container_fd, VFIO_IOMMU_MAP_DMA, &dma_map);
 		if (ret) {
-			RTE_LOG(ERR, EAL, "  cannot set up DMA remapping, error %i (%s)\n",
-				errno, strerror(errno));
+			/**
+			 * In case the mapping was already done EEXIST will be
+			 * returned from kernel.
+			 */
+			if (errno == EEXIST) {
+				RTE_LOG(DEBUG, EAL,
+					" Memory segment is already mapped,"
+					" skipping\n");
+			} else {
+				RTE_LOG(ERR, EAL,
+					"  cannot set up DMA remapping,"
+					" error %i (%s)\n",
+					errno, strerror(errno));
 			return -1;
+			}
 		}
 	} else {
 		memset(&dma_unmap, 0, sizeof(dma_unmap));
@@ -1325,9 +1337,21 @@ vfio_spapr_dma_do_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova,
 
 		ret = ioctl(vfio_container_fd, VFIO_IOMMU_MAP_DMA, &dma_map);
 		if (ret) {
-			RTE_LOG(ERR, EAL, "  cannot set up DMA remapping, error %i (%s)\n",
-				errno, strerror(errno));
+			/**
+			 * In case the mapping was already done EBUSY will be
+			 * returned from kernel.
+			 */
+			if (errno == EBUSY) {
+				RTE_LOG(DEBUG, EAL,
+					" Memory segment is already mapped,"
+					" skipping\n");
+			} else {
+				RTE_LOG(ERR, EAL,
+					"  cannot set up DMA remapping,"
+					" error %i (%s)\n", errno,
+					strerror(errno));
 			return -1;
+			}
 		}
 	} else {
-- 
2.12.0
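
For reference, the behaviour introduced above can be illustrated outside of
EAL with a minimal standalone sketch (not part of this patch): it assumes a
type1 VFIO container fd already opened and configured by the caller, and the
helper name dma_map_idempotent is hypothetical.

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/*
 * Map [vaddr, vaddr + len) to iova in the given VFIO container and
 * treat an already-existing identical mapping as success.
 */
static int
dma_map_idempotent(int container_fd, uint64_t vaddr, uint64_t iova,
		   uint64_t len)
{
	struct vfio_iommu_type1_dma_map dma_map;

	memset(&dma_map, 0, sizeof(dma_map));
	dma_map.argsz = sizeof(dma_map);
	dma_map.vaddr = vaddr;
	dma_map.iova = iova;
	dma_map.size = len;
	dma_map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;

	if (ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &dma_map) == 0)
		return 0;

	/*
	 * A duplicate mapping is reported as EEXIST by the type1 driver
	 * and as EBUSY by spapr (kernel >= 4.10); in both cases the
	 * segment is already usable, so do not treat it as a failure.
	 */
	if (errno == EEXIST || errno == EBUSY)
		return 0;

	fprintf(stderr, "cannot set up DMA remapping, error %i (%s)\n",
		errno, strerror(errno));
	return -1;
}

This mirrors the check added in both vfio_type1_dma_mem_map() and
vfio_spapr_dma_do_map(): the errno value, not the ioctl return code alone,
decides whether the "already mapped" case is downgraded to a debug message.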