From mboxrd@z Thu Jan 1 00:00:00 1970
Cc: baolu.lu@linux.intel.com, linux-kernel@vger.kernel.org, Nadav Amit,
 David Woodhouse, Joerg Roedel, Will Deacon, stable@vger.kernel.org
Subject: Re: [PATCH] iommu/vt-d: do not use flush-queue when caching-mode is on
To: Nadav
Amit, iommu@lists.linux-foundation.org
References: <20210126203856.1544088-1-namit@vmware.com>
From: Lu Baolu
Message-ID: <72cab17b-7b2f-1e4d-3bd5-3041b7edc724@linux.intel.com>
Date: Wed, 27 Jan 2021 08:29:37 +0800
List-ID: X-Mailing-List: stable@vger.kernel.org

On 1/27/21 8:26 AM, Lu Baolu wrote:
>> +{
>> +    struct dmar_domain *dmar_domain = to_dmar_domain(domain);
>> +    struct intel_iommu *iommu = domain_get_iommu(dmar_domain);
>> +
>> +    if (intel_iommu_strict)
>> +        return 0;
>> +
>> +    /*
>> +     * The flush queue implementation does not perform page-selective
>> +     * invalidations that are required for efficient TLB flushes in virtual
>> +     * environments. The benefit of batching is likely to be much lower than
>> +     * the overhead of synchronizing the virtual and physical IOMMU
>> +     * page-tables.
>> +     */
>> +    if (iommu && cap_caching_mode(iommu->cap)) {
>> +        pr_warn_once("IOMMU batching is partially disabled due to virtualization");
>> +        return 0;
>> +    }
>
> domain_get_iommu() only returns the first iommu, and could return NULL
> when this is called before the domain is attached to any device. A
> better choice would be to check caching mode globally and return false
> if caching mode is enabled on any iommu:
>
>         struct dmar_drhd_unit *drhd;
>         struct intel_iommu *iommu;
>
>         rcu_read_lock();
>         for_each_active_iommu(iommu, drhd) {
>                 if (cap_caching_mode(iommu->cap))
>                         return false;

We should unlock rcu before returning here. Sorry!

>         }
>         rcu_read_unlock();

Best regards,
baolu
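Putting the suggestion and the follow-up correction together, the global check could be shaped as below. This is only a sketch of the reviewed idea, not the final patch: the function name, return type, and the exact warning text are assumed here, and the RCU read lock is released on both the early-return path and the normal path, as noted above.

```c
/*
 * Sketch: decide whether the flush queue may be used for this IOMMU
 * setup. Caching mode (set by virtual IOMMUs) requires page-selective
 * invalidation, which the flush queue does not provide, so batching is
 * disallowed if any active IOMMU advertises it.
 */
static bool domain_use_flush_queue(void)
{
	struct dmar_drhd_unit *drhd;
	struct intel_iommu *iommu;
	bool ret = true;

	if (intel_iommu_strict)
		return false;

	/*
	 * Walk all active IOMMUs under the RCU read lock, and make sure
	 * the lock is dropped before every return path.
	 */
	rcu_read_lock();
	for_each_active_iommu(iommu, drhd) {
		if (cap_caching_mode(iommu->cap)) {
			pr_warn_once("IOMMU batching is disabled due to virtualization");
			ret = false;
			break;
		}
	}
	rcu_read_unlock();

	return ret;
}
```

Breaking out of the loop and funneling both outcomes through a single `rcu_read_unlock()` avoids the unbalanced-lock bug that the early `return false` inside the loop would have introduced.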