From: David Woodhouse <dwmw2@infradead.org>
To: Alex Williamson <alex.williamson@redhat.com>
Cc: rajesh.sankaran@intel.com, iommu@lists.linux-foundation.org,
	linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org,
	chrisw@sous-sol.org, ddutile@redhat.com
Subject: Re: [PATCH] intel-iommu: Manage iommu_coherency globally
Date: Fri, 18 Nov 2011 18:15:17 +0000	[thread overview]
Message-ID: <1321640117.15493.63.camel@shinybook.infradead.org> (raw)
In-Reply-To: <1321636512.26410.132.camel@bling.home>


On Fri, 2011-11-18 at 10:15 -0700, Alex Williamson wrote:
> A bit heavy handed, but obviously easier.  It feels like we could safely
> be more strategic, but maybe we'd end up trashing the cache anyway in a
> drawn out attempt to flush the context and all page tables.  However, do
> we actually need a wbinvd_on_all_cpus()?  Probably better to trash one
> cache than all of them.  Thanks, 

Yes, it would have to be wbinvd_on_all_cpus(); the dirty lines could be
sitting in any CPU's cache, and wbinvd only writes back the local one.
The page-table-walk alternative would look something like this...

static void flush_domain_ptes(struct dmar_domain *domain)
{
	int level = agaw_to_level(domain->agaw);
	/* An explicit stack of "resume here" pointers, one per level. */
	struct dma_pte *ptelvl[level + 2], *pte;

	pte = domain->pgd;
	clflush_cache_range(pte, VTD_PAGE_SIZE);

	/* Keep the end condition nice and simple... */
	ptelvl[level + 1] = NULL;

	while (pte) {
		/* BUG_ON(level < 2); */
		/* Large pages map data directly, so there's no page
		   table beneath them to flush. */
		if (dma_pte_present(pte) &&
		    !(pte->val & DMA_PTE_LARGE_PAGE)) {
			if (level == 2) {
				/* Flush the bottom-level page table, but no
				   need to process its *contents* any
				   further. */
				clflush_cache_range(phys_to_virt(dma_pte_addr(pte)),
						    VTD_PAGE_SIZE);
			} else {
				/* Remember the next PTE at this level; we'll
				   come back to it. */
				ptelvl[level] = pte + 1;

				/* Go down a level and process PTEs there. */
				pte = phys_to_virt(dma_pte_addr(pte));
				clflush_cache_range(pte, VTD_PAGE_SIZE);
				level--;
				continue;
			}
		}

		pte++;
		/* A page-aligned pte means we've walked off the end of this
		   page table, so pop back up to the level we came from.  At
		   the top level, ptelvl[level + 1] is NULL and we're done. */
		while (pte && !((unsigned long)pte & ~VTD_PAGE_MASK)) {
			level++;
			pte = ptelvl[level];
		}
	}
}
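
For illustration, a minimal sketch of where such a flush might be
triggered. The helper name below is hypothetical; the real hook would
sit wherever domain->iommu_coherency gets recalculated, i.e. somewhere
like domain_update_iommu_coherency():

/* Hypothetical example, not from the patch under discussion: force
   the CPU's earlier page-table writes out to memory at the moment
   the domain first loses cache coherency. */
static void domain_lost_coherency(struct dmar_domain *domain)
{
	domain->iommu_coherency = 0;

	/* Targeted: flush just this domain's page-table pages. */
	flush_domain_ptes(domain);

	/* Heavy-handed alternative: write back and invalidate every
	   cache line on every CPU.  Simpler, but trashes all caches:

		wbinvd_on_all_cpus();
	*/
}

Either way it only needs to happen once per transition; after that,
the existing per-update flushes (domain_flush_cache()) keep the
tables in sync.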

-- 
dwmw2
