xen-devel.lists.xenproject.org archive mirror
From: David Vrabel <david.vrabel@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel@lists.xensource.com
Cc: konrad.wilk@oracle.com, Ian.Campbell@citrix.com,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH 2/2] xen/arm: introduce XENMEM_cache_flush
Date: Thu, 2 Oct 2014 11:41:06 +0100	[thread overview]
Message-ID: <542D2BC2.2040505@citrix.com> (raw)
In-Reply-To: <1412244417-12251-2-git-send-email-stefano.stabellini@eu.citrix.com>

On 02/10/14 11:06, Stefano Stabellini wrote:
> Introduce support for new hypercall XENMEM_cache_flush.
> Use it to perform cache flushing on pages used for DMA when necessary.
[...]
> --- a/arch/arm/xen/mm32.c
> +++ b/arch/arm/xen/mm32.c
[...]
> @@ -24,7 +27,21 @@ static void dma_cache_maint(dma_addr_t handle, unsigned long offset,
>  	
>  		if (!pfn_valid(pfn))
>  		{
> -			/* TODO: cache flush */
> +			struct xen_cache_flush cflush;
> +
> +			cflush.op = 0;
> +			cflush.addr = handle + offset;
> +			cflush.size = size;
> +
> +			if (op == dmac_unmap_area && dir != DMA_TO_DEVICE)
> +				cflush.op = XENMEM_CACHE_INVAL;
> +			if (op == dmac_map_area) {
> +				cflush.op = XENMEM_CACHE_CLEAN;
> +				if (dir == DMA_FROM_DEVICE)
> +					cflush.op |= XENMEM_CACHE_INVAL;
> +			}
> +			if (cflush.op)
> +				HYPERVISOR_memory_op(XENMEM_cache_flush, &cflush);
>  		} else {
>  			struct page *page = pfn_to_page(pfn);
>  
[...]
> --- a/include/xen/interface/memory.h
> +++ b/include/xen/interface/memory.h
> @@ -263,4 +263,20 @@ struct xen_remove_from_physmap {
>  };
>  DEFINE_GUEST_HANDLE_STRUCT(xen_remove_from_physmap);
>  
> +/*
> + * Issue one or more cache maintenance operations on a memory range
> + * owned by the calling domain or granted to the calling domain by a
> + * foreign domain.
> + */
> +#define XENMEM_cache_flush                 27
> +struct xen_cache_flush {
> +/* addr is the machine address at the start of the memory range */

You say machine address here but call it with a bus address.  With no
IOMMU these are equivalent, but which is correct if an IOMMU is used?

David

> +    uint64_t addr;
> +    uint64_t size;
> +#define XENMEM_CACHE_CLEAN      (1<<0)
> +#define XENMEM_CACHE_INVAL      (1<<1)
> +    uint32_t op;
> +};
> +DEFINE_GUEST_HANDLE_STRUCT(xen_cache_flush);
> +
>  #endif /* __XEN_PUBLIC_MEMORY_H__ */
> 
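Since this struct is a guest-visible ABI, its layout is worth pinning down. A guest-side sketch of the proposed layout, assuming natural alignment as on a typical LP64/AAPCS64 target (offsets and total size below are what such a compiler would produce, including tail padding for 8-byte alignment):

```c
#include <stdint.h>
#include <stddef.h>

/* Guest-side mirror of the proposed xen_cache_flush ABI struct. */
struct xen_cache_flush {
    uint64_t addr;   /* start of the range (machine/bus address, see above) */
    uint64_t size;   /* length of the range in bytes */
    uint32_t op;     /* XENMEM_CACHE_CLEAN and/or XENMEM_CACHE_INVAL */
};
```

With natural alignment this gives addr at offset 0, size at 8, op at 16, and a total size of 24 bytes (4 bytes of tail padding), which both sides of the hypercall would need to agree on.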

Thread overview: 7+ messages
2014-10-02 10:06 [PATCH 0/2] introduce XENMEM_cache_flush Stefano Stabellini
2014-10-02 10:06 ` [PATCH 1/2] xen/arm: remove handling of XENFEAT_grant_map_identity Stefano Stabellini
2014-10-02 10:36   ` [Xen-devel] " David Vrabel
2014-10-02 11:31     ` Stefano Stabellini
2014-10-02 10:06 ` [PATCH 2/2] xen/arm: introduce XENMEM_cache_flush Stefano Stabellini
2014-10-02 10:41   ` David Vrabel [this message]
2014-10-02 11:32     ` Stefano Stabellini
