From: Cornelia Huck <cohuck@redhat.com>
To: Michael Mueller <mimu@linux.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>,
	Linux-S390 Mailing List <linux-s390@vger.kernel.org>,
	Thomas Huth <thuth@redhat.com>,
	Claudio Imbrenda <imbrenda@linux.ibm.com>,
	KVM Mailing List <kvm@vger.kernel.org>,
	Sebastian Ott <sebott@linux.ibm.com>,
	"Michael S . Tsirkin" <mst@redhat.com>,
	Pierre Morel <pmorel@linux.ibm.com>,
	Farhan Ali <alifm@linux.ibm.com>,
	Heiko Carstens <heiko.carstens@de.ibm.com>,
	Eric Farman <farman@linux.ibm.com>,
	virtualization@lists.linux-foundation.org,
	Halil Pasic <pasic@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Christoph Hellwig <hch@infradead.org>,
	Viktor Mihajlovski <mihajlov@linux.ibm.com>,
	Janosch Frank <frankja@linux.ibm.com>
Subject: Re: [PATCH v2 2/8] s390/cio: introduce DMA pools to cio
Date: Mon, 27 May 2019 08:57:18 +0200
Message-ID: <20190527085718.10494ee2.cohuck@redhat.com>
In-Reply-To: <20190523162209.9543-3-mimu@linux.ibm.com>

On Thu, 23 May 2019 18:22:03 +0200
Michael Mueller <mimu@linux.ibm.com> wrote:

> From: Halil Pasic <pasic@linux.ibm.com>
> 
> To support protected virtualization cio will need to make sure the
> memory used for communication with the hypervisor is DMA memory.
> 
> Let us introduce one global cio, and some tools for pools seated

"one global pool for cio"?

> at individual devices.
> 
> Our DMA pools are implemented as a gen_pool backed with DMA pages. The
> idea is to avoid each allocation effectively wasting a page, as we
> typically allocate much less than PAGE_SIZE.
> 
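(For readers not familiar with genalloc: the pattern boils down to the
following -- an untested sketch, with `dev' standing in for whichever
device owns the DMA mapping, and error handling omitted. One DMA page
feeds many sub-page allocations, instead of burning a full page each
time:)

	struct gen_pool *gp = gen_pool_create(3, -1);	/* 8-byte granularity */
	dma_addr_t dma;
	void *page, *cb;

	page = dma_alloc_coherent(dev, PAGE_SIZE, &dma, GFP_KERNEL);
	gen_pool_add_virt(gp, (unsigned long) page, dma, PAGE_SIZE, -1);
	cb = (void *) gen_pool_alloc(gp, 32);	/* 32 bytes, not a whole page */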
> Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
> ---
>  arch/s390/Kconfig           |   1 +
>  arch/s390/include/asm/cio.h |  11 +++++
>  drivers/s390/cio/css.c      | 110 ++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 122 insertions(+)
> 

(...)

> @@ -1018,6 +1024,109 @@ static struct notifier_block css_power_notifier = {
>  	.notifier_call = css_power_event,
>  };
>  
> +#define POOL_INIT_PAGES 1
> +static struct gen_pool *cio_dma_pool;
> +/* Currently cio supports only a single css */

This comment looks misplaced; it seems to belong with
cio_get_dma_css_dev() below, which hardcodes channel_subsystems[0].

> +#define  CIO_DMA_GFP (GFP_KERNEL | __GFP_ZERO)
> +
> +
> +struct device *cio_get_dma_css_dev(void)
> +{
> +	return &channel_subsystems[0]->device;
> +}
> +
> +struct gen_pool *cio_gp_dma_create(struct device *dma_dev, int nr_pages)
> +{
> +	struct gen_pool *gp_dma;
> +	void *cpu_addr;
> +	dma_addr_t dma_addr;
> +	int i;
> +
> +	gp_dma = gen_pool_create(3, -1);
> +	if (!gp_dma)
> +		return NULL;
> +	for (i = 0; i < nr_pages; ++i) {
> +		cpu_addr = dma_alloc_coherent(dma_dev, PAGE_SIZE, &dma_addr,
> +					      CIO_DMA_GFP);
> +		if (!cpu_addr)
> +			return gp_dma;

So, you may return here with no memory added to the pool at all (or
with less than requested), but for the caller that is indistinguishable
from an allocation that went all right. Might that be a problem?
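
Something like this (untested, just to illustrate what I mean) would at
least make the complete-failure case visible to the caller:

	for (i = 0; i < nr_pages; ++i) {
		cpu_addr = dma_alloc_coherent(dma_dev, PAGE_SIZE, &dma_addr,
					      CIO_DMA_GFP);
		if (!cpu_addr)
			break;
		gen_pool_add_virt(gp_dma, (unsigned long) cpu_addr,
				  dma_addr, PAGE_SIZE, -1);
	}
	if (!gen_pool_size(gp_dma)) {	/* nothing was added at all */
		gen_pool_destroy(gp_dma);
		return NULL;
	}
	return gp_dma;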

> +		gen_pool_add_virt(gp_dma, (unsigned long) cpu_addr,
> +				  dma_addr, PAGE_SIZE, -1);
> +	}
> +	return gp_dma;
> +}
> +

(...)

> +static void __init cio_dma_pool_init(void)
> +{
> +	/* No need to free up the resources: compiled in */
> +	cio_dma_pool = cio_gp_dma_create(cio_get_dma_css_dev(), 1);

Does it make sense to continue if you did not get a pool here? I don't
think that should happen unless things were really bad already, but
still.
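
If not, it could just fail loudly -- untested, and whether panic() is
the right reaction here is debatable, but something along the lines of:

	cio_dma_pool = cio_gp_dma_create(cio_get_dma_css_dev(), 1);
	if (!cio_dma_pool)
		panic("Could not set up the cio DMA pool\n");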

> +}
> +
> +void *cio_gp_dma_zalloc(struct gen_pool *gp_dma, struct device *dma_dev,
> +			size_t size)
> +{
> +	dma_addr_t dma_addr;
> +	unsigned long addr;
> +	size_t chunk_size;
> +
> +	addr = gen_pool_alloc(gp_dma, size);
> +	while (!addr) {
> +		chunk_size = round_up(size, PAGE_SIZE);
> +		addr = (unsigned long) dma_alloc_coherent(dma_dev,
> +					 chunk_size, &dma_addr, CIO_DMA_GFP);
> +		if (!addr)
> +			return NULL;
> +		gen_pool_add_virt(gp_dma, addr, dma_addr, chunk_size, -1);
> +		addr = gen_pool_alloc(gp_dma, size);
> +	}
> +	return (void *) addr;
> +}
> +
> +void cio_gp_dma_free(struct gen_pool *gp_dma, void *cpu_addr, size_t size)
> +{
> +	if (!cpu_addr)
> +		return;
> +	memset(cpu_addr, 0, size);
> +	gen_pool_free(gp_dma, (unsigned long) cpu_addr, size);
> +}
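
(For reference, a caller would then pair these two roughly like this --
`my_cb' is a hypothetical per-device control block, untested:)

	struct my_cb *cb;

	cb = cio_gp_dma_zalloc(gp_dma, dma_dev, sizeof(*cb));
	if (!cb)
		return -ENOMEM;
	/* ... use *cb for communication with the hypervisor ... */
	cio_gp_dma_free(gp_dma, cb, sizeof(*cb));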
> +
> +/**
> + * Allocate dma memory from the css global pool. Intended for memory not
> + * specific to any single device within the css. The allocated memory
> + * is not guaranteed to be 31-bit addressable.
> + *
> + * Caution: Not suitable for early stuff like console.
> + *
> + */
> +void *cio_dma_zalloc(size_t size)
> +{
> +	return cio_gp_dma_zalloc(cio_dma_pool, cio_get_dma_css_dev(), size);

Ok, so the failure I mentioned above looks like it is accommodated by
the code after all: cio_gp_dma_zalloc() simply grows the pool on demand
if the initial allocation fell short. Still, I think it's a bit odd.

> +}
