linux-raid.vger.kernel.org archive mirror
From: Yuri Tikhonov <yur@emcraft.com>
To: Dan Williams <dan.j.williams@intel.com>
Cc: linux-raid@vger.kernel.org
Subject: Re: [PATCH 2.6.21-rc4 01/15] dmaengine: add base support for the async_tx api
Date: Fri, 23 Mar 2007 17:18:01 +0300	[thread overview]
Message-ID: <200703231718.01770.yur@emcraft.com> (raw)
In-Reply-To: <20070323065142.15570.98231.stgit@dwillia2-linux.ch.intel.com>

 Hi Dan,

 The functions dma_async_memcpy_buf_to_buf(), dma_async_memcpy_buf_to_pg() and 
dma_async_memcpy_pg_to_pg() are practically identical. Maybe it makes sense 
to extract the common part into a separate inline function? Something like 
this:

static inline dma_cookie_t dma_async_memcpy(struct dma_chan *chan,
	dma_addr_t src, dma_addr_t dst, size_t len)
{
	struct dma_device *dev = chan->device;
	struct dma_async_tx_descriptor *tx;
	dma_cookie_t cookie;
	int cpu;

	tx = dev->device_prep_dma_memcpy(chan, len, 0);
	if (!tx)
		return -ENOMEM;

	tx->ack = 1;
	tx->callback = NULL;
	dev->device_set_src(src, tx, 0);
	dev->device_set_dest(dst, tx, 0);
	cookie = dev->device_tx_submit(tx);

	cpu = get_cpu();
	per_cpu_ptr(chan->local, cpu)->bytes_transferred += len;
	per_cpu_ptr(chan->local, cpu)->memcpy_count++;
	put_cpu();

	return cookie;
}

dma_cookie_t dma_async_memcpy_buf_to_buf(struct dma_chan *chan,
        void *dest, void *src, size_t len)
{
	dma_addr_t dsrc, ddst;

	dsrc = dma_map_single(chan->device->dev, src, len, DMA_TO_DEVICE);
	ddst = dma_map_single(chan->device->dev, dest, len, DMA_FROM_DEVICE);

	return dma_async_memcpy(chan, dsrc, ddst, len);
}

dma_cookie_t dma_async_memcpy_buf_to_pg(struct dma_chan *chan,
        struct page *page, unsigned int offset, void *kdata, size_t len)
{
	dma_addr_t dsrc, ddst;

	dsrc = dma_map_single(chan->device->dev, kdata, len, DMA_TO_DEVICE);
	ddst = dma_map_page(chan->device->dev, page, offset, len, DMA_FROM_DEVICE);

	return dma_async_memcpy(chan, dsrc, ddst, len);
}

dma_cookie_t dma_async_memcpy_pg_to_pg(struct dma_chan *chan,
        struct page *dest_pg, unsigned int dest_off, struct page *src_pg,
        unsigned int src_off, size_t len)
{
	dma_addr_t dsrc, ddst;

	dsrc = dma_map_page(chan->device->dev, src_pg, src_off, len, DMA_TO_DEVICE);
	ddst = dma_map_page(chan->device->dev, dest_pg, dest_off, len,
			    DMA_FROM_DEVICE);

	return dma_async_memcpy(chan, dsrc, ddst, len);
}


 Regards, Yuri.

On Friday 23 March 2007 09:51, you wrote:
> The async_tx api provides methods for describing a chain of asynchronous
> ...
> diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
> index 322ee29..2285f33 100644
> --- a/drivers/dma/dmaengine.c
> +++ b/drivers/dma/dmaengine.c
> ...
> +/**
> + * dma_async_memcpy_buf_to_buf - offloaded copy between virtual addresses
> + * @chan: DMA channel to offload copy to
> + * @dest: destination address (virtual)
> + * @src: source address (virtual)
> + * @len: length
> + *
> + * Both @dest and @src must be mappable to a bus address according to the
> + * DMA mapping API rules for streaming mappings.
> + * Both @dest and @src must stay memory resident (kernel memory or locked
> + * user space pages).
> + */
> +dma_cookie_t dma_async_memcpy_buf_to_buf(struct dma_chan *chan,
> +        void *dest, void *src, size_t len)
> +{
> +	struct dma_device *dev = chan->device;
> +	struct dma_async_tx_descriptor *tx;
> +	dma_addr_t addr;
> +	dma_cookie_t cookie;
> +	int cpu;
> +
> +	tx = dev->device_prep_dma_memcpy(chan, len, 0);
> +	if (!tx)
> +		return -ENOMEM;
> +
> +	tx->ack = 1;
> +	tx->callback = NULL;
> +	addr = dma_map_single(dev->dev, src, len, DMA_TO_DEVICE);
> +	dev->device_set_src(addr, tx, 0);
> +	addr = dma_map_single(dev->dev, dest, len, DMA_FROM_DEVICE);
> +	dev->device_set_dest(addr, tx, 0);
> +	cookie = dev->device_tx_submit(tx);
> +
> +	cpu = get_cpu();
> +	per_cpu_ptr(chan->local, cpu)->bytes_transferred += len;
> +	per_cpu_ptr(chan->local, cpu)->memcpy_count++;
> +	put_cpu();
> +
> +	return cookie;
> +}
> +EXPORT_SYMBOL(dma_async_memcpy_buf_to_buf);
> +
> +/**
> + * dma_async_memcpy_buf_to_pg - offloaded copy from address to page
> + * @chan: DMA channel to offload copy to
> + * @page: destination page
> + * @offset: offset in page to copy to
> + * @kdata: source address (virtual)
> + * @len: length
> + *
> + * Both @page/@offset and @kdata must be mappable to a bus address according
> + * to the DMA mapping API rules for streaming mappings.
> + * Both @page/@offset and @kdata must stay memory resident (kernel memory or
> + * locked user space pages)
> + */
> +dma_cookie_t dma_async_memcpy_buf_to_pg(struct dma_chan *chan,
> +        struct page *page, unsigned int offset, void *kdata, size_t len)
> +{
> +	struct dma_device *dev = chan->device;
> +	struct dma_async_tx_descriptor *tx;
> +	dma_addr_t addr;
> +	dma_cookie_t cookie;
> +	int cpu;
> +
> +	tx = dev->device_prep_dma_memcpy(chan, len, 0);
> +	if (!tx)
> +		return -ENOMEM;
> +
> +	tx->ack = 1;
> +	tx->callback = NULL;
> +	addr = dma_map_single(dev->dev, kdata, len, DMA_TO_DEVICE);
> +	dev->device_set_src(addr, tx, 0);
> +	addr = dma_map_page(dev->dev, page, offset, len, DMA_FROM_DEVICE);
> +	dev->device_set_dest(addr, tx, 0);
> +	cookie = dev->device_tx_submit(tx);
> +
> +	cpu = get_cpu();
> +	per_cpu_ptr(chan->local, cpu)->bytes_transferred += len;
> +	per_cpu_ptr(chan->local, cpu)->memcpy_count++;
> +	put_cpu();
> +
> +	return cookie;
> +}
> +EXPORT_SYMBOL(dma_async_memcpy_buf_to_pg);
> +
> +/**
> + * dma_async_memcpy_pg_to_pg - offloaded copy from page to page
> + * @chan: DMA channel to offload copy to
> + * @dest_pg: destination page
> + * @dest_off: offset in page to copy to
> + * @src_pg: source page
> + * @src_off: offset in page to copy from
> + * @len: length
> + *
> + * Both @dest_page/@dest_off and @src_page/@src_off must be mappable to a bus
> + * address according to the DMA mapping API rules for streaming mappings.
> + * Both @dest_page/@dest_off and @src_page/@src_off must stay memory resident
> + * (kernel memory or locked user space pages).
> + */
> +dma_cookie_t dma_async_memcpy_pg_to_pg(struct dma_chan *chan,
> +        struct page *dest_pg, unsigned int dest_off, struct page *src_pg,
> +        unsigned int src_off, size_t len)
> +{
> +	struct dma_device *dev = chan->device;
> +	struct dma_async_tx_descriptor *tx;
> +	dma_addr_t addr;
> +	dma_cookie_t cookie;
> +	int cpu;
> +
> +	tx = dev->device_prep_dma_memcpy(chan, len, 0);
> +	if (!tx)
> +		return -ENOMEM;
> +
> +	tx->ack = 1;
> +	tx->callback = NULL;
> +	addr = dma_map_page(dev->dev, src_pg, src_off, len, DMA_TO_DEVICE);
> +	dev->device_set_src(addr, tx, 0);
> +	addr = dma_map_page(dev->dev, dest_pg, dest_off, len, DMA_FROM_DEVICE);
> +	dev->device_set_dest(addr, tx, 0);
> +	cookie = dev->device_tx_submit(tx);
> +
> +	cpu = get_cpu();
> +	per_cpu_ptr(chan->local, cpu)->bytes_transferred += len;
> +	per_cpu_ptr(chan->local, cpu)->memcpy_count++;
> +	put_cpu();
> +
> +	return cookie;
> +}
> +EXPORT_SYMBOL(dma_async_memcpy_pg_to_pg);
> ...

Thread overview: 18+ messages
2007-03-23  6:51 [PATCH 2.6.21-rc4 00/15] md raid5 acceleration and async_tx Dan Williams
2007-03-23  6:51 ` [PATCH 2.6.21-rc4 01/15] dmaengine: add base support for the async_tx api Dan Williams
2007-03-23 14:18   ` Yuri Tikhonov [this message]
2007-03-26 16:48     ` Dan Williams
2007-03-23  6:51 ` [PATCH 2.6.21-rc4 02/15] ARM: Add drivers/dma to arch/arm/Kconfig Dan Williams
2007-03-23  6:51 ` [PATCH 2.6.21-rc4 03/15] dmaengine: add the async_tx api Dan Williams
2007-03-23  6:51 ` [PATCH 2.6.21-rc4 04/15] md: add raid5_run_ops and support routines Dan Williams
2007-03-23  6:52 ` [PATCH 2.6.21-rc4 05/15] md: use raid5_run_ops for stripe cache operations Dan Williams
2007-03-23  6:52 ` [PATCH 2.6.21-rc4 06/15] md: move write operations to raid5_run_ops Dan Williams
2007-03-23  6:52 ` [PATCH 2.6.21-rc4 07/15] md: move raid5 compute block " Dan Williams
2007-03-23  6:52 ` [PATCH 2.6.21-rc4 08/15] md: move raid5 parity checks " Dan Williams
2007-03-23  6:52 ` [PATCH 2.6.21-rc4 09/15] md: satisfy raid5 read requests via raid5_run_ops Dan Williams
2007-03-23  6:52 ` [PATCH 2.6.21-rc4 10/15] md: use async_tx and raid5_run_ops for raid5 expansion operations Dan Williams
2007-03-23  6:52 ` [PATCH 2.6.21-rc4 11/15] md: move raid5 io requests to raid5_run_ops Dan Williams
2007-03-23  6:52 ` [PATCH 2.6.21-rc4 12/15] md: remove raid5 compute_block and compute_parity5 Dan Williams
2007-03-23  6:52 ` [PATCH 2.6.21-rc4 13/15] dmaengine: driver for the iop32x, iop33x, and iop13xx raid engines Dan Williams
2007-03-23  6:52 ` [PATCH 2.6.21-rc4 14/15] iop13xx: Surface the iop13xx adma units to the iop-adma driver Dan Williams
2007-03-23  6:52 ` [PATCH 2.6.21-rc4 15/15] iop3xx: Surface the iop3xx DMA and AAU " Dan Williams
