From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from mail333.us4.mandrillapp.com ([205.201.137.77]:37125 "EHLO
	mail333.us4.mandrillapp.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1752712AbcCAIQN (ORCPT );
	Tue, 1 Mar 2016 03:16:13 -0500
Received: from pmta03.dal05.mailchimp.com (127.0.0.1) by
	mail333.us4.mandrillapp.com id hql7sq174noi for ;
	Tue, 1 Mar 2016 08:15:55 +0000 (envelope-from )
From: 
Subject: Patch "async_tx: use GFP_NOWAIT rather than GFP_IO" has been added
	to the 3.14-stable tree
To: , , , ,
Cc: ,
Message-Id: <145681634397200@kroah.com>
Date: Tue, 01 Mar 2016 08:15:55 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Sender: stable-owner@vger.kernel.org
List-ID: 

This is a note to let you know that I've just added the patch titled

    async_tx: use GFP_NOWAIT rather than GFP_IO

to the 3.14-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     async_tx-use-gfp_nowait-rather-than-gfp_io.patch
and it can be found in the queue-3.14 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


>From b02bab6b0f928d49dbfb03e1e4e9dd43647623d7 Mon Sep 17 00:00:00 2001
From: NeilBrown <neilb@suse.com>
Date: Thu, 7 Jan 2016 11:02:34 +1100
Subject: async_tx: use GFP_NOWAIT rather than GFP_IO

From: NeilBrown <neilb@suse.com>

commit b02bab6b0f928d49dbfb03e1e4e9dd43647623d7 upstream.

These async_XX functions are called from md/raid5 in an atomic section,
between get_cpu() and put_cpu(), so they must not sleep.  So use
GFP_NOWAIT rather than GFP_IO.

Dan Williams writes: Longer term async_tx needs to be merged into md
directly as we can allocate this unmap data statically per-stripe rather
than per request.
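As an aside for anyone reviewing the backport, a minimal sketch (not part
of the patch) of the calling pattern described above.  The caller below is
hypothetical; get_cpu()/put_cpu(), dmaengine_get_unmap_data() and
dmaengine_unmap_put() are the real interfaces this patch touches:

#include <linux/dmaengine.h>	/* dmaengine_get_unmap_data(), dmaengine_unmap_put() */
#include <linux/smp.h>		/* get_cpu(), put_cpu() */

/* Hypothetical caller, mirroring the md/raid5 pattern described above. */
static void example_atomic_caller(struct dma_device *device)
{
	struct dmaengine_unmap_data *unmap = NULL;

	get_cpu();	/* disables preemption: we are now in an atomic section */

	/*
	 * GFP_NOIO may enter direct reclaim and sleep, which is not
	 * allowed with preemption disabled.  GFP_NOWAIT returns NULL
	 * immediately under memory pressure instead of sleeping, and
	 * the async_tx functions changed below all fall back to a
	 * synchronous software path when unmap is NULL.
	 */
	unmap = dmaengine_get_unmap_data(device->dev, 2, GFP_NOWAIT);
	if (unmap)
		dmaengine_unmap_put(unmap);	/* drop the reference again */

	put_cpu();	/* re-enables preemption */
}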
Fixes: 7476bd79fc01 ("async_pq: convert to dmaengine_unmap_data")
Reported-and-tested-by: Stanislav Samsonov
Acked-by: Dan Williams
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Vinod Koul
Signed-off-by: Greg Kroah-Hartman

---
 crypto/async_tx/async_memcpy.c      |    2 +-
 crypto/async_tx/async_pq.c          |    4 ++--
 crypto/async_tx/async_raid6_recov.c |    4 ++--
 crypto/async_tx/async_xor.c         |    4 ++--
 4 files changed, 7 insertions(+), 7 deletions(-)

--- a/crypto/async_tx/async_memcpy.c
+++ b/crypto/async_tx/async_memcpy.c
@@ -53,7 +53,7 @@ async_memcpy(struct page *dest, struct p
 	struct dmaengine_unmap_data *unmap = NULL;
 
 	if (device)
-		unmap = dmaengine_get_unmap_data(device->dev, 2, GFP_NOIO);
+		unmap = dmaengine_get_unmap_data(device->dev, 2, GFP_NOWAIT);
 
 	if (unmap && is_dma_copy_aligned(device, src_offset, dest_offset, len)) {
 		unsigned long dma_prep_flags = 0;
--- a/crypto/async_tx/async_pq.c
+++ b/crypto/async_tx/async_pq.c
@@ -176,7 +176,7 @@ async_gen_syndrome(struct page **blocks,
 	BUG_ON(disks > 255 || !(P(blocks, disks) || Q(blocks, disks)));
 
 	if (device)
-		unmap = dmaengine_get_unmap_data(device->dev, disks, GFP_NOIO);
+		unmap = dmaengine_get_unmap_data(device->dev, disks, GFP_NOWAIT);
 
 	if (unmap &&
 	    (src_cnt <= dma_maxpq(device, 0) ||
@@ -294,7 +294,7 @@ async_syndrome_val(struct page **blocks,
 	BUG_ON(disks < 4);
 
 	if (device)
-		unmap = dmaengine_get_unmap_data(device->dev, disks, GFP_NOIO);
+		unmap = dmaengine_get_unmap_data(device->dev, disks, GFP_NOWAIT);
 
 	if (unmap && disks <= dma_maxpq(device, 0) &&
 	    is_dma_pq_aligned(device, offset, 0, len)) {
--- a/crypto/async_tx/async_raid6_recov.c
+++ b/crypto/async_tx/async_raid6_recov.c
@@ -41,7 +41,7 @@ async_sum_product(struct page *dest, str
 	u8 *a, *b, *c;
 
 	if (dma)
-		unmap = dmaengine_get_unmap_data(dma->dev, 3, GFP_NOIO);
+		unmap = dmaengine_get_unmap_data(dma->dev, 3, GFP_NOWAIT);
 
 	if (unmap) {
 		struct device *dev = dma->dev;
@@ -105,7 +105,7 @@ async_mult(struct page *dest, struct pag
 	u8 *d, *s;
 
 	if (dma)
-		unmap = dmaengine_get_unmap_data(dma->dev, 3, GFP_NOIO);
+		unmap = dmaengine_get_unmap_data(dma->dev, 3, GFP_NOWAIT);
 
 	if (unmap) {
 		dma_addr_t dma_dest[2];
--- a/crypto/async_tx/async_xor.c
+++ b/crypto/async_tx/async_xor.c
@@ -182,7 +182,7 @@ async_xor(struct page *dest, struct page
 	BUG_ON(src_cnt <= 1);
 
 	if (device)
-		unmap = dmaengine_get_unmap_data(device->dev, src_cnt+1, GFP_NOIO);
+		unmap = dmaengine_get_unmap_data(device->dev, src_cnt+1, GFP_NOWAIT);
 
 	if (unmap && is_dma_xor_aligned(device, offset, 0, len)) {
 		struct dma_async_tx_descriptor *tx;
@@ -278,7 +278,7 @@ async_xor_val(struct page *dest, struct
 	BUG_ON(src_cnt <= 1);
 
 	if (device)
-		unmap = dmaengine_get_unmap_data(device->dev, src_cnt, GFP_NOIO);
+		unmap = dmaengine_get_unmap_data(device->dev, src_cnt, GFP_NOWAIT);
 
 	if (unmap && src_cnt <= device->max_xor &&
 	    is_dma_xor_aligned(device, offset, 0, len)) {


Patches currently in stable-queue which might be from neilb@suse.com are

queue-3.14/async_tx-use-gfp_nowait-rather-than-gfp_io.patch