From: Dan Carpenter <dan.carpenter@oracle.com>
To: Wei Yongjun <weiyongjun1@huawei.com>
Cc: Jeff Dike <jdike@addtoit.com>,
	Richard Weinberger <richard@nod.at>,
	Anton Ivanov <anton.ivanov@cambridgegreys.com>,
	Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Martin KaFai Lau <kafai@fb.com>, Song Liu <songliubraving@fb.com>,
	linux-um@lists.infradead.org, netdev@vger.kernel.org,
	bpf@vger.kernel.org, kernel-janitors@vger.kernel.org
Subject: Re: [PATCH -next] um: vector: use GFP_ATOMIC under spin lock
Date: Thu, 28 Nov 2019 11:06:41 +0300
Message-ID: <20191128080641.GD1781@kadam>
In-Reply-To: <20191128020147.191893-1-weiyongjun1@huawei.com>

On Thu, Nov 28, 2019 at 02:01:47AM +0000, Wei Yongjun wrote:
> A spin lock is taken here so we should use GFP_ATOMIC.
> 
> Fixes: 9807019a62dc ("um: Loadable BPF "Firmware" for vector drivers")
> Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
> ---
>  arch/um/drivers/vector_kern.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/um/drivers/vector_kern.c b/arch/um/drivers/vector_kern.c
> index 92617e16829e..6ff0065a271d 100644
> --- a/arch/um/drivers/vector_kern.c
> +++ b/arch/um/drivers/vector_kern.c
> @@ -1402,7 +1402,7 @@ static int vector_net_load_bpf_flash(struct net_device *dev,
>  		kfree(vp->bpf->filter);
>  		vp->bpf->filter = NULL;
>  	} else {
> -		vp->bpf = kmalloc(sizeof(struct sock_fprog), GFP_KERNEL);
> +		vp->bpf = kmalloc(sizeof(struct sock_fprog), GFP_ATOMIC);
>  		if (vp->bpf == NULL) {
>  			netdev_err(dev, "failed to allocate memory for firmware\n");
>  			goto flash_fail;
> @@ -1414,7 +1414,7 @@ static int vector_net_load_bpf_flash(struct net_device *dev,
>  	if (request_firmware(&fw, efl->data, &vdevice->pdev.dev))
            ^^^^^^^^^^^^^^^^

Is it really possible to call request_firmware() while holding a
spin_lock?  I was fairly sure that it reads from the disk.

regards,
dan carpenter
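
Below is a minimal sketch of the problem being raised here, assuming a
hypothetical lock, device, and firmware name (none of these are the
actual vector_kern.c fields).  request_firmware() reads the firmware
file from the filesystem, i.e. it does blocking I/O and can sleep,
which is not allowed in the atomic context that a held spinlock
creates, regardless of which GFP_* flags the nearby allocations use:

#include <linux/device.h>
#include <linux/firmware.h>
#include <linux/spinlock.h>

/*
 * Sketch only: hypothetical lock and device, not the driver's real
 * fields.  spin_lock() enters atomic context, and request_firmware()
 * may sleep while it loads the file, so reaching it here is a bug.
 */
static int broken_load_under_lock(spinlock_t *lock, struct device *dev)
{
	const struct firmware *fw;
	int err;

	spin_lock(lock);		/* atomic context begins here */
	err = request_firmware(&fw, "example.bin", dev); /* may sleep: bug */
	if (!err)
		release_firmware(fw);
	spin_unlock(lock);
	return err;
}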

>  		goto flash_fail;
>  
> -	vp->bpf->filter = kmemdup(fw->data, fw->size, GFP_KERNEL);
> +	vp->bpf->filter = kmemdup(fw->data, fw->size, GFP_ATOMIC);
>  	if (!vp->bpf->filter)
>  		goto free_buffer;
> 
> 
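
Switching the two allocations to GFP_ATOMIC quietens the allocator
side but does nothing about the request_firmware() call itself.  The
usual shape of a fix is to do all of the sleeping work (the GFP_KERNEL
allocations and the firmware load) before taking the lock, and only
swap the new filter in under it.  A rough sketch of that pattern
follows; the struct and field names are assumed for illustration, and
this is not the patch that was eventually applied:

#include <linux/filter.h>
#include <linux/firmware.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/* Stand-in for the driver's private struct; field names are assumed. */
struct sketch_private {
	spinlock_t lock;
	struct sock_fprog *bpf;
};

static int load_bpf_flash_sketch(struct sketch_private *vp,
				 struct device *dev, const char *name)
{
	const struct firmware *fw;
	struct sock_fprog *new_bpf, *old_bpf;
	int err;

	/* All sleeping work happens with no lock held. */
	err = request_firmware(&fw, name, dev);
	if (err)
		return err;

	new_bpf = kmalloc(sizeof(*new_bpf), GFP_KERNEL);
	if (!new_bpf) {
		release_firmware(fw);
		return -ENOMEM;
	}
	new_bpf->len = fw->size / sizeof(struct sock_filter);
	new_bpf->filter = kmemdup(fw->data, fw->size, GFP_KERNEL);
	release_firmware(fw);
	if (!new_bpf->filter) {
		kfree(new_bpf);
		return -ENOMEM;
	}

	/* Only the pointer swap needs the lock, and it never sleeps. */
	spin_lock(&vp->lock);
	old_bpf = vp->bpf;
	vp->bpf = new_bpf;
	spin_unlock(&vp->lock);

	if (old_bpf)
		kfree(old_bpf->filter);
	kfree(old_bpf);
	return 0;
}

This keeps the critical section down to a pointer swap, so GFP_ATOMIC
is no longer needed at all.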

