public inbox for linux-rdma@vger.kernel.org
* [bug report] RDMA/rxe: Remove RXE_POOL_ATOMIC
@ 2021-08-11  8:54 Dan Carpenter
  2021-08-13 19:57 ` Bob Pearson
  0 siblings, 1 reply; 2+ messages in thread
From: Dan Carpenter @ 2021-08-11  8:54 UTC (permalink / raw)
  To: rpearsonhpe; +Cc: linux-rdma

Hello Bob Pearson,

The patch 4276fd0dddc9: "RDMA/rxe: Remove RXE_POOL_ATOMIC" from Jan
25, 2021, leads to the following
Smatch static checker warning:

	drivers/infiniband/sw/rxe/rxe_pool.c:362 rxe_alloc()
	warn: sleeping in atomic context

drivers/infiniband/sw/rxe/rxe_pool.c
    353 void *rxe_alloc(struct rxe_pool *pool)
    354 {
    355 	struct rxe_type_info *info = &rxe_type_info[pool->type];
    356 	struct rxe_pool_entry *elem;
    357 	u8 *obj;
    358 
    359 	if (atomic_inc_return(&pool->num_elem) > pool->max_elem)
    360 		goto out_cnt;
    361 
--> 362 	obj = kzalloc(info->size, GFP_KERNEL);
                                          ^^^^^^^^^^
It's possible the patch just exposed a bug instead of introducing it,
but rxe_mcast_add_grp_elem() calls rxe_alloc() with spinlocks held, so
we can't sleep.

    363 	if (!obj)
    364 		goto out_cnt;
    365 
    366 	elem = (struct rxe_pool_entry *)(obj + info->elem_offset);
    367 
    368 	elem->pool = pool;
    369 	kref_init(&elem->ref_cnt);
    370 
    371 	return obj;
    372 
    373 out_cnt:
    374 	atomic_dec(&pool->num_elem);
    375 	return NULL;
    376 }

regards,
dan carpenter


* Re: [bug report] RDMA/rxe: Remove RXE_POOL_ATOMIC
  2021-08-11  8:54 [bug report] RDMA/rxe: Remove RXE_POOL_ATOMIC Dan Carpenter
@ 2021-08-13 19:57 ` Bob Pearson
  0 siblings, 0 replies; 2+ messages in thread
From: Bob Pearson @ 2021-08-13 19:57 UTC (permalink / raw)
  To: Dan Carpenter; +Cc: linux-rdma

On 8/11/21 3:54 AM, Dan Carpenter wrote:
> Hello Bob Pearson,
> 
> The patch 4276fd0dddc9: "RDMA/rxe: Remove RXE_POOL_ATOMIC" from Jan
> 25, 2021, leads to the following
> Smatch static checker warning:
> 
> 	drivers/infiniband/sw/rxe/rxe_pool.c:362 rxe_alloc()
> 	warn: sleeping in atomic context
> 
> drivers/infiniband/sw/rxe/rxe_pool.c
>     353 void *rxe_alloc(struct rxe_pool *pool)
>     354 {
>     355 	struct rxe_type_info *info = &rxe_type_info[pool->type];
>     356 	struct rxe_pool_entry *elem;
>     357 	u8 *obj;
>     358 
>     359 	if (atomic_inc_return(&pool->num_elem) > pool->max_elem)
>     360 		goto out_cnt;
>     361 
> --> 362 	obj = kzalloc(info->size, GFP_KERNEL);
>                                           ^^^^^^^^^^
> It's possible the patch just exposed a bug instead of introducing it,
> but rxe_mcast_add_grp_elem() calls rxe_alloc() with spinlocks held, so
> we can't sleep.
> 
>     363 	if (!obj)
>     364 		goto out_cnt;
>     365 
>     366 	elem = (struct rxe_pool_entry *)(obj + info->elem_offset);
>     367 
>     368 	elem->pool = pool;
>     369 	kref_init(&elem->ref_cnt);
>     370 
>     371 	return obj;
>     372 
>     373 out_cnt:
>     374 	atomic_dec(&pool->num_elem);
>     375 	return NULL;
>     376 }
> 
> regards,
> dan carpenter
> 

Dan,

That should have been rxe_alloc_locked(), which uses the GFP_ATOMIC
flag. Slowly but surely the rxe object allocations have been moving
into rdma/core, so only three are left: mcast groups, mcast elements,
and MRs. Only the first two are allocated in IRQ context or with
locks held. I'll submit a fix. Thanks for finding this.
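For context, the shape of the fix Bob describes would presumably look something like the sketch below (not the actual patch; the real rxe_alloc_locked() in the tree may also handle index/key bookkeeping). It mirrors rxe_alloc() but swaps the allocation flag so it is safe under a spinlock:

```c
/* Sketch only: a non-sleeping variant for callers that hold a
 * spinlock, such as the rxe multicast path. GFP_ATOMIC allocations
 * never sleep, at the cost of a higher chance of failure. */
void *rxe_alloc_locked(struct rxe_pool *pool)
{
	struct rxe_type_info *info = &rxe_type_info[pool->type];
	struct rxe_pool_entry *elem;
	u8 *obj;

	if (atomic_inc_return(&pool->num_elem) > pool->max_elem)
		goto out_cnt;

	obj = kzalloc(info->size, GFP_ATOMIC);	/* was GFP_KERNEL */
	if (!obj)
		goto out_cnt;

	elem = (struct rxe_pool_entry *)(obj + info->elem_offset);
	elem->pool = pool;
	kref_init(&elem->ref_cnt);

	return obj;

out_cnt:
	atomic_dec(&pool->num_elem);
	return NULL;
}
```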

Bob

