public inbox for linux-kernel@vger.kernel.org
* [PATCH 0/3] blk-mq: Minor tweaks
@ 2014-09-03 20:33 Alexander Gordeev
  2014-09-03 20:33 ` [PATCH 1/3] blk-mq: Cleanup blk_mq_tag_busy() and blk_mq_tag_idle() Alexander Gordeev
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Alexander Gordeev @ 2014-09-03 20:33 UTC (permalink / raw)
  To: linux-kernel; +Cc: Alexander Gordeev, Jens Axboe

Hi Jens,

These are a few changes to blk-mq/blk-mq-tag, nothing really serious.

Cc: Jens Axboe <axboe@kernel.dk>

Alexander Gordeev (3):
  blk-mq: Cleanup blk_mq_tag_busy() and blk_mq_tag_idle()
  blk-mq: Eliminate superfluous check of BLK_MQ_S_TAG_ACTIVE flag
  blk-mq: Fix formula to calculate fair share of tags

 block/blk-mq-tag.c |  9 +++------
 block/blk-mq-tag.h | 17 ++++++++---------
 2 files changed, 11 insertions(+), 15 deletions(-)

-- 
1.9.3


^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH 1/3] blk-mq: Cleanup blk_mq_tag_busy() and blk_mq_tag_idle()
  2014-09-03 20:33 [PATCH 0/3] blk-mq: Minor tweaks Alexander Gordeev
@ 2014-09-03 20:33 ` Alexander Gordeev
  2014-09-03 20:35   ` Jens Axboe
  2014-09-03 20:33 ` [PATCH 2/3] blk-mq: Eliminate superfluous check of BLK_MQ_S_TAG_ACTIVE flag Alexander Gordeev
  2014-09-03 20:33 ` [PATCH 3/3] blk-mq: Fix formula to calculate fair share of tags Alexander Gordeev
  2 siblings, 1 reply; 10+ messages in thread
From: Alexander Gordeev @ 2014-09-03 20:33 UTC (permalink / raw)
  To: linux-kernel; +Cc: Alexander Gordeev, Jens Axboe

Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>
---
 block/blk-mq-tag.c |  4 +---
 block/blk-mq-tag.h | 17 ++++++++---------
 2 files changed, 9 insertions(+), 12 deletions(-)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index c1b9242..4953b64 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -58,13 +58,11 @@ static inline void bt_index_atomic_inc(atomic_t *index)
 /*
  * If a previously inactive queue goes active, bump the active user count.
  */
-bool __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
+void __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
 {
 	if (!test_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state) &&
 	    !test_and_set_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state))
 		atomic_inc(&hctx->tags->active_queues);
-
-	return true;
 }
 
 /*
diff --git a/block/blk-mq-tag.h b/block/blk-mq-tag.h
index 6206ed1..795ec3f 100644
--- a/block/blk-mq-tag.h
+++ b/block/blk-mq-tag.h
@@ -66,23 +66,22 @@ enum {
 	BLK_MQ_TAG_MAX		= BLK_MQ_TAG_FAIL - 1,
 };
 
-extern bool __blk_mq_tag_busy(struct blk_mq_hw_ctx *);
+extern void __blk_mq_tag_busy(struct blk_mq_hw_ctx *);
 extern void __blk_mq_tag_idle(struct blk_mq_hw_ctx *);
 
 static inline bool blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
 {
-	if (!(hctx->flags & BLK_MQ_F_TAG_SHARED))
-		return false;
-
-	return __blk_mq_tag_busy(hctx);
+	if (hctx->flags & BLK_MQ_F_TAG_SHARED) {
+		__blk_mq_tag_busy(hctx);
+		return true;
+	}
+	return false;
 }
 
 static inline void blk_mq_tag_idle(struct blk_mq_hw_ctx *hctx)
 {
-	if (!(hctx->flags & BLK_MQ_F_TAG_SHARED))
-		return;
-
-	__blk_mq_tag_idle(hctx);
+	if (hctx->flags & BLK_MQ_F_TAG_SHARED)
+		__blk_mq_tag_idle(hctx);
 }
 
 #endif
-- 
1.9.3



* [PATCH 2/3] blk-mq: Eliminate superfluous check of BLK_MQ_S_TAG_ACTIVE flag
  2014-09-03 20:33 [PATCH 0/3] blk-mq: Minor tweaks Alexander Gordeev
  2014-09-03 20:33 ` [PATCH 1/3] blk-mq: Cleanup blk_mq_tag_busy() and blk_mq_tag_idle() Alexander Gordeev
@ 2014-09-03 20:33 ` Alexander Gordeev
  2014-09-03 20:40   ` Jens Axboe
  2014-09-03 20:33 ` [PATCH 3/3] blk-mq: Fix formula to calculate fair share of tags Alexander Gordeev
  2 siblings, 1 reply; 10+ messages in thread
From: Alexander Gordeev @ 2014-09-03 20:33 UTC (permalink / raw)
  To: linux-kernel; +Cc: Alexander Gordeev, Jens Axboe

Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>
---
 block/blk-mq-tag.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 4953b64..d1eb579 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -60,8 +60,7 @@ static inline void bt_index_atomic_inc(atomic_t *index)
  */
 void __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
 {
-	if (!test_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state) &&
-	    !test_and_set_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state))
+	if (!test_and_set_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state))
 		atomic_inc(&hctx->tags->active_queues);
 }
 
-- 
1.9.3



* [PATCH 3/3] blk-mq: Fix formula to calculate fair share of tags
  2014-09-03 20:33 [PATCH 0/3] blk-mq: Minor tweaks Alexander Gordeev
  2014-09-03 20:33 ` [PATCH 1/3] blk-mq: Cleanup blk_mq_tag_busy() and blk_mq_tag_idle() Alexander Gordeev
  2014-09-03 20:33 ` [PATCH 2/3] blk-mq: Eliminate superfluous check of BLK_MQ_S_TAG_ACTIVE flag Alexander Gordeev
@ 2014-09-03 20:33 ` Alexander Gordeev
  2014-09-03 20:43   ` Jens Axboe
  2 siblings, 1 reply; 10+ messages in thread
From: Alexander Gordeev @ 2014-09-03 20:33 UTC (permalink / raw)
  To: linux-kernel; +Cc: Alexander Gordeev, Jens Axboe

Fair share of tags is the number of tags divided by the
number of users. Not sure why it is different.

Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>
---
 block/blk-mq-tag.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index d1eb579..1b9c949 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -127,7 +127,7 @@ static inline bool hctx_may_queue(struct blk_mq_hw_ctx *hctx,
 	/*
 	 * Allow at least some tags
 	 */
-	depth = max((bt->depth + users - 1) / users, 4U);
+	depth = max(bt->depth / users, 4U);
 	return atomic_read(&hctx->nr_active) < depth;
 }
 
-- 
1.9.3



* Re: [PATCH 1/3] blk-mq: Cleanup blk_mq_tag_busy() and blk_mq_tag_idle()
  2014-09-03 20:33 ` [PATCH 1/3] blk-mq: Cleanup blk_mq_tag_busy() and blk_mq_tag_idle() Alexander Gordeev
@ 2014-09-03 20:35   ` Jens Axboe
  2014-09-05  1:26     ` Chuck Ebbert
  0 siblings, 1 reply; 10+ messages in thread
From: Jens Axboe @ 2014-09-03 20:35 UTC (permalink / raw)
  To: Alexander Gordeev, linux-kernel

On 09/03/2014 02:33 PM, Alexander Gordeev wrote:
> Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
> Cc: Jens Axboe <axboe@kernel.dk>
> ---
>  block/blk-mq-tag.c |  4 +---
>  block/blk-mq-tag.h | 17 ++++++++---------
>  2 files changed, 9 insertions(+), 12 deletions(-)
> 
> diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
> index c1b9242..4953b64 100644
> --- a/block/blk-mq-tag.c
> +++ b/block/blk-mq-tag.c
> @@ -58,13 +58,11 @@ static inline void bt_index_atomic_inc(atomic_t *index)
>  /*
>   * If a previously inactive queue goes active, bump the active user count.
>   */
> -bool __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
> +void __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
>  {
>  	if (!test_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state) &&
>  	    !test_and_set_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state))
>  		atomic_inc(&hctx->tags->active_queues);
> -
> -	return true;
>  }

This is obviously a good idea.

> diff --git a/block/blk-mq-tag.h b/block/blk-mq-tag.h
> index 6206ed1..795ec3f 100644
> --- a/block/blk-mq-tag.h
> +++ b/block/blk-mq-tag.h
> @@ -66,23 +66,22 @@ enum {
>  	BLK_MQ_TAG_MAX		= BLK_MQ_TAG_FAIL - 1,
>  };
>  
> -extern bool __blk_mq_tag_busy(struct blk_mq_hw_ctx *);
> +extern void __blk_mq_tag_busy(struct blk_mq_hw_ctx *);
>  extern void __blk_mq_tag_idle(struct blk_mq_hw_ctx *);
>  
>  static inline bool blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
>  {
> -	if (!(hctx->flags & BLK_MQ_F_TAG_SHARED))
> -		return false;
> -
> -	return __blk_mq_tag_busy(hctx);
> +	if (hctx->flags & BLK_MQ_F_TAG_SHARED) {
> +		__blk_mq_tag_busy(hctx);
> +		return true;
> +	}
> +	return false;
>  }

The normal/fast path here is the flag NOT being set, which is why it was
coded that way to put the fast path inline.

>  
>  static inline void blk_mq_tag_idle(struct blk_mq_hw_ctx *hctx)
>  {
> -	if (!(hctx->flags & BLK_MQ_F_TAG_SHARED))
> -		return;
> -
> -	__blk_mq_tag_idle(hctx);
> +	if (hctx->flags & BLK_MQ_F_TAG_SHARED)
> +		__blk_mq_tag_idle(hctx);
>  }

Ditto


-- 
Jens Axboe



* Re: [PATCH 2/3] blk-mq: Eliminate superfluous check of BLK_MQ_S_TAG_ACTIVE flag
  2014-09-03 20:33 ` [PATCH 2/3] blk-mq: Eliminate superfluous check of BLK_MQ_S_TAG_ACTIVE flag Alexander Gordeev
@ 2014-09-03 20:40   ` Jens Axboe
  0 siblings, 0 replies; 10+ messages in thread
From: Jens Axboe @ 2014-09-03 20:40 UTC (permalink / raw)
  To: Alexander Gordeev, linux-kernel

On 09/03/2014 02:33 PM, Alexander Gordeev wrote:
> Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
> Cc: Jens Axboe <axboe@kernel.dk>
> ---
>  block/blk-mq-tag.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
> index 4953b64..d1eb579 100644
> --- a/block/blk-mq-tag.c
> +++ b/block/blk-mq-tag.c
> @@ -60,8 +60,7 @@ static inline void bt_index_atomic_inc(atomic_t *index)
>   */
>  void __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
>  {
> -	if (!test_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state) &&
> -	    !test_and_set_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state))
> +	if (!test_and_set_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state))
>  		atomic_inc(&hctx->tags->active_queues);

That's done on purpose as well, it's a lot faster to do the (non-atomic)
check first and only do the expensive test and set if needed.

See commit 7fcbbaf18392 for a similar construct and actual numbers from a real case.

-- 
Jens Axboe



* Re: [PATCH 3/3] blk-mq: Fix formula to calculate fair share of tags
  2014-09-03 20:33 ` [PATCH 3/3] blk-mq: Fix formula to calculate fair share of tags Alexander Gordeev
@ 2014-09-03 20:43   ` Jens Axboe
  0 siblings, 0 replies; 10+ messages in thread
From: Jens Axboe @ 2014-09-03 20:43 UTC (permalink / raw)
  To: Alexander Gordeev, linux-kernel

On 09/03/2014 02:33 PM, Alexander Gordeev wrote:
> Fair share of tags is the number of tags divided by the
> number of users. Not sure why it is different.
> 
> Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
> Cc: Jens Axboe <axboe@kernel.dk>
> ---
>  block/blk-mq-tag.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
> index d1eb579..1b9c949 100644
> --- a/block/blk-mq-tag.c
> +++ b/block/blk-mq-tag.c
> @@ -127,7 +127,7 @@ static inline bool hctx_may_queue(struct blk_mq_hw_ctx *hctx,
>  	/*
>  	 * Allow at least some tags
>  	 */
> -	depth = max((bt->depth + users - 1) / users, 4U);
> +	depth = max(bt->depth / users, 4U);
>  	return atomic_read(&hctx->nr_active) < depth;

It's normal rounding, you'll round down otherwise. Say you have a tag
depth of 31 (SATA), and 4 active users. Your change would make that 7
tags per user, leaving 3 idle. If you round up, you end up with 8 tags
instead. That will potentially just leave some sleeping for a new tag,
but at least you'll exhaust the space.

-- 
Jens Axboe



* Re: [PATCH 1/3] blk-mq: Cleanup blk_mq_tag_busy() and blk_mq_tag_idle()
  2014-09-03 20:35   ` Jens Axboe
@ 2014-09-05  1:26     ` Chuck Ebbert
  2014-09-05  1:30       ` Jens Axboe
  0 siblings, 1 reply; 10+ messages in thread
From: Chuck Ebbert @ 2014-09-05  1:26 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Alexander Gordeev, linux-kernel

On Wed, 03 Sep 2014 14:35:29 -0600
Jens Axboe <axboe@kernel.dk> wrote:

> On 09/03/2014 02:33 PM, Alexander Gordeev wrote:
<snip>
> > diff --git a/block/blk-mq-tag.h b/block/blk-mq-tag.h
> > index 6206ed1..795ec3f 100644
> > --- a/block/blk-mq-tag.h
> > +++ b/block/blk-mq-tag.h
> > @@ -66,23 +66,22 @@ enum {
> >  	BLK_MQ_TAG_MAX		= BLK_MQ_TAG_FAIL - 1,
> >  };
> >  
> > -extern bool __blk_mq_tag_busy(struct blk_mq_hw_ctx *);
> > +extern void __blk_mq_tag_busy(struct blk_mq_hw_ctx *);
> >  extern void __blk_mq_tag_idle(struct blk_mq_hw_ctx *);
> >  
> >  static inline bool blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
> >  {
> > -	if (!(hctx->flags & BLK_MQ_F_TAG_SHARED))
> > -		return false;
> > -
> > -	return __blk_mq_tag_busy(hctx);
> > +	if (hctx->flags & BLK_MQ_F_TAG_SHARED) {
> > +		__blk_mq_tag_busy(hctx);
> > +		return true;
> > +	}
> > +	return false;
> >  }
> 
> The normal/fast path here is the flag NOT being set, which is why it
> was coded that way to put the fast path inline.
> 
> >  
> >  static inline void blk_mq_tag_idle(struct blk_mq_hw_ctx *hctx)
> >  {
> > -	if (!(hctx->flags & BLK_MQ_F_TAG_SHARED))
> > -		return;
> > -
> > -	__blk_mq_tag_idle(hctx);
> > +	if (hctx->flags & BLK_MQ_F_TAG_SHARED)
> > +		__blk_mq_tag_idle(hctx);
> >  }
> 
> Ditto

Shouldn't it just add unlikely() then? That way it's obvious what the
common case is, instead of relying on convoluted code.


* Re: [PATCH 1/3] blk-mq: Cleanup blk_mq_tag_busy() and blk_mq_tag_idle()
  2014-09-05  1:26     ` Chuck Ebbert
@ 2014-09-05  1:30       ` Jens Axboe
  2014-09-05  1:58         ` Chuck Ebbert
  0 siblings, 1 reply; 10+ messages in thread
From: Jens Axboe @ 2014-09-05  1:30 UTC (permalink / raw)
  To: Chuck Ebbert; +Cc: Alexander Gordeev, linux-kernel

On 09/04/2014 07:26 PM, Chuck Ebbert wrote:
> On Wed, 03 Sep 2014 14:35:29 -0600
> Jens Axboe <axboe@kernel.dk> wrote:
> 
>> On 09/03/2014 02:33 PM, Alexander Gordeev wrote:
> <snip>
>>> diff --git a/block/blk-mq-tag.h b/block/blk-mq-tag.h
>>> index 6206ed1..795ec3f 100644
>>> --- a/block/blk-mq-tag.h
>>> +++ b/block/blk-mq-tag.h
>>> @@ -66,23 +66,22 @@ enum {
>>>  	BLK_MQ_TAG_MAX		= BLK_MQ_TAG_FAIL - 1,
>>>  };
>>>  
>>> -extern bool __blk_mq_tag_busy(struct blk_mq_hw_ctx *);
>>> +extern void __blk_mq_tag_busy(struct blk_mq_hw_ctx *);
>>>  extern void __blk_mq_tag_idle(struct blk_mq_hw_ctx *);
>>>  
>>>  static inline bool blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
>>>  {
>>> -	if (!(hctx->flags & BLK_MQ_F_TAG_SHARED))
>>> -		return false;
>>> -
>>> -	return __blk_mq_tag_busy(hctx);
>>> +	if (hctx->flags & BLK_MQ_F_TAG_SHARED) {
>>> +		__blk_mq_tag_busy(hctx);
>>> +		return true;
>>> +	}
>>> +	return false;
>>>  }
>>
>> The normal/fast path here is the flag NOT being set, which is why it
>> was coded that way to put the fast path inline.
>>
>>>  
>>>  static inline void blk_mq_tag_idle(struct blk_mq_hw_ctx *hctx)
>>>  {
>>> -	if (!(hctx->flags & BLK_MQ_F_TAG_SHARED))
>>> -		return;
>>> -
>>> -	__blk_mq_tag_idle(hctx);
>>> +	if (hctx->flags & BLK_MQ_F_TAG_SHARED)
>>> +		__blk_mq_tag_idle(hctx);
>>>  }
>>
>> Ditto
> 
> Shouldn't it just add unlikely() then? That way it's obvious what the
> common case is, instead of relying on convoluted code.

It's a common construct. Besides, if you find a flag-not-set check
convoluted, then I hope you are not programming anything I use. That's a
bit of a straw man, imho.

-- 
Jens Axboe



* Re: [PATCH 1/3] blk-mq: Cleanup blk_mq_tag_busy() and blk_mq_tag_idle()
  2014-09-05  1:30       ` Jens Axboe
@ 2014-09-05  1:58         ` Chuck Ebbert
  0 siblings, 0 replies; 10+ messages in thread
From: Chuck Ebbert @ 2014-09-05  1:58 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Alexander Gordeev, linux-kernel

On Thu, 04 Sep 2014 19:30:18 -0600
Jens Axboe <axboe@kernel.dk> wrote:

> On 09/04/2014 07:26 PM, Chuck Ebbert wrote:
> > On Wed, 03 Sep 2014 14:35:29 -0600
> > Jens Axboe <axboe@kernel.dk> wrote:
> > 
> >> On 09/03/2014 02:33 PM, Alexander Gordeev wrote:
> > <snip>
> >>> diff --git a/block/blk-mq-tag.h b/block/blk-mq-tag.h
> >>> index 6206ed1..795ec3f 100644
> >>> --- a/block/blk-mq-tag.h
> >>> +++ b/block/blk-mq-tag.h
> >>> @@ -66,23 +66,22 @@ enum {
> >>>  	BLK_MQ_TAG_MAX		= BLK_MQ_TAG_FAIL - 1,
> >>>  };
> >>>  
> >>> -extern bool __blk_mq_tag_busy(struct blk_mq_hw_ctx *);
> >>> +extern void __blk_mq_tag_busy(struct blk_mq_hw_ctx *);
> >>>  extern void __blk_mq_tag_idle(struct blk_mq_hw_ctx *);
> >>>  
> >>>  static inline bool blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
> >>>  {
> >>> -	if (!(hctx->flags & BLK_MQ_F_TAG_SHARED))
> >>> -		return false;
> >>> -
> >>> -	return __blk_mq_tag_busy(hctx);
> >>> +	if (hctx->flags & BLK_MQ_F_TAG_SHARED) {
> >>> +		__blk_mq_tag_busy(hctx);
> >>> +		return true;
> >>> +	}
> >>> +	return false;
> >>>  }
> >>
> >> The normal/fast path here is the flag NOT being set, which is why
> >> it was coded that way to put the fast path inline.
> >>
> >>>  
> >>>  static inline void blk_mq_tag_idle(struct blk_mq_hw_ctx *hctx)
> >>>  {
> >>> -	if (!(hctx->flags & BLK_MQ_F_TAG_SHARED))
> >>> -		return;
> >>> -
> >>> -	__blk_mq_tag_idle(hctx);
> >>> +	if (hctx->flags & BLK_MQ_F_TAG_SHARED)
> >>> +		__blk_mq_tag_idle(hctx);
> >>>  }
> >>
> >> Ditto
> > 
> > Shouldn't it just add unlikely() then? That way it's obvious what
> > the common case is, instead of relying on convoluted code.
> 
> It's a common construct. Besides, if you find a flag-not-set check
> convoluted, then I hope you are not programming anything I use.
> That's a bit of a straw man, imho.
> 

Sure, it's a common construct. But there's nothing there to prevent the
optimizer from rearranging things any way it pleases. Nor is there
anything keeping a human from doing the same. ;)

