* [PATCH] block: Make rq_affinity = 1 work as expected
From: Tao Ma <boyu.mt@taobao.com>
Date: 2011-08-05 4:39 UTC
To: linux-kernel
Cc: Christoph Hellwig, Roland Dreier, Dan Williams, Jens Axboe

Commit 5757a6d76c introduced a new rq_affinity = 2 so as to complete
the request on the __make_request cpu. But it makes the old
rq_affinity = 1 no longer work. The root cause is that if 'cpu' and
'req->cpu' are in the same group and cpu != req->cpu, ccpu will be the
same as group_cpu, so the completion will be executed on 'cpu', not
'group_cpu'.

This patch fixes the problem by simply removing group_cpu, and the
code is more explicit now. If ccpu == cpu, we complete on cpu;
otherwise we raise_blk_irq to ccpu.

Cc: Christoph Hellwig <hch@infradead.org>
Cc: Roland Dreier <roland@purestorage.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Jens Axboe <jaxboe@fusionio.com>
Signed-off-by: Tao Ma <boyu.mt@taobao.com>
---
 block/blk-softirq.c |    8 +++-----
 1 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/block/blk-softirq.c b/block/blk-softirq.c
index 475fab8..487addc 100644
--- a/block/blk-softirq.c
+++ b/block/blk-softirq.c
@@ -103,7 +103,7 @@ static struct notifier_block __cpuinitdata blk_cpu_notifier = {
 
 void __blk_complete_request(struct request *req)
 {
-	int ccpu, cpu, group_cpu = NR_CPUS;
+	int ccpu, cpu;
 	struct request_queue *q = req->q;
 	unsigned long flags;
 
@@ -117,14 +117,12 @@ void __blk_complete_request(struct request *req)
 	 */
 	if (test_bit(QUEUE_FLAG_SAME_COMP, &q->queue_flags) && req->cpu != -1) {
 		ccpu = req->cpu;
-		if (!test_bit(QUEUE_FLAG_SAME_FORCE, &q->queue_flags)) {
+		if (!test_bit(QUEUE_FLAG_SAME_FORCE, &q->queue_flags))
 			ccpu = blk_cpu_to_group(ccpu);
-			group_cpu = blk_cpu_to_group(cpu);
-		}
 	} else
 		ccpu = cpu;
 
-	if (ccpu == cpu || ccpu == group_cpu) {
+	if (ccpu == cpu) {
 		struct list_head *list;
 do_local:
 		list = &__get_cpu_var(blk_cpu_done);
-- 
1.6.3.3.334.g916e1.dirty
* Re: [PATCH] block: Make rq_affinity = 1 work as expected
From: Shaohua Li <shli@kernel.org>
Date: 2011-08-05 5:12 UTC

2011/8/5 Tao Ma <tm@tao.ma>:
> Commit 5757a6d76c introduced a new rq_affinity = 2 so as to complete
> the request on the __make_request cpu. But it makes the old
> rq_affinity = 1 no longer work.

Good catch. This changed the old behavior and could cause more lock
contention; and if the user doesn't care about lock contention, he
can use rq_affinity = 2.

Reviewed-by: Shaohua Li <shaohua.li@intel.com>
* Re: [PATCH] block: Make rq_affinity = 1 work as expected
From: Williams, Dan J <dan.j.williams@intel.com>
Date: 2011-08-05 21:26 UTC

On Thu, Aug 4, 2011 at 10:12 PM, Shaohua Li <shli@kernel.org> wrote:
> Good catch. This changed the old behavior and could cause more lock
> contention; and if the user doesn't care about lock contention, he
> can use rq_affinity = 2.

Indeed it does change the behavior of rq_affinity = 1, apologies.

Reviewed-by: Dan Williams <dan.j.williams@intel.com>
* Re: [PATCH] block: Make rq_affinity = 1 work as expected
From: Jens Axboe <jaxboe@fusionio.com>
Date: 2011-08-05 7:33 UTC

On 2011-08-05 06:39, Tao Ma wrote:
> This patch fixes the problem by simply removing group_cpu, and the
> code is more explicit now. If ccpu == cpu, we complete on cpu;
> otherwise we raise_blk_irq to ccpu.

Thanks Tao Ma, much more readable too.

-- 
Jens Axboe
* Re: [PATCH] block: Make rq_affinity = 1 work as expected
From: Shaohua Li <shli@kernel.org>
Date: 2011-08-08 2:58 UTC

2011/8/5 Jens Axboe <jaxboe@fusionio.com>:
> Thanks Tao Ma, much more readable too.

Hi Jens,
I rethought the problem while checking interrupts on my system. I
think we don't need Tao's patch, though it makes the code behave like
before. Let's take an example. My test box has CPUs 0-7, one socket.
Say a request is added on CPU 1 and blk_complete_request occurs on
CPU 7. Without Tao's patch, the softirq will be done on CPU 7. With
it, an IPI will be directed to CPU 0 and the softirq will be done on
CPU 0. In this case, doing the softirq on CPU 0 or on CPU 7 makes no
difference, and we can avoid an IPI by doing it on CPU 7.

We don't need to worry about blk_complete_request occurring on
different CPUs; it's called in the interrupt handler. As far as I
checked, all my HBA cards (several LSI) and AHCI don't support
multiple MSI, so I assume blk_complete_request will only be called on
one CPU. Sure, if that assumption is wrong, we still need Tao's
patch, but in the most common cases my assumption is correct.

Thanks,
Shaohua
* Re: [PATCH] block: Make rq_affinity = 1 work as expected
From: Tao Ma <boyu.mt@taobao.com>
Date: 2011-08-08 3:46 UTC

Hi Shaohua,
On 08/08/2011 10:58 AM, Shaohua Li wrote:
> In this case, doing the softirq on CPU 0 or on CPU 7 makes no
> difference, and we can avoid an IPI by doing it on CPU 7.

I totally agree with your analysis, but what I am worried about is
that this does change the old system behavior. And without this
patch, '1' and '2' in rq_affinity actually have the same effect in
your case. If you do prefer the new code and the new behavior, then
'1' doesn't need to exist any more (since from your description it
seems to only add an additional IPI overhead with no benefit), or '2'
is totally unneeded here.

Thanks,
Tao
* Re: [PATCH] block: Make rq_affinity = 1 work as expected
From: Shaohua Li <shli@kernel.org>
Date: 2011-08-08 4:33 UTC

2011/8/8 Tao Ma <tm@tao.ma>:
> And without this patch, '1' and '2' in rq_affinity actually have the
> same effect in your case.

With rq_affinity = 2, CPU 1 will do the softirq in the above case;
that is still different from the rq_affinity = 1 case.

Thanks,
Shaohua
* Re: [PATCH] block: Make rq_affinity = 1 work as expected
From: Tao Ma <boyu.mt@taobao.com>
Date: 2011-08-08 5:40 UTC

On 08/08/2011 12:33 PM, Shaohua Li wrote:
> With rq_affinity = 2, CPU 1 will do the softirq in the above case;
> that is still different from the rq_affinity = 1 case.

OK, so let's see what goes on without the patch when rq_affinity = 1.
If the completing CPU and the request's CPU are in the same group,
the completing CPU will call the softirq. If the completing CPU and
the request's CPU are not in the same group, the group CPU of the
request's CPU will call the softirq.

These behaviors are totally different. How can you tell the user
what's going on there? That's the reason we want 0, 1 and 2 for
rq_affinity. If the user does care about the extra IPI (as in your
case), fine, just set rq_affinity = 2.

Thanks,
Tao
* Re: [PATCH] block: Make rq_affinity = 1 work as expected
From: Shaohua Li <shli@kernel.org>
Date: 2011-08-08 5:56 UTC

2011/8/8 Tao Ma <tm@tao.ma>:
> These behaviors are totally different. How can you tell the user
> what's going on there?

rq_affinity = 2: finish the request on each submitting CPU.
rq_affinity = 1: finish the request on one CPU per socket.

Even without your patch, rq_affinity = 1 finishes the request on one
CPU too. Remember the controller has only one interrupt source; the
only difference is that the request isn't always finished on the
first CPU of a socket. I don't think this is a behavior change the
user even cares about. I originally worried that
blk_complete_request could be called on all CPUs, but this isn't
true.

Thanks,
Shaohua
* Re: [PATCH] block: Make rq_affinity = 1 work as expected
From: Tao Ma <boyu.mt@taobao.com>
Date: 2011-08-08 6:31 UTC

On 08/08/2011 01:56 PM, Shaohua Li wrote:
> Even without your patch, rq_affinity = 1 finishes the request on one
> CPU too. Remember the controller has only one interrupt source; the
> only difference is that the request isn't always finished on the
> first CPU of a socket. I don't think this is a behavior change the
> user even cares about.

We always finish the request on one CPU; that much is given. The only
difference is which CPU does the softirq work.

That is what you think. At least it felt strange to me when I came
across it, and that is why I found it. I am done with it.

Thanks,
Tao