From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jens Axboe
Subject: Re: [PATCH v3] loop: Limit the number of requests in the bio list
Date: Thu, 15 Nov 2012 07:05:27 -0700
Message-ID: <50A4F6A7.2070708@kernel.dk>
References: <1352824065-6734-1-git-send-email-lczerner@redhat.com> <50A27892.1030800@kernel.dk> <50A3B705.7050008@kernel.dk>
Mime-Version: 1.0
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: QUOTED-PRINTABLE
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, jmoyer@redhat.com, akpm@linux-foundation.org
To: Lukáš Czerner
Return-path:
In-Reply-To:
Sender: linux-kernel-owner@vger.kernel.org
List-Id: linux-fsdevel.vger.kernel.org

On 2012-11-15 01:20, Lukáš Czerner wrote:
> On Wed, 14 Nov 2012, Jens Axboe wrote:
>
>> Date: Wed, 14 Nov 2012 08:21:41 -0700
>> From: Jens Axboe
>> To: Lukáš Czerner
>> Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
>>     jmoyer@redhat.com, akpm@linux-foundation.org
>> Subject: Re: [PATCH v3] loop: Limit the number of requests in the bio list
>>
>> On 2012-11-14 02:02, Lukáš Czerner wrote:
>>> On Tue, 13 Nov 2012, Jens Axboe wrote:
>>>
>>>> Date: Tue, 13 Nov 2012 09:42:58 -0700
>>>> From: Jens Axboe
>>>> To: Lukas Czerner
>>>> Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
>>>>     jmoyer@redhat.com, akpm@linux-foundation.org
>>>> Subject: Re: [PATCH v3] loop: Limit the number of requests in the bio list
>>>>
>>>>> @@ -489,6 +491,12 @@ static void loop_make_request(struct request_queue *q, struct bio *old_bio)
>>>>> 		goto out;
>>>>> 	if (unlikely(rw == WRITE && (lo->lo_flags & LO_FLAGS_READ_ONLY)))
>>>>> 		goto out;
>>>>> +	if (lo->lo_bio_count >= q->nr_congestion_on) {
>>>>> +		spin_unlock_irq(&lo->lo_lock);
>>>>> +		wait_event(lo->lo_req_wait, lo->lo_bio_count <
>>>>> +			   q->nr_congestion_off);
>>>>> +		spin_lock_irq(&lo->lo_lock);
>>>>> +	}
>>>>
>>>> This makes me nervous. You are reading lo_bio_count outside the lock. If
>>>> you race with the prepare_to_wait() and condition check in
>>>> __wait_event(), then you will sleep forever.
>>>
>>> Hi Jens,
>>>
>>> I am sorry for being dense, but I do not see how this would be
>>> possible. The only place we increase lo_bio_count is after that
>>> piece of code (possibly after the wait). Moreover, every time we
>>> decrease lo_bio_count and it drops below nr_congestion_off we
>>> call wake_up().
>>>
>>> That's how wait_event/wake_up is supposed to be used, right?
>>
>> It is, yes. But you are checking the condition without the lock, so you
>> could be operating on a stale value. The point is, you have to safely
>> check the condition _after_ prepare_to_wait() to be completely safe. And
>> you do not. Either lo_bio_count needs to be atomic, or you need to use a
>> variant of wait_event() that holds the appropriate lock before the
>> prepare_to_wait() and condition check, then drops it for the sleep.
>>
>> See wait_event_lock_irq() in drivers/md/md.h.
>
> OK, I knew that much. So the only possibility of a deadlock is if we
> processed all the bios in loop_thread() before the waiter got to
> checking the condition, at which point it would read stale data where
> lo_bio_count is still not below nr_congestion_off, so it would go back
> to sleep, never to be woken up again. That sounds highly unlikely. But
> fair enough, it makes sense to make it absolutely bulletproof.

It depends on the settings. At the current depth/batch count, yes,
unlikely. But sometimes "highly unlikely" scenarios turn out to hit
all the time for person X's setup and settings.

> I'll take a look at wait_event_lock_irq. Thanks.

--
Jens Axboe