From mboxrd@z Thu Jan  1 00:00:00 1970
From: Jaegeuk Kim
Subject: Re: [PATCH 2/3] f2fs: schedule in between two continuous batch discards
Date: Thu, 25 Aug 2016 19:50:57 -0700
Message-ID: <20160826025057.GA88444@jaegeuk>
References: <1471792891-2388-1-git-send-email-chao@kernel.org>
 <1471792891-2388-2-git-send-email-chao@kernel.org>
 <20160823165353.GA73835@jaegeuk>
 <20160825165716.GA84318@jaegeuk>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline
To: Chao Yu
Cc: Chao Yu, linux-kernel@vger.kernel.org,
 linux-f2fs-devel@lists.sourceforge.net

On Fri, Aug 26, 2016 at 08:50:50AM +0800, Chao Yu wrote:
> Hi Jaegeuk,
> 
> On 2016/8/26 0:57, Jaegeuk Kim wrote:
> > Hi Chao,
> > 
> > On Thu, Aug 25, 2016 at 05:22:29PM +0800, Chao Yu wrote:
> >> Hi Jaegeuk,
> >> 
> >> On 2016/8/24 0:53, Jaegeuk Kim wrote:
> >>> Hi Chao,
> >>> 
> >>> On Sun, Aug 21, 2016 at 11:21:30PM +0800, Chao Yu wrote:
> >>>> From: Chao Yu
> >>>> 
> >>>> The batch discard approach of fstrim grabs/releases the gc_mutex
> >>>> lock repeatedly, which makes contention on the lock more intensive.
> >>>> 
> >>>> So after one batch of discards is issued in checkpoint and the lock
> >>>> is released, it is better to call schedule() to increase the
> >>>> opportunity for other competitors to grab gc_mutex.
> >>>> 
> >>>> Signed-off-by: Chao Yu
> >>>> ---
> >>>>  fs/f2fs/segment.c | 2 ++
> >>>>  1 file changed, 2 insertions(+)
> >>>> 
> >>>> diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
> >>>> index 020767c..d0f74eb 100644
> >>>> --- a/fs/f2fs/segment.c
> >>>> +++ b/fs/f2fs/segment.c
> >>>> @@ -1305,6 +1305,8 @@ int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range)
> >>>>  		mutex_unlock(&sbi->gc_mutex);
> >>>>  		if (err)
> >>>>  			break;
> >>>> +
> >>>> +		schedule();
> >>> 
> >>> Hmm, if another thread is already waiting for gc_mutex, we don't need
> >>> this here. In order to avoid long latency, wouldn't it be enough to
> >>> reduce the batch size?
> >> 
> >> Hmm, when fstrim calls mutex_unlock, we pop one blocked locker from the
> >> FIFO list of the mutex and wake it up; then the fstrim thread will try
> >> to lock gc_mutex for the next batch trim, so the popped locker and the
> >> fstrim thread will compete anew for gc_mutex.
> > 
> > Before fstrim tries to grab gc_mutex again, there are already blocked
> > tasks waiting for gc_mutex. Hence the next one should be selected by
> > FIFO, no?
> 
> The next one to be woken up is selected by FIFO, but the woken one still
> needs to race with other mutex lock grabbers.
> 
> So there is no guarantee that the woken one will get the lock.

Okay, I'll merge this. :)
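For anyone following along, a minimal sketch of the resulting loop shape --
not the literal f2fs_trim_fs() body; the loop condition here is a
hypothetical stand-in for the real per-section iteration:

	while (more_sections_to_trim(&cpc)) {	/* hypothetical helper */
		mutex_lock(&sbi->gc_mutex);
		err = write_checkpoint(sbi, &cpc);	/* one batch of discards */
		mutex_unlock(&sbi->gc_mutex);
		if (err)
			break;

		/*
		 * Yield so a waiter just woken by mutex_unlock() can
		 * actually acquire gc_mutex before this thread loops
		 * around and re-locks it.
		 */
		schedule();
	}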
Thanks,

> 
> Thanks,
> 
> > 
> > Thanks,
> > 
> >> If the fstrim thread is running on a big core and the popped locker is
> >> running on a small core, we can't guarantee that the popped locker wins
> >> the race; most of the time, the fstrim thread will win. So in order to
> >> reduce starvation of other gc_mutex lockers, it's better to do
> >> schedule() here.
> >> 
> >> Thanks,
> >> 
> >>> 
> >>> Thanks,
> >>> 
> >>>>  	}
> >>>>  out:
> >>>>  	range->len = F2FS_BLK_TO_BYTES(cpc.trimmed);
> >>>> -- 
> >>>> 2.7.2