From mboxrd@z Thu Jan 1 00:00:00 1970
From: Chao Yu
Subject: Re: [PATCH] f2fs: refactor flush_nat_entries codes for reducing NAT writes
Date: Tue, 17 Jun 2014 10:33:12 +0800
Message-ID: <002301cf89d4$97aaf7f0$c700e7d0$@samsung.com>
References: <001201cf87c6$92b9fad0$b82df070$@samsung.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
To: 'Jaegeuk Kim'
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net
In-reply-to: <001201cf87c6$92b9fad0$b82df070$@samsung.com>
Content-language: zh-cn
Errors-To: linux-f2fs-devel-bounces@lists.sourceforge.net
List-Id: linux-fsdevel.vger.kernel.org

Hi all,

There is a problem in this patch; please ignore it, sorry for the noise.
I will resend it later.

> -----Original Message-----
> From: Chao Yu [mailto:chao2.yu@samsung.com]
> Sent: Saturday, June 14, 2014 7:48 PM
> To: Jaegeuk Kim
> Cc: linux-fsdevel@vger.kernel.org; linux-kernel@vger.kernel.org;
> linux-f2fs-devel@lists.sourceforge.net
> Subject: [f2fs-dev] [PATCH] f2fs: refactor flush_nat_entries codes for reducing NAT writes
>
> Although building the NAT journal in cursum reduces the read/write work for
> NAT blocks, the previous design gives lower performance when checkpoints are
> written frequently, in these cases:
> 1. If the journal in cursum is already full, it is wasteful to flush all
> dirty nat entries to pages for persistence while caching none of them in
> the journal.
> 2. If the journal in cursum is not full, we fill it with nat entries until
> it is full, then flush the remaining dirty entries to disk without merging
> the journaled entries; those journaled entries may be flushed to disk at
> the next checkpoint, having missed the chance to be flushed last time.
>
> In this patch we merge dirty entries located in the same NAT block into a
> nat entry set, link all sets into a list, and sort the list in ascending
> order by each set's entry count.
> We then flush as many entries from the sparse sets into the journal as
> possible, and flush the remaining merged entries to disk.