From: Artem Bityutskiy
Subject: Re: [PATCH 05/10] writeback: support > 1 flusher thread per bdi
Date: Mon, 06 Jul 2009 17:11:20 +0300
Message-ID: <4A520608.7070707@gmail.com>
In-Reply-To: <20090706134930.GA4987@shareable.org>
References: <1245926523-21959-1-git-send-email-jens.axboe@oracle.com> <1245926523-21959-6-git-send-email-jens.axboe@oracle.com> <4A51FE33.3070702@gmail.com> <20090706134930.GA4987@shareable.org>
To: Jamie Lokier
Cc: Jens Axboe, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, chris.mason@oracle.com, david@fromorbit.com, hch@infradead.org, akpm@linux-foundation.org, jack@suse.cz, yanmin_zhang@linux.intel.com, richard@rsk.demon.co.uk, damien.wyart@free.fr, fweisbec@gmail.com, Alan.Brunelle@hp.com

Jamie Lokier wrote:
> Artem Bityutskiy wrote:
>> Jens Axboe wrote:
>>> +static void bdi_queue_work(struct backing_dev_info *bdi, struct bdi_work *work)
>>> +{
>>> +	if (work) {
>>> +		work->seen = bdi->wb_mask;
>>> +		BUG_ON(!work->seen);
>>> +		atomic_set(&work->pending, bdi->wb_cnt);
>>> +		BUG_ON(!bdi->wb_cnt);
>>> +
>>> +		/*
>>> +		 * Make sure stores are seen before it appears on the list
>>> +		 */
>>> +		smp_mb();
>>> +
>>> +		spin_lock(&bdi->wb_lock);
>>> +		list_add_tail_rcu(&work->list, &bdi->work_list);
>>> +		spin_unlock(&bdi->wb_lock);
>>> +	}
>>
>> Doesn't spin_lock() include an implicit memory barrier?
>> After &bdi->wb_lock is acquired, it is guaranteed that all
>> memory operations are finished.
>
> I'm pretty sure spin_lock() is an "acquire" barrier, which just guarantees
> loads/stores after the spin_lock() are done after taking the lock.
>
> It doesn't guarantee anything about loads/stores before the spin_lock().

Right, but the comment says the memory operations have to be flushed
before the work is added to the list.

--
Best Regards,
Artem Bityutskiy (Артём Битюцкий)
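
P.S. A minimal userspace sketch of the acquire vs. full barrier distinction
Jamie describes, using C11 atomics and pthreads rather than the kernel
primitives. All names here (payload, lock_word, published) are illustrative
stand-ins, not the patch's actual objects, and the mapping to the kernel code
is only an analogy. Build with something like: cc -std=c11 -pthread demo.c

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

/* Illustrative stand-ins: 'payload' plays the role of the fields filled in
 * before queueing (work->seen, work->pending), 'lock_word' the role of
 * bdi->wb_lock, and 'published' the role of the work appearing on the list. */
static atomic_int payload;
static atomic_int lock_word;
static atomic_int published;

static void *writer(void *arg)
{
	/* Store done before "taking the lock". */
	atomic_store_explicit(&payload, 42, memory_order_relaxed);

	/* Acquire-only operation, like spin_lock(): later accesses cannot be
	 * reordered before it, but the earlier 'payload' store may still be
	 * reordered after it. */
	while (atomic_exchange_explicit(&lock_word, 1, memory_order_acquire))
		;

	/* A full fence here (loosely the smp_mb() analogue) is what would
	 * force 'payload' to be visible before 'published':
	 * atomic_thread_fence(memory_order_seq_cst); */

	/* Publish, loosely like adding the work to the list. */
	atomic_store_explicit(&published, 1, memory_order_relaxed);

	/* Release, like spin_unlock(). */
	atomic_store_explicit(&lock_word, 0, memory_order_release);
	return NULL;
}

static void *reader(void *arg)
{
	/* Lockless reader: it may observe 'published' set while 'payload' is
	 * still 0, because nothing on the writer side orders the two stores. */
	while (!atomic_load_explicit(&published, memory_order_acquire))
		;
	printf("payload = %d\n",
	       atomic_load_explicit(&payload, memory_order_relaxed));
	return NULL;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&r, NULL, reader, NULL);
	pthread_create(&w, NULL, writer, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	return 0;
}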