From mboxrd@z Thu Jan 1 00:00:00 1970
From: corbet@lwn.net (Jonathan Corbet)
Subject: Re: [RFC/PATCH 2/8] revoke: inode revoke lock V7
Date: Tue, 18 Dec 2007 09:31:49 -0700
Message-ID: <2605.1197995509@vena.lwn.net>
References:
Cc: alan@redhat.com, viro@zeniv.linux.org.uk, hch@infradead.org,
	peterz@infradead.org, linux-kernel@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
To: Pekka J Enberg
Return-path:
Received: from vena.lwn.net ([206.168.112.25]:38279 "EHLO vena.lwn.net"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1756776AbXLRQit (ORCPT );
	Tue, 18 Dec 2007 11:38:49 -0500
In-reply-to: Your message of "Fri, 14 Dec 2007 17:16:17 +0200."
Sender: linux-fsdevel-owner@vger.kernel.org
List-ID:

This is a relatively minor detail in the rather bigger context of this
patch, but...

> @@ -642,6 +644,7 @@ struct inode {
>  	struct list_head	inotify_watches; /* watches on this inode */
>  	struct mutex		inotify_mutex;	/* protects the watches list */
>  #endif
> +	wait_queue_head_t	i_revoke_wait;

That seems like a relatively hefty addition to every inode in the system
when revoke - I think - will be a fairly rare operation.  Would there be
any significant cost to using a single, global revoke-wait queue instead
of growing the inode structure?

jon
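
P.S. For concreteness, here is a rough sketch of what the single global
revoke-wait queue could look like.  Everything in it is illustrative:
I_REVOKE_WAIT, revoke_wait_queue, wait_for_revoke() and revoke_done()
are made-up names standing in for whatever per-inode "revoke in
progress" state the patch already tracks, and the real code would need
to clear that state under its existing locking.

/*
 * Illustrative only: one global wait queue shared by every inode that
 * is being revoked, instead of a wait_queue_head_t in struct inode.
 * I_REVOKE_WAIT is a hypothetical i_state bit, not a real flag.
 */
#include <linux/fs.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(revoke_wait_queue);

static void wait_for_revoke(struct inode *inode)
{
	/* Sleep until the revoke of this particular inode finishes. */
	wait_event(revoke_wait_queue,
		   !(inode->i_state & I_REVOKE_WAIT));
}

static void revoke_done(struct inode *inode)
{
	/*
	 * Clear the flag under whatever lock the revoke code already
	 * uses for its state transitions, then wake everybody.  Waiters
	 * on other inodes recheck their own condition and go back to
	 * sleep - the usual price of sharing one queue is an occasional
	 * spurious wakeup.
	 */
	inode->i_state &= ~I_REVOKE_WAIT;
	wake_up_all(&revoke_wait_queue);
}

That is, I believe, much the same trade the kernel already makes for bit
waits (wait_on_bit()/wake_up_bit() over a small hashed set of queues):
the per-object space goes away at the cost of a little wakeup noise on a
path that should be rare anyway.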