From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mail-db5eur01on0113.outbound.protection.outlook.com
	([104.47.2.113]:63776 "EHLO EUR01-DB5-obe.outbound.protection.outlook.com"
	rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP
	id S1752225AbcJGMk1 (ORCPT ); Fri, 7 Oct 2016 08:40:27 -0400
Date: Thu, 6 Oct 2016 16:06:28 -0700
From: Andrei Vagin
To: "Eric W. Biederman"
CC: Andrei Vagin , Alexander Viro , , ,
Subject: Re: [PATCH v2] mount: dont execute propagate_umount() many times for same mounts
Message-ID: <20161006230616.GA2296@outlook.office365.com>
References: <1475772564-25627-1-git-send-email-avagin@openvz.org>
	<87eg3tclbd.fsf@x220.int.ebiederm.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="koi8-r"
Content-Disposition: inline
In-Reply-To: <87eg3tclbd.fsf@x220.int.ebiederm.org>
Sender: linux-fsdevel-owner@vger.kernel.org
List-ID:

On Thu, Oct 06, 2016 at 02:46:30PM -0500, Eric W. Biederman wrote:
> Andrei Vagin writes:
>
> > The reason for this optimization is that umount() can hold namespace_sem
> > for a long time; this semaphore is global, so it affects all users.
> > Recently Eric W. Biederman added a per mount namespace limit on the
> > number of mounts. The default number of mounts allowed per mount
> > namespace is 100,000. Currently this limit still allows the construction
> > of a tree which requires hours to be unmounted.
>
> I am going to take a hard look at this as this problem sounds very
> unfortunate. My memory of going through this code before strongly
> suggests that changing the last list_for_each_entry to
> list_for_each_entry_reverse is going to impact the correctness of this
> change.

I have read this code again and you are right: list_for_each_entry can't
be changed to list_for_each_entry_reverse here.

I tested these changes more carefully and found one more issue, so I am
going to send a new patch and would like to get your comments on it.

Thank you for your time.

>
> The order of traversal is important if there are several things mounted
> one on the other that are all being unmounted.
>
> Now perhaps your other changes have addressed that but I haven't looked
> closely enough to see that yet.
>
> > @@ -454,7 +473,7 @@ int propagate_umount(struct list_head *list)
> >  	list_for_each_entry_reverse(mnt, list, mnt_list)
> >  		mark_umount_candidates(mnt);
> >
> > -	list_for_each_entry(mnt, list, mnt_list)
> > +	list_for_each_entry_reverse(mnt, list, mnt_list)
> >  		__propagate_umount(mnt);
> >  	return 0;
> >  }
>
> Eric
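
P.S. To make the traversal-order point concrete, below is a minimal userspace
sketch. It re-implements simplified versions of the kernel-style list helpers
(struct list_head, container_of() and the two iteration macros) and uses a
cut-down stand-in for struct mount that only carries a name, so everything
here is illustrative rather than the real fs/pnode.c or fs/mount.h code. The
only thing it demonstrates is that list_for_each_entry() and
list_for_each_entry_reverse() walk the same list in opposite directions:

/*
 * Minimal userspace sketch: simplified re-implementation of kernel-style
 * list macros, only to show that the forward and reverse iterators visit
 * the same list in opposite orders.  Not the kernel code itself.
 */
#include <stdio.h>
#include <stddef.h>

struct list_head {
	struct list_head *next, *prev;
};

#define LIST_HEAD_INIT(name) { &(name), &(name) }

/* insert "entry" just before "head", i.e. at the tail of the list */
static void list_add_tail(struct list_head *entry, struct list_head *head)
{
	entry->prev = head->prev;
	entry->next = head;
	head->prev->next = entry;
	head->prev = entry;
}

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* walk from head->next towards the tail */
#define list_for_each_entry(pos, head, member)                           \
	for (pos = container_of((head)->next, typeof(*pos), member);     \
	     &pos->member != (head);                                      \
	     pos = container_of(pos->member.next, typeof(*pos), member))

/* walk from head->prev towards the head, i.e. the opposite order */
#define list_for_each_entry_reverse(pos, head, member)                   \
	for (pos = container_of((head)->prev, typeof(*pos), member);     \
	     &pos->member != (head);                                      \
	     pos = container_of(pos->member.prev, typeof(*pos), member))

/* stand-in for struct mount; only the fields needed for the demo */
struct mount {
	const char *name;
	struct list_head mnt_list;
};

int main(void)
{
	struct list_head list = LIST_HEAD_INIT(list);
	struct mount mnts[] = {
		{ .name = "/mnt" },       /* bottom of the stack        */
		{ .name = "/mnt/a" },     /* mounted on top of /mnt     */
		{ .name = "/mnt/a/b" },   /* mounted on top of /mnt/a   */
	};
	struct mount *m;
	size_t i;

	for (i = 0; i < sizeof(mnts) / sizeof(mnts[0]); i++)
		list_add_tail(&mnts[i].mnt_list, &list);

	printf("list_for_each_entry:         ");
	list_for_each_entry(m, &list, mnt_list)
		printf("%s ", m->name);

	printf("\nlist_for_each_entry_reverse: ");
	list_for_each_entry_reverse(m, &list, mnt_list)
		printf("%s ", m->name);
	printf("\n");
	return 0;
}

Built with gcc (it relies on typeof, as kernel code does), the forward walk
prints /mnt /mnt/a /mnt/a/b and the reverse walk prints /mnt/a/b /mnt/a /mnt.
That ordering difference is exactly what matters when several mounts are
stacked on one another and all of them are being unmounted, which is why
reversing the __propagate_umount() loop is not a safe transformation.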