linux-fsdevel.vger.kernel.org archive mirror
* Issue with lazy umount and closing file descriptor in between
@ 2011-09-06 17:26 Amit Sahrawat
  2011-09-07 16:37 ` Amit Sahrawat
  0 siblings, 1 reply; 6+ messages in thread
From: Amit Sahrawat @ 2011-09-06 17:26 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel

We have observed the following issues with busybox umount:
1. Forced umount (umount -f): it does not work as expected.
2. Lazy umount (umount -l): it detaches the mount point but waits for
the current users (processes) of the mount point to finish.
Corruption happens when we power down while a lazy umount is still
waiting for a process to finish
(e.g. # dd if=/dev/zero of=/mnt/test.txt).
What would be the ideal way to avoid filesystem corruption in the
above scenario?
Is it acceptable to close all open file descriptors in the umount
system call before attempting the umount? But this results in an OOPS
in situations like the following:
1. A user application issues a write/read request.
2. The write reaches kernel space but sleeps for some time, e.g.
because the required entry is not in the dentry cache.
3. In the meanwhile we issue the umount, which closes the open file
descriptors, frees the file/dentry objects, and then unmounts.
4. The write now wakes up, finds a NULL file/dentry object, and
triggers an oops.
Please offer some advice on this issue.
Thanks & Regards,
Amit Sahrawat


* Re: Issue with lazy umount and closing file descriptor in between
  2011-09-06 17:26 Issue with lazy umount and closing file descriptor in between Amit Sahrawat
@ 2011-09-07 16:37 ` Amit Sahrawat
  2011-09-10 11:18   ` NamJae Jeon
  2011-09-11 22:01   ` Bryan Donlan
  0 siblings, 2 replies; 6+ messages in thread
From: Amit Sahrawat @ 2011-09-07 16:37 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel; +Cc: linkinjeon

I know that lazy umount was designed so that the mount point is no
longer accessible to any future I/O, while ongoing I/O continues to
work; only after that I/O finishes does the umount actually occur.
But this can be tricky at times: there are situations where an
operation keeps running well beyond its expected duration, and you
cannot unplug the device during that period because of the risk of
filesystem corruption.
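
To illustrate the semantics, here is a minimal userspace sketch
(assuming /mnt is a mounted filesystem and the program runs as root):

/* lazy_demo.c - sketch only. Shows that an fd opened before
 * umount2(MNT_DETACH) keeps working after the detach. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mount.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/test.txt", O_WRONLY | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    if (umount2("/mnt", MNT_DETACH) < 0)    /* lazy umount */
        perror("umount2");

    /* Still succeeds: the detached mount stays alive until the
     * last user (this fd) goes away. */
    if (write(fd, "still here\n", 11) < 0)
        perror("write");

    close(fd);    /* only now can the filesystem really go away */
    return 0;
}
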
Is there anything that could be done in this context? Simply reading
the fd table and closing the fds will not serve the purpose, and
there is every chance of an OOPS occurring because of that closing.
Should we instead signal all the processes with open fds on that
mount point to close them, i.e. handle it from the user-space
applications? Does that make sense?
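
As a rough sketch of that user-space approach (the /proc scan and
all names here are illustrative, not a vetted tool), something like
this could find the processes holding fds under the mount point and
signal them to clean up:

/* notify_fd_holders.c - illustrative sketch only. Sends SIGTERM to
 * processes holding fds under MOUNT_POINT so they can fsync/close
 * before the real umount. */
#include <ctype.h>
#include <dirent.h>
#include <limits.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define MOUNT_POINT "/mnt"

int main(void)
{
    DIR *proc = opendir("/proc");
    struct dirent *de;

    if (!proc) { perror("/proc"); return 1; }

    while ((de = readdir(proc)) != NULL) {
        char fd_dir[64], link[128], target[PATH_MAX];
        DIR *fds;
        struct dirent *fe;
        int pid;

        if (!isdigit((unsigned char)de->d_name[0]))
            continue;                /* not a pid directory */
        pid = atoi(de->d_name);

        snprintf(fd_dir, sizeof(fd_dir), "/proc/%d/fd", pid);
        fds = opendir(fd_dir);
        if (!fds)
            continue;                /* process exited, or no access */

        while ((fe = readdir(fds)) != NULL) {
            ssize_t n;

            snprintf(link, sizeof(link), "%s/%s", fd_dir, fe->d_name);
            n = readlink(link, target, sizeof(target) - 1);
            if (n <= 0)
                continue;
            target[n] = '\0';
            /* match the "/mnt/" prefix */
            if (strncmp(target, MOUNT_POINT "/", sizeof(MOUNT_POINT)) == 0) {
                kill(pid, SIGTERM);  /* ask it to fsync/close and exit */
                break;
            }
        }
        closedir(fds);
    }
    closedir(proc);
    return 0;
}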

Please throw some insight into this. I am not looking for an exact
solution; mere opinions that can add to the discussion are welcome.

Thanks & Regards,
Amit Sahrawat

On Tue, Sep 6, 2011 at 10:56 PM, Amit Sahrawat
<amit.sahrawat83@gmail.com> wrote:
> We have observed the following issues with busybox umount:
> 1. Forced umount (umount -f): it does not work as expected.
> 2. Lazy umount (umount -l): it detaches the mount point but waits for
> the current users (processes) of the mount point to finish.
> Corruption happens when we power down while a lazy umount is still
> waiting for a process to finish
> (e.g. # dd if=/dev/zero of=/mnt/test.txt).
> What would be the ideal way to avoid filesystem corruption in the
> above scenario?
> Is it acceptable to close all open file descriptors in the umount
> system call before attempting the umount? But this results in an OOPS
> in situations like the following:
> 1. A user application issues a write/read request.
> 2. The write reaches kernel space but sleeps for some time, e.g.
> because the required entry is not in the dentry cache.
> 3. In the meanwhile we issue the umount, which closes the open file
> descriptors, frees the file/dentry objects, and then unmounts.
> 4. The write now wakes up, finds a NULL file/dentry object, and
> triggers an oops.
> Please offer some advice on this issue.
> Thanks & Regards,
> Amit Sahrawat
>


* Re: Issue with lazy umount and closing file descriptor in between
  2011-09-07 16:37 ` Amit Sahrawat
@ 2011-09-10 11:18   ` NamJae Jeon
  2011-09-11 18:23     ` Amit Sahrawat
  2011-09-11 22:01   ` Bryan Donlan
  1 sibling, 1 reply; 6+ messages in thread
From: NamJae Jeon @ 2011-09-10 11:18 UTC (permalink / raw)
  To: Amit Sahrawat; +Cc: linux-kernel, linux-fsdevel

2011/9/8 Amit Sahrawat <amit.sahrawat83@gmail.com>:
> I know that lazy umount was designed so that the mount point is no
> longer accessible to any future I/O, while ongoing I/O continues to
> work; only after that I/O finishes does the umount actually occur.
> But this can be tricky at times: there are situations where an
> operation keeps running well beyond its expected duration, and you
> cannot unplug the device during that period because of the risk of
> filesystem corruption.
> Is there anything that could be done in this context? Simply reading
> the fd table and closing the fds will not serve the purpose, and
> there is every chance of an OOPS occurring because of that closing.
> Should we instead signal all the processes with open fds on that
> mount point to close them, i.e. handle it from the user-space
> applications? Does that make sense?
>
> Please throw some insight into this. I am not looking for an exact
> solution; mere opinions that can add to the discussion are welcome.
>
> Thanks & Regards,
> Amit Sahrawat
>
> On Tue, Sep 6, 2011 at 10:56 PM, Amit Sahrawat
> <amit.sahrawat83@gmail.com> wrote:
>> We have observed the following issues with busybox umount:
>> 1. Forced umount (umount -f): it does not work as expected.
>> 2. Lazy umount (umount -l): it detaches the mount point but waits for
>> the current users (processes) of the mount point to finish.
>> Corruption happens when we power down while a lazy umount is still
>> waiting for a process to finish
>> (e.g. # dd if=/dev/zero of=/mnt/test.txt).
>> What would be the ideal way to avoid filesystem corruption in the
>> above scenario?
>> Is it acceptable to close all open file descriptors in the umount
>> system call before attempting the umount? But this results in an OOPS
>> in situations like the following:
>> 1. A user application issues a write/read request.
>> 2. The write reaches kernel space but sleeps for some time, e.g.
>> because the required entry is not in the dentry cache.
>> 3. In the meanwhile we issue the umount, which closes the open file
>> descriptors, frees the file/dentry objects, and then unmounts.
>> 4. The write now wakes up, finds a NULL file/dentry object, and
>> triggers an oops.
>> Please offer some advice on this issue.
>> Thanks & Regards,
>> Amit Sahrawat
>>
>

Before closing the opened files, please try to flush the pending
write requests using sys_fsync or similar, and take a mutex_lock on
the opened inode at the same time, so that the next write request
from the user application is blocked.
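
In user-space terms, the suggested ordering looks roughly like this
(a minimal sketch; error handling trimmed):

/* Flush-before-close from the application side. fsync() pushes the
 * file's dirty data to disk, so the close() that follows cannot
 * lose buffered writes. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/test.txt", O_WRONLY | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    if (write(fd, "data\n", 5) < 0)
        perror("write");
    if (fsync(fd) < 0)          /* flush before giving up the fd */
        perror("fsync");
    close(fd);
    return 0;
}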


* Re: Issue with lazy umount and closing file descriptor in between
  2011-09-10 11:18   ` NamJae Jeon
@ 2011-09-11 18:23     ` Amit Sahrawat
  0 siblings, 0 replies; 6+ messages in thread
From: Amit Sahrawat @ 2011-09-11 18:23 UTC (permalink / raw)
  To: NamJae Jeon; +Cc: linux-kernel, linux-fsdevel

There are a few things I am looking at trying out: invalidating the
inode mappings (the address-space mapping) and flushing out the inode
writes. That will take care of invalidating all the cache entries for
that inode, so any new access will go out to disk again. Regarding
locks, we need to release the lock in order to close the fd; inode
locking does allow for parallel access, and I am looking at whether
it can be used optimally here.

I will try the cache flushing first and check its impact.
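
As a rough kernel-side sketch of that flush-then-invalidate step
(the helper name is hypothetical; filemap_write_and_wait() and
invalidate_mapping_pages() are the existing calls I have in mind):

/* Hypothetical helper: write out an inode's dirty pages, then drop
 * its clean page-cache pages so later access goes back to disk. */
#include <linux/fs.h>

static int flush_and_invalidate(struct inode *inode)
{
    struct address_space *mapping = inode->i_mapping;
    int err;

    err = filemap_write_and_wait(mapping);  /* flush dirty pages */
    if (err)
        return err;

    /* Drop the now-clean cached pages; reads will hit the disk. */
    invalidate_mapping_pages(mapping, 0, -1);
    return 0;
}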

Thanks & Regards,
Amit Sahrawat

On Sat, Sep 10, 2011 at 4:48 PM, NamJae Jeon <linkinjeon@gmail.com> wrote:
> 2011/9/8 Amit Sahrawat <amit.sahrawat83@gmail.com>:
>> I know that lazy umount was designed so that the mount point is no
>> longer accessible to any future I/O, while ongoing I/O continues to
>> work; only after that I/O finishes does the umount actually occur.
>> But this can be tricky at times: there are situations where an
>> operation keeps running well beyond its expected duration, and you
>> cannot unplug the device during that period because of the risk of
>> filesystem corruption.
>> Is there anything that could be done in this context? Simply reading
>> the fd table and closing the fds will not serve the purpose, and
>> there is every chance of an OOPS occurring because of that closing.
>> Should we instead signal all the processes with open fds on that
>> mount point to close them, i.e. handle it from the user-space
>> applications? Does that make sense?
>>
>> Please throw some insight into this. I am not looking for an exact
>> solution; mere opinions that can add to the discussion are welcome.
>>
>> Thanks & Regards,
>> Amit Sahrawat
>>
>> On Tue, Sep 6, 2011 at 10:56 PM, Amit Sahrawat
>> <amit.sahrawat83@gmail.com> wrote:
>>> We have observed the following issues with busybox umount:
>>> 1. Forced umount (umount -f): it does not work as expected.
>>> 2. Lazy umount (umount -l): it detaches the mount point but waits for
>>> the current users (processes) of the mount point to finish.
>>> Corruption happens when we power down while a lazy umount is still
>>> waiting for a process to finish
>>> (e.g. # dd if=/dev/zero of=/mnt/test.txt).
>>> What would be the ideal way to avoid filesystem corruption in the
>>> above scenario?
>>> Is it acceptable to close all open file descriptors in the umount
>>> system call before attempting the umount? But this results in an OOPS
>>> in situations like the following:
>>> 1. A user application issues a write/read request.
>>> 2. The write reaches kernel space but sleeps for some time, e.g.
>>> because the required entry is not in the dentry cache.
>>> 3. In the meanwhile we issue the umount, which closes the open file
>>> descriptors, frees the file/dentry objects, and then unmounts.
>>> 4. The write now wakes up, finds a NULL file/dentry object, and
>>> triggers an oops.
>>> Please offer some advice on this issue.
>>> Thanks & Regards,
>>> Amit Sahrawat
>>>
>>
>
> Before closing the opened files, please try to flush the pending
> write requests using sys_fsync or similar, and take a mutex_lock on
> the opened inode at the same time, so that the next write request
> from the user application is blocked.
>


* Re: Issue with lazy umount and closing file descriptor in between
  2011-09-07 16:37 ` Amit Sahrawat
  2011-09-10 11:18   ` NamJae Jeon
@ 2011-09-11 22:01   ` Bryan Donlan
  2011-09-12  0:45     ` NamJae Jeon
  1 sibling, 1 reply; 6+ messages in thread
From: Bryan Donlan @ 2011-09-11 22:01 UTC (permalink / raw)
  To: Amit Sahrawat; +Cc: linux-kernel, linux-fsdevel, linkinjeon

On Wed, Sep 7, 2011 at 12:37, Amit Sahrawat <amit.sahrawat83@gmail.com> wrote:
> I know that lazy umount was designed so that the mount point is no
> longer accessible to any future I/O, while ongoing I/O continues to
> work; only after that I/O finishes does the umount actually occur.
> But this can be tricky at times: there are situations where an
> operation keeps running well beyond its expected duration, and you
> cannot unplug the device during that period because of the risk of
> filesystem corruption.
> Is there anything that could be done in this context? Simply reading
> the fd table and closing the fds will not serve the purpose, and
> there is every chance of an OOPS occurring because of that closing.
> Should we instead signal all the processes with open fds on that
> mount point to close them, i.e. handle it from the user-space
> applications? Does that make sense?
>
> Please throw some insight into this. I am not looking for an exact
> solution; mere opinions that can add to the discussion are welcome.

Essentially what you want here is a 'forced unmount' option.

It's difficult to do this directly in the existing VFS model; you'd
need to essentially change the operations structure for all open
files/inodes for that filesystem in a race-free manner, _and_ wait for
any outstanding operations to complete. The VFS isn't really designed
to support something like this. What you could try doing, however, is
creating a wrapper filesystem - one that redirects all requests to an
underlying filesystem, but supports an operation to:

1) Make all future requests fail with -EIO
2) Invalidate any existing VMA mappings
3) Wait for all outstanding requests to complete
4) Unmount (i.e., unreference) the underlying filesystem

This will result in some overhead, of course, but would seem to be the
safest route.
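
A very rough sketch of the fast-fail gate such a wrapper could use
(all identifiers are hypothetical; lower_file() stands for however
the wrapper reaches the underlying struct file):

/* Fail new requests once the force-umount op has fired, and keep a
 * count of in-flight requests so the teardown can drain them. */
#include <linux/atomic.h>
#include <linux/fs.h>

struct wrapfs_sb_info {
    struct super_block *lower_sb;  /* the real filesystem underneath */
    atomic_t forced_down;          /* set once by the force-umount op */
    atomic_t in_flight;            /* requests inside the lower fs */
};

static ssize_t wrapfs_read(struct file *file, char __user *buf,
                           size_t count, loff_t *ppos)
{
    struct wrapfs_sb_info *sbi = file->f_path.dentry->d_sb->s_fs_info;
    ssize_t ret;

    atomic_inc(&sbi->in_flight);         /* register before checking */
    if (atomic_read(&sbi->forced_down)) {
        atomic_dec(&sbi->in_flight);
        return -EIO;                     /* step 1: fail new requests */
    }
    ret = vfs_read(lower_file(file), buf, count, ppos);
    atomic_dec(&sbi->in_flight);
    return ret;
}

The force-umount operation would set forced_down, wait for in_flight
to drain (step 3), and only then release the lower filesystem (step 4).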


* Re: Issue with lazy umount and closing file descriptor in between
  2011-09-11 22:01   ` Bryan Donlan
@ 2011-09-12  0:45     ` NamJae Jeon
  0 siblings, 0 replies; 6+ messages in thread
From: NamJae Jeon @ 2011-09-12  0:45 UTC (permalink / raw)
  To: Amit Sahrawat; +Cc: linux-kernel, linux-fsdevel, Bryan Donlan

2011/9/12 Bryan Donlan <bdonlan@gmail.com>:
> On Wed, Sep 7, 2011 at 12:37, Amit Sahrawat <amit.sahrawat83@gmail.com> wrote:
>> I know that lazy umount was designed so that the mount point is no
>> longer accessible to any future I/O, while ongoing I/O continues to
>> work; only after that I/O finishes does the umount actually occur.
>> But this can be tricky at times: there are situations where an
>> operation keeps running well beyond its expected duration, and you
>> cannot unplug the device during that period because of the risk of
>> filesystem corruption.
>> Is there anything that could be done in this context? Simply reading
>> the fd table and closing the fds will not serve the purpose, and
>> there is every chance of an OOPS occurring because of that closing.
>> Should we instead signal all the processes with open fds on that
>> mount point to close them, i.e. handle it from the user-space
>> applications? Does that make sense?
>>
>> Please throw some insight into this. I am not looking for an exact
>> solution; mere opinions that can add to the discussion are welcome.
>
> Essentially what you want here is a 'forced unmount' option.
>
> It's difficult to do this directly in the existing VFS model; you'd
> need to essentially change the operations structure for all open
> files/inodes for that filesystem in a race-free manner, _and_ wait for
> any outstanding operations to complete. The VFS isn't really designed
> to support something like this. What you could try doing, however, is
> creating a wrapper filesystem - one that redirects all requests to an
> underlying filesystem, but supports an operation to:
>
> 1) Make all future requests fail with -EIO
> 2) Invalidate any existing VMA mappings
> 3) Wait for all outstanding requests to complete
> 4) Unmount (i.e., unreference) the underlying filesystem
>
> This will result in some overhead, of course, but would seem to be the
> safest route.
>

I have an idea to resolve this problem. Please try something like this:

1. First, set the fd entry for the opened file to -1 (or similar) in
the fdtable. Then, when the user application attempts the next write,
it should get a bad-descriptor error; please check the exact
conditions under which EBADF is returned. (See the small illustration
below.)
2. Second, flush the dirty pages and the requests pending in the I/O
scheduler.
3. Third, close the fd once the previous I/O has finished, as
described above.
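
As a trivial user-space illustration of the bad-descriptor behaviour
step 1 relies on:

/* Writing through an already-closed descriptor fails with EBADF
 * rather than oopsing - the fdtable lookup simply misses. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/test.txt", O_WRONLY | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    close(fd);                  /* stands in for the forced close */
    if (write(fd, "x", 1) < 0 && errno == EBADF)
        printf("write failed with EBADF, as expected\n");
    return 0;
}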

Thanks.

