From: Alan Cox
Date: 2008-05-22 21:25 UTC
To: linux-kernel, akpm
Subject: [PATCH] snapshot: Push BKL down into ioctl handlers

Signed-off-by: Alan Cox <alan@redhat.com>

diff --git a/kernel/power/user.c b/kernel/power/user.c
index f5512cb..658262b 100644
--- a/kernel/power/user.c
+++ b/kernel/power/user.c
@@ -23,6 +23,7 @@
 #include <linux/console.h>
 #include <linux/cpu.h>
 #include <linux/freezer.h>
+#include <linux/smp_lock.h>
 
 #include <asm/uaccess.h>
 
@@ -164,8 +165,8 @@ static ssize_t snapshot_write(struct file *filp, const char __user *buf,
 	return res;
 }
 
-static int snapshot_ioctl(struct inode *inode, struct file *filp,
-			  unsigned int cmd, unsigned long arg)
+static long snapshot_ioctl(struct file *filp, unsigned int cmd,
+			   unsigned long arg)
 {
 	int error = 0;
 	struct snapshot_data *data;
@@ -181,6 +182,8 @@ static int snapshot_ioctl(struct inode *inode, struct file *filp,
 
 	data = filp->private_data;
 
+	lock_kernel();
+
 	switch (cmd) {
 
 	case SNAPSHOT_FREEZE:
@@ -389,7 +392,7 @@ static int snapshot_ioctl(struct inode *inode, struct file *filp,
 		error = -ENOTTY;
 
 	}
-
+	unlock_kernel();
 	return error;
 }
 
@@ -399,7 +402,7 @@ static const struct file_operations snapshot_fops = {
 	.read = snapshot_read,
 	.write = snapshot_write,
 	.llseek = no_llseek,
-	.ioctl = snapshot_ioctl,
+	.unlocked_ioctl = snapshot_ioctl,
 };
 
 static struct miscdevice snapshot_device = {
From: Rafael J. Wysocki
Date: 2008-05-23 01:09 UTC
To: Alan Cox
Cc: linux-kernel, akpm
Subject: Re: [PATCH] snapshot: Push BKL down into ioctl handlers

On Thursday, 22 of May 2008, Alan Cox wrote:
> Signed-off-by: Alan Cox <alan@redhat.com>
>
> diff --git a/kernel/power/user.c b/kernel/power/user.c
> index f5512cb..658262b 100644
> --- a/kernel/power/user.c
> +++ b/kernel/power/user.c
> @@ -23,6 +23,7 @@
>  #include <linux/console.h>
>  #include <linux/cpu.h>
>  #include <linux/freezer.h>
> +#include <linux/smp_lock.h>
>
>  #include <asm/uaccess.h>
>
> @@ -164,8 +165,8 @@ static ssize_t snapshot_write(struct file *filp, const char __user *buf,
>  	return res;
>  }
>
> -static int snapshot_ioctl(struct inode *inode, struct file *filp,
> -			  unsigned int cmd, unsigned long arg)
> +static long snapshot_ioctl(struct file *filp, unsigned int cmd,
> +			   unsigned long arg)
>  {
>  	int error = 0;
>  	struct snapshot_data *data;
> @@ -181,6 +182,8 @@ static int snapshot_ioctl(struct inode *inode, struct file *filp,
>
>  	data = filp->private_data;
>
> +	lock_kernel();
> +

Hm, well, I admit I'm a bit ignorant as far as the chardev locking is
concerned, but can you please tell me why it would be wrong if we didn't
call lock_kernel() here at all?

>  	switch (cmd) {
>
>  	case SNAPSHOT_FREEZE:
> @@ -389,7 +392,7 @@ static int snapshot_ioctl(struct inode *inode, struct file *filp,
>  		error = -ENOTTY;
>
>  	}
> -
> +	unlock_kernel();
>  	return error;
>  }
>
> @@ -399,7 +402,7 @@ static const struct file_operations snapshot_fops = {
>  	.read = snapshot_read,
>  	.write = snapshot_write,
>  	.llseek = no_llseek,
> -	.ioctl = snapshot_ioctl,
> +	.unlocked_ioctl = snapshot_ioctl,
>  };
>
>  static struct miscdevice snapshot_device = {

Thanks,
Rafael
From: Alan Cox
Date: 2008-05-23 11:04 UTC
To: Rafael J. Wysocki
Cc: linux-kernel, akpm
Subject: Re: [PATCH] snapshot: Push BKL down into ioctl handlers

> > +	lock_kernel();
> > +
>
> Hm, well, I admit I'm a bit ignorant as far as the chardev locking is
> concerned, but can you please tell me why it would be wrong if we didn't
> call lock_kernel() here at all?

I've just been pushing the lock down. If the code already has enough
internal locking for things like multiple ioctls in parallel then you can
probably kill it entirely - but that needs someone who knows that driver
well to decide and evaluate.
From: Rafael J. Wysocki
Date: 2008-05-23 11:22 UTC
To: Alan Cox
Cc: linux-kernel, akpm, Pavel Machek
Subject: Re: [PATCH] snapshot: Push BKL down into ioctl handlers

On Friday, 23 of May 2008, Alan Cox wrote:
> > > +	lock_kernel();
> > > +
> >
> > Hm, well, I admit I'm a bit ignorant as far as the chardev locking is
> > concerned, but can you please tell me why it would be wrong if we
> > didn't call lock_kernel() here at all?
>
> I've just been pushing the lock down. If the code already has enough
> internal locking for things like multiple ioctls in parallel then you can
> probably kill it entirely - but that needs someone who knows that driver
> well to decide and evaluate.

Thanks, I'll have a look.

ACK for the patch from me.

Thanks,
Rafael
From: Rafael J. Wysocki
Date: 2008-06-02 21:33 UTC
To: Pavel Machek
Cc: Alan Cox, linux-kernel, akpm, pm list
Subject: [RFC][PATCH] snapshot: Use pm_mutex for mutual exclusion

On Friday, 23 of May 2008, Rafael J. Wysocki wrote:
> On Friday, 23 of May 2008, Alan Cox wrote:
> > > > +	lock_kernel();
> > > > +
> > >
> > > Hm, well, I admit I'm a bit ignorant as far as the chardev locking is
> > > concerned, but can you please tell me why it would be wrong if we
> > > didn't call lock_kernel() here at all?
> >
> > I've just been pushing the lock down. If the code already has enough
> > internal locking for things like multiple ioctls in parallel then you
> > can probably kill it entirely - but that needs someone who knows that
> > driver well to decide and evaluate.
>
> Thanks, I'll have a look.

Appended is what I think we can do.

Thanks,
Rafael

---
We can avoid taking the BKL in snapshot_ioctl() if pm_mutex is used to
prevent the ioctls from being executed concurrently.

In addition, although it is only possible to open /dev/snapshot once, the
task which has done that may spawn a child that will inherit the open
descriptor, so in theory they can call snapshot_write(), snapshot_read()
and snapshot_release() concurrently. pm_mutex can also be used for mutual
exclusion in such cases.

Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
---
 kernel/power/user.c |   68 ++++++++++++++++++++++++++++++++--------------------
 1 file changed, 42 insertions(+), 26 deletions(-)

Index: linux-2.6/kernel/power/user.c
===================================================================
--- linux-2.6.orig/kernel/power/user.c
+++ linux-2.6/kernel/power/user.c
@@ -70,16 +70,22 @@ static int snapshot_open(struct inode *i
 	struct snapshot_data *data;
 	int error;
 
-	if (!atomic_add_unless(&snapshot_device_available, -1, 0))
-		return -EBUSY;
+	mutex_lock(&pm_mutex);
+
+	if (!atomic_add_unless(&snapshot_device_available, -1, 0)) {
+		error = -EBUSY;
+		goto Unlock;
+	}
 
 	if ((filp->f_flags & O_ACCMODE) == O_RDWR) {
 		atomic_inc(&snapshot_device_available);
-		return -ENOSYS;
+		error = -ENOSYS;
+		goto Unlock;
 	}
 	if(create_basic_memory_bitmaps()) {
 		atomic_inc(&snapshot_device_available);
-		return -ENOMEM;
+		error = -ENOMEM;
+		goto Unlock;
 	}
 	nonseekable_open(inode, filp);
 	data = &snapshot_state;
@@ -99,33 +105,36 @@ static int snapshot_open(struct inode *i
 		if (error)
 			pm_notifier_call_chain(PM_POST_HIBERNATION);
 	}
-	if (error) {
+	if (error)
 		atomic_inc(&snapshot_device_available);
-		return error;
-	}
 	data->frozen = 0;
 	data->ready = 0;
 	data->platform_support = 0;
 
-	return 0;
+ Unlock:
+	mutex_unlock(&pm_mutex);
+
+	return error;
 }
 
 static int snapshot_release(struct inode *inode, struct file *filp)
 {
 	struct snapshot_data *data;
 
+	mutex_lock(&pm_mutex);
+
 	swsusp_free();
 	free_basic_memory_bitmaps();
 	data = filp->private_data;
 	free_all_swap_pages(data->swap);
-	if (data->frozen) {
-		mutex_lock(&pm_mutex);
+	if (data->frozen)
 		thaw_processes();
-		mutex_unlock(&pm_mutex);
-	}
 	pm_notifier_call_chain(data->mode == O_WRONLY ?
 			PM_POST_HIBERNATION : PM_POST_RESTORE);
 	atomic_inc(&snapshot_device_available);
+
+	mutex_unlock(&pm_mutex);
+
 	return 0;
 }
 
@@ -135,9 +144,13 @@ static ssize_t snapshot_read(struct file
 	struct snapshot_data *data;
 	ssize_t res;
 
+	mutex_lock(&pm_mutex);
+
 	data = filp->private_data;
-	if (!data->ready)
-		return -ENODATA;
+	if (!data->ready) {
+		res = -ENODATA;
+		goto Unlock;
+	}
 	res = snapshot_read_next(&data->handle, count);
 	if (res > 0) {
 		if (copy_to_user(buf, data_of(data->handle), res))
@@ -145,6 +158,10 @@ static ssize_t snapshot_read(struct file
 		else
 			*offp = data->handle.offset;
 	}
+
+ Unlock:
+	mutex_unlock(&pm_mutex);
+
 	return res;
 }
 
@@ -154,6 +171,8 @@ static ssize_t snapshot_write(struct fil
 	struct snapshot_data *data;
 	ssize_t res;
 
+	mutex_lock(&pm_mutex);
+
 	data = filp->private_data;
 	res = snapshot_write_next(&data->handle, count);
 	if (res > 0) {
@@ -162,6 +181,9 @@ static ssize_t snapshot_write(struct fil
 		else
 			*offp = data->handle.offset;
 	}
+
+	mutex_unlock(&pm_mutex);
+
 	return res;
 }
 
@@ -180,16 +202,16 @@ static long snapshot_ioctl(struct file *
 	if (!capable(CAP_SYS_ADMIN))
 		return -EPERM;
 
-	data = filp->private_data;
+	if (!mutex_trylock(&pm_mutex))
+		return -EBUSY;
 
-	lock_kernel();
+	data = filp->private_data;
 
 	switch (cmd) {
 
 	case SNAPSHOT_FREEZE:
 		if (data->frozen)
 			break;
-		mutex_lock(&pm_mutex);
 		printk("Syncing filesystems ... ");
 		sys_sync();
 		printk("done.\n");
@@ -197,7 +219,6 @@ static long snapshot_ioctl(struct file *
 		error = freeze_processes();
 		if (error)
 			thaw_processes();
-		mutex_unlock(&pm_mutex);
 		if (!error)
 			data->frozen = 1;
 		break;
 
@@ -205,9 +226,7 @@ static long snapshot_ioctl(struct file *
 	case SNAPSHOT_UNFREEZE:
 		if (!data->frozen || data->ready)
 			break;
-		mutex_lock(&pm_mutex);
 		thaw_processes();
-		mutex_unlock(&pm_mutex);
 		data->frozen = 0;
 		break;
 
@@ -310,16 +329,11 @@ static long snapshot_ioctl(struct file *
 			error = -EPERM;
 			break;
 		}
-		if (!mutex_trylock(&pm_mutex)) {
-			error = -EBUSY;
-			break;
-		}
 		/*
 		 * Tasks are frozen and the notifiers have been called with
 		 * PM_HIBERNATION_PREPARE
 		 */
 		error = suspend_devices_and_enter(PM_SUSPEND_MEM);
-		mutex_unlock(&pm_mutex);
 		break;
 
 	case SNAPSHOT_PLATFORM_SUPPORT:
@@ -392,7 +406,9 @@ static long snapshot_ioctl(struct file *
 		error = -ENOTTY;
 
 	}
-	unlock_kernel();
+
+	mutex_unlock(&pm_mutex);
+
 	return error;
 }
From: Pavel Machek
Date: 2008-06-03 22:20 UTC
To: Rafael J. Wysocki
Cc: Alan Cox, linux-kernel, akpm, pm list
Subject: Re: [RFC][PATCH] snapshot: Use pm_mutex for mutual exclusion

Hi!

> Appended is what I think we can do.

Looks ok to me. ACK.
							Pavel

> ---
> We can avoid taking the BKL in snapshot_ioctl() if pm_mutex is used to
> prevent the ioctls from being executed concurrently.
>
> In addition, although it is only possible to open /dev/snapshot once, the
> task which has done that may spawn a child that will inherit the open
> descriptor, so in theory they can call snapshot_write(), snapshot_read()
> and snapshot_release() concurrently. pm_mutex can also be used for mutual
> exclusion in such cases.
>
> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
> ---
> [...]

-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html