* [PATCH 00/14] GFS
@ 2005-08-02 7:18 David Teigland
2005-08-02 7:45 ` Arjan van de Ven
` (4 more replies)
0 siblings, 5 replies; 79+ messages in thread
From: David Teigland @ 2005-08-02 7:18 UTC (permalink / raw)
To: akpm; +Cc: linux-kernel, linux-cluster
Hi, GFS (Global File System) is a cluster file system that we'd like to
see added to the kernel. The 14 patches total about 900K so I won't send
them to the list unless that's requested. Comments and suggestions are
welcome. Thanks
http://redhat.com/~teigland/gfs2/20050801/gfs2-full.patch
http://redhat.com/~teigland/gfs2/20050801/broken-out/
Dave
^ permalink raw reply [flat|nested] 79+ messages in thread

* Re: [PATCH 00/14] GFS
From: Arjan van de Ven @ 2005-08-02 7:45 UTC (permalink / raw)
To: David Teigland; +Cc: akpm, linux-kernel, linux-cluster

On Tue, 2005-08-02 at 15:18 +0800, David Teigland wrote:
> Hi, GFS (Global File System) is a cluster file system that we'd like to
> see added to the kernel. The 14 patches total about 900K so I won't send
> them to the list unless that's requested. Comments and suggestions are
> welcome. Thanks
>
> http://redhat.com/~teigland/gfs2/20050801/gfs2-full.patch
> http://redhat.com/~teigland/gfs2/20050801/broken-out/

* The on disk structures are defined in terms of uint32_t and friends,
  which are NOT endian neutral. Why are they not le32/be32 and thus
  endian-defined? Did you run bitwise-sparse on GFS yet?

* None of your on disk structures are packed. Are you sure?

* +#define gfs2_16_to_cpu be16_to_cpu
  +#define gfs2_32_to_cpu be32_to_cpu
  +#define gfs2_64_to_cpu be64_to_cpu

  why this pointless abstracting?

* +static const uint32_t crc_32_tab[] = .....

  why do you duplicate this? The kernel has a perfectly good set of
  generic crc32 tables/functions.

* Why are you using bufferheads extensively in a new filesystem?

* + if (create)
  +         down_write(&ip->i_rw_mutex);
  + else
  +         down_read(&ip->i_rw_mutex);

  why do you use a rwsem and not a regular semaphore? You are aware that
  rwsems are far more expensive than regular ones, right? How skewed is
  the read/write ratio?

* Why use your own journalling layer and not say ... jbd ?
* + while (!kthread_should_stop()) {
  +         gfs2_scand_internal(sdp);
  +
  +         set_current_state(TASK_INTERRUPTIBLE);
  +         schedule_timeout(gfs2_tune_get(sdp, gt_scand_secs) * HZ);
  + }

  you probably really want to check for signals if you do interruptible
  sleeps (multiple places)

* why not use msleep() and friends instead of schedule_timeout(), you're
  not using the complex variants anyway

* +++ b/fs/gfs2/fixed_div64.h 2005-08-01 14:13:08.009808200 +0800

  ehhhh why?

* int gfs2_copy2user(struct buffer_head *bh, char **buf, unsigned int offset,
  +                  unsigned int size)
  +{
  +       int error;
  +
  +       if (bh)
  +               error = copy_to_user(*buf, bh->b_data + offset, size);
  +       else
  +               error = clear_user(*buf, size);

  that looks to be missing a few kmaps.. what's the guarantee that b_data
  is actually, like, in lowmem?

* [PATCH 08/14] GFS: diaper device

  The diaper device is a block device within gfs that gets transparently
  inserted between the real device and the rest of the filesystem.

  hmmmm why not use device mapper or something? Is this really needed?
  Should it live in drivers/block? Doesn't this wrapper just increase
  the risk for memory deadlocks?

* [PATCH 06/14] GFS: logging and recovery

  quoting the Ren and Stimpy show is nice.. but did the Ren and Stimpy
  authors agree to license their stuff under the GPL?

* do_lock_wait

  that almost screams for using wait_event and related APIs

* +static inline void gfs2_log_lock(struct gfs2_sbd *sdp)
  +{
  +       spin_lock(&sdp->sd_log_lock);
  +}

  why the abstraction?
* Re: [PATCH 00/14] GFS
From: Jan Engelhardt @ 2005-08-02 14:57 UTC (permalink / raw)
To: Arjan van de Ven; +Cc: David Teigland, akpm, linux-kernel, linux-cluster

> * Why use your own journalling layer and not say ... jbd ?

Why does reiser use its own journalling layer and not say ... jbd ?

Jan Engelhardt
--
* Re: [PATCH 00/14] GFS
From: Arjan van de Ven @ 2005-08-02 15:02 UTC (permalink / raw)
To: Jan Engelhardt; +Cc: David Teigland, akpm, linux-kernel, linux-cluster

On Tue, 2005-08-02 at 16:57 +0200, Jan Engelhardt wrote:
> > * Why use your own journalling layer and not say ... jbd ?
>
> Why does reiser use its own journalling layer and not say ... jbd ?

because reiser got merged before jbd. Next question.

Now the question for GFS is still a valid one; there might be reasons to
not use it (which is fair enough) but if there's no real reason then
using jbd sounds a lot better given its maturity (and it is used by 2
filesystems in -mm already).
* Re: [PATCH 00/14] GFS
From: Hans Reiser @ 2005-08-03 1:00 UTC (permalink / raw)
To: Arjan van de Ven
Cc: Jan Engelhardt, David Teigland, akpm, linux-kernel, linux-cluster

Arjan van de Ven wrote:
> On Tue, 2005-08-02 at 16:57 +0200, Jan Engelhardt wrote:
> > > * Why use your own journalling layer and not say ... jbd ?
> >
> > Why does reiser use its own journalling layer and not say ... jbd ?
>
> because reiser got merged before jbd. Next question.

That is the wrong reason. We use our own journaling layer for the
reason that Vivaldi used his own melody.

I don't know anything about GFS, but expecting a filesystem author to
use a journaling layer he does not want to is a bit arrogant. Now, if
you got into details, and said jbd does X, Y and Z, and GFS does the
same X and Y, and does not do Z as well as jbd, that would be a more
serious comment. He might want to look at how reiser4 does wandering
logs instead of using jbd..... but I would never claim that for sure
some other author should be expected to use it..... and something like
changing one's journaling system is not something to do just before a
merge.....

> Now the question for GFS is still a valid one; there might be reasons
> to not use it (which is fair enough) but if there's no real reason
> then using jbd sounds a lot better given its maturity (and it is used
> by 2 filesystems in -mm already).
* Re: [PATCH 00/14] GFS
From: Kyle Moffett @ 2005-08-03 4:07 UTC (permalink / raw)
To: Hans Reiser
Cc: Arjan van de Ven, Jan Engelhardt, David Teigland, akpm,
    linux-kernel, linux-cluster

On Aug 2, 2005, at 21:00:02, Hans Reiser wrote:
> Arjan van de Ven wrote:
>> because reiser got merged before jbd. Next question.
> That is the wrong reason. We use our own journaling layer for the
> reason that Vivaldi used his own melody.
>
> I don't know anything about GFS, but expecting a filesystem author to
> use a journaling layer he does not want to is a bit arrogant. Now, if
> you got into details, and said jbd does X, Y and Z, and GFS does the
> same X and Y, and does not do Z as well as jbd, that would be a more
> serious comment. He might want to look at how reiser4 does wandering
> logs instead of using jbd..... but I would never claim that for sure
> some other author should be expected to use it..... and something like
> changing one's journaling system is not something to do just before a
> merge.....

I don't want to start another big reiser4 flamewar, but...

"I don't know anything about Reiser4, but expecting a filesystem author
to use a VFS layer he does not want to is a bit arrogant. Now, if you
got into details, and said the linux VFS does X, Y, and Z, and Reiser4
does..."

Do you see my point here? If every person who added new kernel code
just wrote their own thing without checking to see if it had already
been done before, then there would be a lot of poorly maintained code
in the kernel. If a journalling layer already exists, _new_ journaled
filesystems should either (A) use the layer as is, or (B) fix the layer
so it has sufficient functionality for them to use, and submit patches.
That way if somebody later says, "Ah, crap, there's a bug in the kernel
journalling layer", and fixes it, there are not eight other filesystems
with their own open-coded layers that need to be audited for similar
mistakes. This is similar to why some kernel developers did not like
the Reiser4 code, because it implemented some private layers that
looked kinda like stuff the VFS should be doing (Again, I don't want to
get into that argument again, I'm just bringing up the similarities to
clarify _this_ particular point, as that one has been beaten to death
enough already).

>> Now the question for GFS is still a valid one; there might be reasons
>> to not use it (which is fair enough) but if there's no real reason
>> then using jbd sounds a lot better given its maturity (and it is used
>> by 2 filesystems in -mm already).

Personally, I am of the opinion that if GFS cannot use jbd, the
developers ought to clarify why it isn't usable, and possibly submit
fixes to make it useful, so that others can share the benefits.

Cheers,
Kyle Moffett

--
I lost interest in "blade servers" when I found they didn't throw
knives at people who weren't supposed to be in your machine room.
  -- Anthony de Boer
* Re: [PATCH 00/14] GFS
From: Jan Engelhardt @ 2005-08-03 6:37 UTC (permalink / raw)
To: Kyle Moffett
Cc: Hans Reiser, Arjan van de Ven, David Teigland, akpm, linux-kernel,
    linux-cluster

>> > because reiser got merged before jbd. Next question.
>>
>> That is the wrong reason. We use our own journaling layer for the
>> reason that Vivaldi used his own melody.
>>
>> [...] He might want to look at how reiser4 does wandering
>> logs instead of using jbd..... but I would never claim that for sure
>> some other author should be expected to use it..... and something like
>> changing one's journaling system is not something to do just before a
>> merge.....
>
> Do you see my point here? If every person who added new kernel code
> just wrote their own thing without checking to see if it had already
> been done before, then there would be a lot of poorly maintained code
> in the kernel. If a journalling layer already exists, _new_ journaled
> filesystems should either (A) use the layer as is, or (B) fix the layer
> so it has sufficient functionality for them to use, and submit patches.

Maybe jbd 'sucks' for something 'cool' like reiser*, and modifying jbd
to be 'eleet enough' for reiser* would overwhelm ext. Lastly, there is
the 'political' thing, when a <your-favorite-jbd-fs>-only specific
change to jbd is rejected by all other jbd-using fs. (Basically the
situation that leads to software forks, in any area.)

Jan Engelhardt
--
* Re: [PATCH 00/14] GFS
From: Arjan van de Ven @ 2005-08-03 9:09 UTC (permalink / raw)
To: Hans Reiser
Cc: Jan Engelhardt, David Teigland, akpm, linux-kernel, linux-cluster

> I don't know anything about GFS, but expecting a filesystem author to
> use a journaling layer he does not want to is a bit arrogant.

good that I didn't expect that then. I think it's fair enough to ask
people if they can use it. If the answer is "No because it doesn't fit
our model <here>" then that's fine. If the answer is "eh yeah we could"
then I think it's entirely reasonable to expect people to use common
code as opposed to adding new code.
* Re: [PATCH 00/14] GFS
From: David Teigland @ 2005-08-03 3:56 UTC (permalink / raw)
To: Arjan van de Ven; +Cc: akpm, linux-kernel, linux-cluster

On Tue, Aug 02, 2005 at 09:45:24AM +0200, Arjan van de Ven wrote:

> * The on disk structures are defined in terms of uint32_t and friends,
> which are NOT endian neutral. Why are they not le32/be32 and thus
> endian-defined? Did you run bitwise-sparse on GFS yet?

GFS has had proper endian handling for many years, and it's still
correct as far as we've been able to test. I ran bitwise-sparse
yesterday and didn't find anything alarming.

> * None of your on disk structures are packed. Are you sure?

Quite; particular attention has been paid to aligning the structure
fields, you'll find "pad" fields throughout. We'll write a quick test
to verify that packing doesn't change anything.

> +#define gfs2_16_to_cpu be16_to_cpu
> +#define gfs2_32_to_cpu be32_to_cpu
> +#define gfs2_64_to_cpu be64_to_cpu
>
> why this pointless abstracting?

#ifdef GFS2_ENDIAN_BIG
#define gfs2_16_to_cpu be16_to_cpu
#define gfs2_32_to_cpu be32_to_cpu
#define gfs2_64_to_cpu be64_to_cpu
#define cpu_to_gfs2_16 cpu_to_be16
#define cpu_to_gfs2_32 cpu_to_be32
#define cpu_to_gfs2_64 cpu_to_be64
#else /* GFS2_ENDIAN_BIG */
#define gfs2_16_to_cpu le16_to_cpu
#define gfs2_32_to_cpu le32_to_cpu
#define gfs2_64_to_cpu le64_to_cpu
#define cpu_to_gfs2_16 cpu_to_le16
#define cpu_to_gfs2_32 cpu_to_le32
#define cpu_to_gfs2_64 cpu_to_le64
#endif /* GFS2_ENDIAN_BIG */

The point is you can define GFS2_ENDIAN_BIG to compile gfs to be BE
on-disk instead of LE, which is another useful way to verify endian
correctness. You should be able to use gfs in mixed architecture and
mixed endian clusters. We don't have a mixed endian cluster to test,
though.

> * +static const uint32_t crc_32_tab[] = .....
> why do you duplicate this? The kernel has a perfectly good set of
> generic crc32 tables/functions just fine

We'll try them, they'll probably do fine.

> * Why use your own journalling layer and not say ... jbd ?

Here's an analysis of three approaches to cluster-fs journaling and
their pros/cons (including using jbd): http://tinyurl.com/7sbqq

> * + while (!kthread_should_stop()) {
> +         gfs2_scand_internal(sdp);
> +
> +         set_current_state(TASK_INTERRUPTIBLE);
> +         schedule_timeout(gfs2_tune_get(sdp, gt_scand_secs) * HZ);
> + }
>
> you probably really want to check for signals if you do interruptible
> sleeps

I don't know why we'd be interested in signals here.

> * why not use msleep() and friends instead of schedule_timeout(),
> you're not using the complex variants anyway

When unmounting we really appreciate waking up more often than the
timeout, otherwise the unmount sits and waits for the longest daemon's
msleep to complete. I converted this to msleep recently but it was too
painful and had to go back.

We'll get to your other comments, thanks.
Dave
* Re: [PATCH 00/14] GFS
From: Arjan van de Ven @ 2005-08-03 9:17 UTC (permalink / raw)
To: David Teigland; +Cc: akpm, linux-kernel, linux-cluster

On Wed, 2005-08-03 at 11:56 +0800, David Teigland wrote:
> The point is you can define GFS2_ENDIAN_BIG to compile gfs to be BE
> on-disk instead of LE which is another useful way to verify endian
> correctness.

that sounds wrong to be a compile option. If you really want to deal
with dual disk endianness it really ought to be a runtime one (see
jffs2 for example).

> > * + while (!kthread_should_stop()) {
> > +         gfs2_scand_internal(sdp);
> > +
> > +         set_current_state(TASK_INTERRUPTIBLE);
> > +         schedule_timeout(gfs2_tune_get(sdp, gt_scand_secs) * HZ);
> > + }
> >
> > you probably really want to check for signals if you do
> > interruptible sleeps
>
> I don't know why we'd be interested in signals here.

well.. because if you don't your schedule_timeout becomes a nop when
you get one, which makes your loop a busy waiting one.
* Re: [PATCH 00/14] GFS
From: David Teigland @ 2005-08-03 10:08 UTC (permalink / raw)
To: Arjan van de Ven; +Cc: akpm, linux-kernel, linux-cluster

On Wed, Aug 03, 2005 at 11:17:09AM +0200, Arjan van de Ven wrote:
> On Wed, 2005-08-03 at 11:56 +0800, David Teigland wrote:
> > The point is you can define GFS2_ENDIAN_BIG to compile gfs to be BE
> > on-disk instead of LE which is another useful way to verify endian
> > correctness.
>
> that sounds wrong to be a compile option. If you really want to deal
> with dual disk endianness it really ought to be a runtime one (see
> jffs2 for example).

We don't want BE to be an "option" per se; as developers we'd just like
to be able to compile it that way to verify gfs's endianness handling.
If you think that's unmaintainable or a bad idea we'll rip it out.

> > > you probably really want to check for signals if you do
> > > interruptible sleeps
> >
> > I don't know why we'd be interested in signals here.
>
> well.. because if you don't your schedule_timeout becomes a nop when
> you get one, which makes your loop a busy waiting one.

OK, it looks like we need to block/flush signals a la daemonize(); I
guess I mistakenly figured the kthread routines did everything
daemonize did.

Thanks,
Dave
* Re: [PATCH 00/14] GFS
From: Lars Marowsky-Bree @ 2005-08-03 10:37 UTC (permalink / raw)
To: David Teigland, Arjan van de Ven; +Cc: akpm, linux-kernel, linux-cluster

On 2005-08-03T11:56:18, David Teigland <teigland@redhat.com> wrote:

> > * Why use your own journalling layer and not say ... jbd ?
> Here's an analysis of three approaches to cluster-fs journaling and
> their pros/cons (including using jbd): http://tinyurl.com/7sbqq

Very instructive read, thanks for the link.

--
High Availability & Clustering
SUSE Labs, Research and Development
SUSE LINUX Products GmbH - A Novell Business

"Ignorance more frequently begets confidence than does knowledge"
  -- Charles Darwin
* Re: [PATCH 00/14] GFS
From: Mark Fasheh @ 2005-08-03 18:54 UTC (permalink / raw)
To: Lars Marowsky-Bree
Cc: David Teigland, Arjan van de Ven, akpm, linux-kernel, linux-cluster

On Wed, Aug 03, 2005 at 12:37:44PM +0200, Lars Marowsky-Bree wrote:
> On 2005-08-03T11:56:18, David Teigland <teigland@redhat.com> wrote:
>
> > > * Why use your own journalling layer and not say ... jbd ?
> > Here's an analysis of three approaches to cluster-fs journaling and
> > their pros/cons (including using jbd): http://tinyurl.com/7sbqq
>
> Very instructive read, thanks for the link.

While it may be true that for a full log, flushing for a *single* lock
may be more expensive in OCFS2, Ken ignores the fact that in our one
big flush we've made all locks on journalled resources immediately
releasable. According to that description, GFS2 would have to do a
separate transaction flush (including the extra step of writing revoke
records) for each lock protecting a journalled resource. Assuming the
same number of locks are required to be dropped under both systems,
then for a number of locks > 1 OCFS2 will actually do less work - the
actual metadata blocks would be the same on either end, but JBD only
has to write that the journal is now clean to the journal superblock,
whereas GFS2 has to revoke the blocks for each dropped lock.

Of course all of this talk completely avoids the fact that in any case
these things are expensive, so a cluster file system has to take care
to ping locks as little as possible. OCFS2 takes great pains to make as
many operations node local (requiring no cluster locks) as possible -
data allocation is usually done from a node local pool which is
refreshed from the main bitmap. Deallocation happens similarly - we
have a truncate log in which we record deleted clusters. Each node has
its own inode and metadata chain allocators which another node will
only lock for delete (a truncate log style local metadata delete log
could easily be added if that ever became a problem).
	--Mark

--
Mark Fasheh
Senior Software Developer, Oracle
mark.fasheh@oracle.com
* Re: [PATCH 00/14] GFS
From: David Teigland @ 2005-08-05 7:14 UTC (permalink / raw)
To: Arjan van de Ven; +Cc: akpm, linux-kernel, linux-cluster

On Tue, Aug 02, 2005 at 09:45:24AM +0200, Arjan van de Ven wrote:

> * +static const uint32_t crc_32_tab[] = .....
> why do you duplicate this? The kernel has a perfectly good set of
> generic crc32 tables/functions just fine

The gfs2_disk_hash() function and the crc table on which it's based are
a part of gfs2_ondisk.h: the ondisk metadata specification. This is a
bit unusual since gfs uses a hash table on-disk for its directory
structure. This header, including the hash function/table, must be
included by user space programs like fsck that want to decipher a fs,
and any change to the function or table would effectively make the fs
corrupted. Because of this I think it's best for gfs to keep its own
copy as part of its ondisk format spec.

> * Why are you using bufferheads extensively in a new filesystem?

bh's are used for metadata, the log, and journaled data which need to
be written at the block granularity, not page.

> why do you use a rwsem and not a regular semaphore? You are aware that
> rwsems are far more expensive than regular ones right? How skewed is
> the read/write ratio?

Aware, yes, it's the only rwsem in gfs. Specific skew, no, we'll have
to measure that.

> * +++ b/fs/gfs2/fixed_div64.h 2005-08-01 14:13:08.009808200 +0800
> ehhhh why?

I'm not sure, actually, apart from the comments:

do_div:
  /* For ia32 we need to pull some tricks to get past various versions
     of the compiler which do not like us using do_div in the middle of
     large functions. */

do_mod:
  /* Side effect free 64 bit mod operation */

fs/xfs/linux-2.6/xfs_linux.h (the origin of this file) has the same
thing, perhaps this is an old problem that's now fixed?

> * int gfs2_copy2user(struct buffer_head *bh, char **buf, unsigned int offset,
> +                  unsigned int size)
> +{
> +       int error;
> +
> +       if (bh)
> +               error = copy_to_user(*buf, bh->b_data + offset, size);
> +       else
> +               error = clear_user(*buf, size);
>
> that looks to be missing a few kmaps.. whats the guarantee that b_data
> is actually, like in lowmem?

This is only used in the specific case of reading a journaled-data
file. That seems to effectively be the same as reading a buffer of fs
metadata.

> The diaper device is a block device within gfs that gets transparently
> inserted between the real device and the rest of the filesystem.
>
> hmmmm why not use device mapper or something? Is this really needed?

This is needed for the "withdraw" feature (described in the comment)
which is fairly important. We'll see if dm could be used instead.

Thanks,
Dave
* Re: [Linux-cluster] Re: [PATCH 00/14] GFS
From: Mike Christie @ 2005-08-05 7:27 UTC (permalink / raw)
To: linux clustering; +Cc: Arjan van de Ven, akpm, linux-kernel

David Teigland wrote:
> On Tue, Aug 02, 2005 at 09:45:24AM +0200, Arjan van de Ven wrote:
>
>> * Why are you using bufferheads extensively in a new filesystem?
>
> bh's are used for metadata, the log, and journaled data which need to
> be written at the block granularity, not page.

In a scsi tree
http://kernel.org/git/?p=linux/kernel/git/jejb/scsi-block-2.6.git;a=summary
there is a function, bio_map_kern(), in fs.c that maps a buffer into a
bio. It does not have to be page granularity. Can something like that
be used in these places?
* Re: [Linux-cluster] Re: [PATCH 00/14] GFS
From: Mike Christie @ 2005-08-05 7:30 UTC (permalink / raw)
To: linux clustering; +Cc: akpm, linux-kernel, Arjan van de Ven

Mike Christie wrote:
> In a scsi tree
> http://kernel.org/git/?p=linux/kernel/git/jejb/scsi-block-2.6.git;a=summary

oh yeah it is in -mm too.

> there is a function, bio_map_kern(), in fs.c that maps a buffer into a
> bio. It does not have to be page granularity. Can something like that
> be used in these places?
* Re: [PATCH 00/14] GFS
From: Arjan van de Ven @ 2005-08-05 7:34 UTC (permalink / raw)
To: David Teigland; +Cc: akpm, linux-kernel, linux-cluster

On Fri, 2005-08-05 at 15:14 +0800, David Teigland wrote:
> The gfs2_disk_hash() function and the crc table on which it's based
> are a part of gfs2_ondisk.h: the ondisk metadata specification. This
> is a bit unusual since gfs uses a hash table on-disk for its directory
> structure. This header, including the hash function/table, must be
> included by user space programs like fsck that want to decipher a fs,
> and any change to the function or table would effectively make the fs
> corrupted. Because of this I think it's best for gfs to keep its own
> copy as part of its ondisk format spec.

for userspace there's libcrc32 as well. If it's *the* bog standard
crc32 I don't see a reason why your "spec" can't just reference that
instead. And esp in the kernel you should just use the in kernel one
not your own regardless; you can assume the in kernel one is optimized
and it also keeps size down.
* Re: [PATCH 00/14] GFS
From: David Teigland @ 2005-08-05 9:44 UTC (permalink / raw)
To: Arjan van de Ven; +Cc: akpm, linux-kernel, linux-cluster

On Fri, Aug 05, 2005 at 09:34:38AM +0200, Arjan van de Ven wrote:
> for userspace there's libcrc32 as well. If it's *the* bog standard
> crc32 I don't see a reason why your "spec" can't just reference that
> instead. And esp in the kernel you should just use the in kernel one
> not your own regardless; you can assume the in kernel one is optimized
> and it also keeps size down.

linux/lib/crc32table.h : crc32table_le[] is the same as our
crc_32_tab[]. This looks like a standard that's not going to change, as
you've said, so including crc32table.h and getting rid of our own table
would work fine.

Do we go a step beyond this and use say the crc32() function from
linux/crc32.h? Is this _function_ as standard and unchanging as the
table of crcs? In my tests it doesn't produce the same results as our
gfs2_disk_hash() function, even with both using the same crc table. I
don't mind adopting a new function and just writing a user space
equivalent for the tools if it's a fixed standard.

Dave
* Re: [PATCH 00/14] GFS
From: Jörn Engel @ 2005-08-05 10:07 UTC (permalink / raw)
To: David Teigland; +Cc: Arjan van de Ven, akpm, linux-kernel, linux-cluster

On Fri, 5 August 2005 17:44:52 +0800, David Teigland wrote:
>
> linux/lib/crc32table.h : crc32table_le[] is the same as our
> crc_32_tab[]. This looks like a standard that's not going to change,
> as you've said, so including crc32table.h and getting rid of our own
> table would work fine.
>
> Do we go a step beyond this and use say the crc32() function from
> linux/crc32.h? Is this _function_ as standard and unchanging as the
> table of crcs? In my tests it doesn't produce the same results as our
> gfs2_disk_hash() function, even with both using the same crc table. I
> don't mind adopting a new function and just writing a user space
> equivalent for the tools if it's a fixed standard.

The function is basically set in stone. Variants exist depending on how
it is called. I know of four variants, but there may be more:

1. Initial value is 0
2. Initial value is 0xffffffff
a) Result is taken as-is
b) Result is XORed with 0xffffffff

Maybe your code implements 1a, while you tried 2b with the lib/crc32.c
function or something similar?

Jörn

--
And spam is a useful source of entropy for /dev/random too!
  -- Jasmine Strong
* Re: [PATCH 00/14] GFS
  2005-08-05 10:07 ` Jörn Engel
@ 2005-08-05 10:31 ` David Teigland
  0 siblings, 0 replies; 79+ messages in thread
From: David Teigland @ 2005-08-05 10:31 UTC (permalink / raw)
To: Jörn Engel; +Cc: Arjan van de Ven, akpm, linux-kernel, linux-cluster

On Fri, Aug 05, 2005 at 12:07:50PM +0200, Jörn Engel wrote:
> On Fri, 5 August 2005 17:44:52 +0800, David Teigland wrote:
> > Do we go a step beyond this and use say the crc32() function from
> > linux/crc32.h? Is this _function_ as standard and unchanging as the table
> > of crcs? In my tests it doesn't produce the same results as our
> > gfs2_disk_hash() function, even with both using the same crc table. I
> > don't mind adopting a new function and just writing a user space
> > equivalent for the tools if it's a fixed standard.
>
> The function is basically set in stone. Variants exist depending on
> how it is called. I know of four variants, but there may be more:
>
> 1. Initial value is 0
> 2. Initial value is 0xffffffff
> a) Result is taken as-is
> b) Result is XORed with 0xffffffff
>
> Maybe your code implements 1a, while you tried 2b with the lib/crc32.c
> function or something similar?

You're right, initial value 0xffffffff and xor result with 0xffffffff
matches the results from our function. Great, we can get rid of
gfs2_disk_hash() and use crc32() directly.

Thanks,
Dave

^ permalink raw reply	[flat|nested] 79+ messages in thread
* Re: [PATCH 00/14] GFS 2005-08-05 7:14 ` David Teigland 2005-08-05 7:27 ` [Linux-cluster] " Mike Christie 2005-08-05 7:34 ` Arjan van de Ven @ 2005-08-05 8:28 ` Jan Engelhardt 2005-08-05 8:34 ` Arjan van de Ven 2005-08-08 6:26 ` David Teigland 3 siblings, 1 reply; 79+ messages in thread From: Jan Engelhardt @ 2005-08-05 8:28 UTC (permalink / raw) To: David Teigland; +Cc: Arjan van de Ven, akpm, linux-kernel, linux-cluster >The gfs2_disk_hash() function and the crc table on which it's based are a >part of gfs2_ondisk.h: the ondisk metadata specification. This is a bit >unusual since gfs uses a hash table on-disk for its directory structure. >This header, including the hash function/table, must be included by user >space programs like fsck that want to decipher a fs, and any change to the >function or table would effectively make the fs corrupted. Because of >this I think it's best for gfs to keep it's own copy as part of its ondisk >format spec. Tune the spec to use kernel and libcrc32 tables and bump the version number of the spec to e.g. GFS 2.1. That way, things transform smoothly and could go out eventually at some later date. Jan Engelhardt -- ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 00/14] GFS 2005-08-05 8:28 ` Jan Engelhardt @ 2005-08-05 8:34 ` Arjan van de Ven 0 siblings, 0 replies; 79+ messages in thread From: Arjan van de Ven @ 2005-08-05 8:34 UTC (permalink / raw) To: Jan Engelhardt; +Cc: David Teigland, akpm, linux-kernel, linux-cluster On Fri, 2005-08-05 at 10:28 +0200, Jan Engelhardt wrote: > >The gfs2_disk_hash() function and the crc table on which it's based are a > >part of gfs2_ondisk.h: the ondisk metadata specification. This is a bit > >unusual since gfs uses a hash table on-disk for its directory structure. > >This header, including the hash function/table, must be included by user > >space programs like fsck that want to decipher a fs, and any change to the > >function or table would effectively make the fs corrupted. Because of > >this I think it's best for gfs to keep it's own copy as part of its ondisk > >format spec. > > Tune the spec to use kernel and libcrc32 tables and bump the version number of > the spec to e.g. GFS 2.1. That way, things transform smoothly and could go out > eventually at some later date. afaik the tables aren't actually different. So no need to bump the spec! ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 00/14] GFS 2005-08-05 7:14 ` David Teigland ` (2 preceding siblings ...) 2005-08-05 8:28 ` Jan Engelhardt @ 2005-08-08 6:26 ` David Teigland 3 siblings, 0 replies; 79+ messages in thread From: David Teigland @ 2005-08-08 6:26 UTC (permalink / raw) To: Arjan van de Ven, akpm; +Cc: linux-kernel, linux-cluster On Fri, Aug 05, 2005 at 03:14:15PM +0800, David Teigland wrote: > On Tue, Aug 02, 2005 at 09:45:24AM +0200, Arjan van de Ven wrote: > > * +++ b/fs/gfs2/fixed_div64.h 2005-08-01 14:13:08.009808200 +0800 > > ehhhh why? > > I'm not sure, actually, apart from the comments: > > do_div: /* For ia32 we need to pull some tricks to get past various versions > of the compiler which do not like us using do_div in the middle > of large functions. */ > > do_mod: /* Side effect free 64 bit mod operation */ > > fs/xfs/linux-2.6/xfs_linux.h (the origin of this file) has the same thing, > perhaps this is an old problem that's now fixed? I've looked into getting rid of these: - The existing do_div() works fine for me with 64 bit numerators, so I'll get rid of the "fixed" version. - The "fixed" do_mod() seems to be the only way to do 64 bit modulus. It would be great if I was wrong about that... Thanks, Dave ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 00/14] GFS 2005-08-02 7:45 ` Arjan van de Ven ` (2 preceding siblings ...) 2005-08-05 7:14 ` David Teigland @ 2005-08-11 6:06 ` David Teigland 2005-08-11 6:55 ` Arjan van de Ven 3 siblings, 1 reply; 79+ messages in thread From: David Teigland @ 2005-08-11 6:06 UTC (permalink / raw) To: Arjan van de Ven; +Cc: akpm, linux-kernel, linux-cluster On Tue, Aug 02, 2005 at 09:45:24AM +0200, Arjan van de Ven wrote: > * + if (create) > + down_write(&ip->i_rw_mutex); > + else > + down_read(&ip->i_rw_mutex); > > why do you use a rwsem and not a regular semaphore? You are aware that > rwsems are far more expensive than regular ones right? How skewed is > the read/write ratio? Rough tests show around 4/1, that high or low? ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 00/14] GFS 2005-08-11 6:06 ` David Teigland @ 2005-08-11 6:55 ` Arjan van de Ven 0 siblings, 0 replies; 79+ messages in thread From: Arjan van de Ven @ 2005-08-11 6:55 UTC (permalink / raw) To: David Teigland; +Cc: akpm, linux-kernel, linux-cluster On Thu, 2005-08-11 at 14:06 +0800, David Teigland wrote: > On Tue, Aug 02, 2005 at 09:45:24AM +0200, Arjan van de Ven wrote: > > > * + if (create) > > + down_write(&ip->i_rw_mutex); > > + else > > + down_read(&ip->i_rw_mutex); > > > > why do you use a rwsem and not a regular semaphore? You are aware that > > rwsems are far more expensive than regular ones right? How skewed is > > the read/write ratio? > > Rough tests show around 4/1, that high or low? that's quite borderline; if it was my code I'd not use a rwsem for that ratio (my own rule of thumb, based on not a lot other than gut feeling) is a 10/1 ratio at minimum... but it's not so low that it screams for removing it. However.... it might well make your code a lot simpler so it might still be worth simplifying. ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 00/14] GFS 2005-08-02 7:18 [PATCH 00/14] GFS David Teigland 2005-08-02 7:45 ` Arjan van de Ven @ 2005-08-02 10:16 ` Pekka Enberg 2005-08-03 6:36 ` David Teigland 2005-08-03 6:44 ` [PATCH 00/14] GFS Pekka Enberg ` (2 subsequent siblings) 4 siblings, 1 reply; 79+ messages in thread From: Pekka Enberg @ 2005-08-02 10:16 UTC (permalink / raw) To: David Teigland; +Cc: akpm, linux-kernel, linux-cluster, Pekka Enberg Hi David, On 8/2/05, David Teigland <teigland@redhat.com> wrote: > Hi, GFS (Global File System) is a cluster file system that we'd like to > see added to the kernel. The 14 patches total about 900K so I won't send > them to the list unless that's requested. Comments and suggestions are > welcome. Thanks > +#define kmalloc_nofail(size, flags) \ > + gmalloc_nofail((size), (flags), __FILE__, __LINE__) [snip] > +void *gmalloc_nofail_real(unsigned int size, int flags, char *file, > + unsigned int line) > +{ > + void *x; > + for (;;) { > + x = kmalloc(size, flags); > + if (x) > + return x; > + if (time_after_eq(jiffies, gfs2_malloc_warning + 5 * HZ)) { > + printk("GFS2: out of memory: %s, %u\n", > + __FILE__, __LINE__); > + gfs2_malloc_warning = jiffies; > + } > + yield(); This does not belong in a filesystem. It also seems like a very bad idea. What are you trying to do here? If you absolutely must not fail, use __GFP_NOFAIL instead. 
> + } > +} > + > +#if defined(GFS2_MEMORY_SIMPLE) > + > +atomic_t gfs2_memory_count; > + > +void gfs2_memory_add_i(void *data, char *file, unsigned int line) > +{ > + atomic_inc(&gfs2_memory_count); > +} > + > +void gfs2_memory_rm_i(void *data, char *file, unsigned int line) > +{ > + if (data) > + atomic_dec(&gfs2_memory_count); > +} > + > +void *gmalloc(unsigned int size, int flags, char *file, unsigned int line) > +{ > + void *data = kmalloc(size, flags); > + if (data) > + atomic_inc(&gfs2_memory_count); > + return data; > +} > + > +void *gmalloc_nofail(unsigned int size, int flags, char *file, > + unsigned int line) > +{ > + atomic_inc(&gfs2_memory_count); > + return gmalloc_nofail_real(size, flags, file, line); > +} > + > +void gfree(void *data, char *file, unsigned int line) > +{ > + if (data) { > + atomic_dec(&gfs2_memory_count); > + kfree(data); > + } > +} -mm has memory leak detection patches and there are others floating around. Please do not introduce yet another subsystem-specific debug allocator. Pekka ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 00/14] GFS 2005-08-02 10:16 ` Pekka Enberg @ 2005-08-03 6:36 ` David Teigland 2005-08-08 14:14 ` GFS Pekka J Enberg 0 siblings, 1 reply; 79+ messages in thread From: David Teigland @ 2005-08-03 6:36 UTC (permalink / raw) To: Pekka Enberg; +Cc: akpm, linux-kernel, linux-cluster, Pekka Enberg On Tue, Aug 02, 2005 at 01:16:53PM +0300, Pekka Enberg wrote: > > +void *gmalloc_nofail_real(unsigned int size, int flags, char *file, > > + unsigned int line) > > +{ > > + void *x; > > + for (;;) { > > + x = kmalloc(size, flags); > > + if (x) > > + return x; > > + if (time_after_eq(jiffies, gfs2_malloc_warning + 5 * HZ)) { > > + printk("GFS2: out of memory: %s, %u\n", > > + __FILE__, __LINE__); > > + gfs2_malloc_warning = jiffies; > > + } > > + yield(); > > This does not belong in a filesystem. It also seems like a very bad > idea. What are you trying to do here? If you absolutely must not fail, > use __GFP_NOFAIL instead. will do, carried over from before NOFAIL existed > -mm has memory leak detection patches and there are others floating > around. Please do not introduce yet another subsystem-specific debug > allocator. ok, thanks Dave ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: GFS 2005-08-03 6:36 ` David Teigland @ 2005-08-08 14:14 ` Pekka J Enberg 2005-08-08 18:32 ` GFS Zach Brown 2005-08-10 5:59 ` GFS David Teigland 0 siblings, 2 replies; 79+ messages in thread From: Pekka J Enberg @ 2005-08-08 14:14 UTC (permalink / raw) To: David Teigland; +Cc: Pekka Enberg, akpm, linux-kernel, linux-cluster David Teigland writes: > +static ssize_t walk_vm_hard(struct file *file, char *buf, size_t size, > + loff_t *offset, do_rw_t operation) > +{ > + struct gfs2_holder *ghs; > + unsigned int num_gh = 0; > + ssize_t count; > + > + { Can we please get rid of the extra braces everywhere? [snip] David Teigland writes: > + > + for (vma = find_vma(mm, start); vma; vma = vma->vm_next) { > + if (end <= vma->vm_start) > + break; > + if (vma->vm_file && > + vma->vm_file->f_dentry->d_inode->i_sb == sb) { > + num_gh++; > + } > + } > + > + ghs = kmalloc((num_gh + 1) * sizeof(struct gfs2_holder), > + GFP_KERNEL); > + if (!ghs) { > + if (!dumping) > + up_read(&mm->mmap_sem); > + return -ENOMEM; > + } > + > + for (vma = find_vma(mm, start); vma; vma = vma->vm_next) { Sorry if this is an obvious question but what prevents another thread from doing mmap() before we do the second walk and messing up num_gh? > + if (end <= vma->vm_start) > + break; > + if (vma->vm_file) { > + struct inode *inode; > + inode = vma->vm_file->f_dentry->d_inode; > + if (inode->i_sb == sb) > + gfs2_holder_init(get_v2ip(inode)->i_gl, > + vma2state(vma), > + 0, &ghs[x++]); > + } > + } Pekka ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: GFS 2005-08-08 14:14 ` GFS Pekka J Enberg @ 2005-08-08 18:32 ` Zach Brown 2005-08-09 14:49 ` GFS Pekka Enberg 2005-08-10 5:59 ` GFS David Teigland 1 sibling, 1 reply; 79+ messages in thread From: Zach Brown @ 2005-08-08 18:32 UTC (permalink / raw) To: Pekka J Enberg Cc: David Teigland, Pekka Enberg, akpm, linux-kernel, linux-cluster, mark.fasheh Pekka J Enberg wrote: > Sorry if this is an obvious question but what prevents another thread > from doing mmap() before we do the second walk and messing up num_gh? Nothing, I suspect. OCFS2 has a problem like this, too. It wants a way for a file system to serialize mmap/munmap/mremap during file IO. Well, more specifically, it wants to make sure that the locks it acquired at the start of the IO really cover the buf regions that might fault during the IO.. mapping activity during the IO can wreck that. - z ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: GFS 2005-08-08 18:32 ` GFS Zach Brown @ 2005-08-09 14:49 ` Pekka Enberg 2005-08-09 17:17 ` GFS Zach Brown 2005-08-10 7:21 ` GFS Christoph Hellwig 0 siblings, 2 replies; 79+ messages in thread From: Pekka Enberg @ 2005-08-09 14:49 UTC (permalink / raw) To: Zach Brown Cc: David Teigland, Pekka Enberg, akpm, linux-kernel, linux-cluster, mark.fasheh On Mon, 2005-08-08 at 11:32 -0700, Zach Brown wrote: > > Sorry if this is an obvious question but what prevents another thread > > from doing mmap() before we do the second walk and messing up num_gh? > > Nothing, I suspect. OCFS2 has a problem like this, too. It wants a way > for a file system to serialize mmap/munmap/mremap during file IO. Well, > more specifically, it wants to make sure that the locks it acquired at > the start of the IO really cover the buf regions that might fault during > the IO.. mapping activity during the IO can wreck that. In addition, the vma walk will become an unmaintainable mess as soon as someone introduces another mmap() capable fs that needs similar locking. I am not an expert so could someone please explain why this cannot be done with a_ops->prepare_write and friends? Pekka ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: GFS 2005-08-09 14:49 ` GFS Pekka Enberg @ 2005-08-09 17:17 ` Zach Brown 2005-08-09 18:35 ` GFS Pekka J Enberg 2005-08-10 4:48 ` GFS Pekka J Enberg 2005-08-10 7:21 ` GFS Christoph Hellwig 1 sibling, 2 replies; 79+ messages in thread From: Zach Brown @ 2005-08-09 17:17 UTC (permalink / raw) To: Pekka Enberg Cc: David Teigland, Pekka Enberg, akpm, linux-kernel, linux-cluster, mark.fasheh Pekka Enberg wrote: > In addition, the vma walk will become an unmaintainable mess as soon > as someone introduces another mmap() capable fs that needs similar > locking. Yup, I suspect that if the core kernel ends up caring about this problem then the VFS will be involved in helping file systems sort the locks they'll acquire around IO. > I am not an expert so could someone please explain why this cannot be > done with a_ops->prepare_write and friends? I'll try, briefly. Usually clustered file systems in Linux maintain data consistency for normal posix IO by holding DLM locks for the duration of their file->{read,write} methods. A task on a node won't be able to read until all tasks on other nodes have finished any conflicting writes they might have been performing, etc, nothing surprising here. Now say we want to extend consistency guarantees to mmap(). This boils down to protecting mappings with DLM locks. Say a page is mapped for reading, the continued presence of that mapping is protected by holding a DLM lock. If another node goes to write to that page, the read lock is revoked and the mapping is torn down. These locks are acquired in a_ops->nopage as the task faults and tries to bring up the mapping. And that's the problem. Because they're acquired in ->nopage they can be acquired during a fault that is servicing the 'buf' argument to an outer file->{read,write} operation which has grabbed a lock for the target file. Acquiring multiple locks introduces the risk of ABBA deadlocks. 
It's trivial to construct examples of mmap(), read(), and write() on 2 nodes with 2 files that deadlock. So clustered file systems in Linux (GFS, Lustre, OCFS2, (GPFS?)) all walk vmas in their file->{read,write} to discover mappings that belong to their files so that they can preemptively sort and acquire the locks that will be needed to cover the mappings that might be established in ->nopage. As you point out, this both relies on the mappings not changing and gets very exciting when you mix files and mappings between file systems that are each sorting and acquiring their own DLM locks. I brought this up with some people at the kernel summit but no one, including myself, considers it a high priority. It wouldn't be too hard to construct a patch if people want to take a look. - z ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: GFS 2005-08-09 17:17 ` GFS Zach Brown @ 2005-08-09 18:35 ` Pekka J Enberg 2005-08-10 4:48 ` GFS Pekka J Enberg 1 sibling, 0 replies; 79+ messages in thread From: Pekka J Enberg @ 2005-08-09 18:35 UTC (permalink / raw) To: Zach Brown Cc: David Teigland, Pekka Enberg, akpm, linux-kernel, linux-cluster, mark.fasheh Hi Zach, Zach Brown writes: > I'll try, briefly. Thanks for the excellent explanation. Zach Brown writes: > And that's the problem. Because they're acquired in ->nopage they can > be acquired during a fault that is servicing the 'buf' argument to an > outer file->{read,write} operation which has grabbed a lock for the > target file. Acquiring multiple locks introduces the risk of ABBA > deadlocks. It's trivial to construct examples of mmap(), read(), and > write() on 2 nodes with 2 files that deadlock. But couldn't we use make_pages_present() to figure which locks we need, sort them, and then grab them? Zach Brown writes: > I brought this up with some people at the kernel summit but no one, > including myself, considers it a high priority. It wouldn't be too hard > to construct a patch if people want to take a look. I guess it's not a problem as long as the kernel has zero or one cluster filesystems that support mmap(). After we have two or more, we have a problem. The GFS2 vma walk needs fixing anyway, I think, as it can lead to buffer overflow (if we have more locks during the second walk). Pekka ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: GFS 2005-08-09 17:17 ` GFS Zach Brown 2005-08-09 18:35 ` GFS Pekka J Enberg @ 2005-08-10 4:48 ` Pekka J Enberg 1 sibling, 0 replies; 79+ messages in thread From: Pekka J Enberg @ 2005-08-10 4:48 UTC (permalink / raw) To: Pekka J Enberg Cc: Zach Brown, David Teigland, Pekka Enberg, akpm, linux-kernel, linux-cluster, mark.fasheh Zach Brown writes: > But couldn't we use make_pages_present() to figure which locks we need, > sort them, and then grab them? Doh, obviously we can't as nopage() needs to bring the page in. Sorry about that. I also thought of another failure case for the vma walk. When a thread uses userspace memcpy() between two clusterfs mmap'd regions instead of write() or read(). Pekka ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: GFS 2005-08-09 14:49 ` GFS Pekka Enberg 2005-08-09 17:17 ` GFS Zach Brown @ 2005-08-10 7:21 ` Christoph Hellwig 2005-08-10 7:31 ` GFS Pekka J Enberg 1 sibling, 1 reply; 79+ messages in thread From: Christoph Hellwig @ 2005-08-10 7:21 UTC (permalink / raw) To: Pekka Enberg Cc: Zach Brown, David Teigland, Pekka Enberg, akpm, linux-kernel, linux-cluster, mark.fasheh On Tue, Aug 09, 2005 at 05:49:43PM +0300, Pekka Enberg wrote: > On Mon, 2005-08-08 at 11:32 -0700, Zach Brown wrote: > > > Sorry if this is an obvious question but what prevents another thread > > > from doing mmap() before we do the second walk and messing up num_gh? > > > > Nothing, I suspect. OCFS2 has a problem like this, too. It wants a way > > for a file system to serialize mmap/munmap/mremap during file IO. Well, > > more specifically, it wants to make sure that the locks it acquired at > > the start of the IO really cover the buf regions that might fault during > > the IO.. mapping activity during the IO can wreck that. > > In addition, the vma walk will become an unmaintainable mess as soon as > someone introduces another mmap() capable fs that needs similar locking. We already have OCFS2 in -mm that does similar things. I think we need to solve this in common code before either of them can be merged. ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: GFS 2005-08-10 7:21 ` GFS Christoph Hellwig @ 2005-08-10 7:31 ` Pekka J Enberg 2005-08-10 16:26 ` GFS Mark Fasheh 0 siblings, 1 reply; 79+ messages in thread From: Pekka J Enberg @ 2005-08-10 7:31 UTC (permalink / raw) To: Christoph Hellwig Cc: Zach Brown, David Teigland, Pekka Enberg, akpm, linux-kernel, linux-cluster, mark.fasheh On Tue, Aug 09, 2005 at 05:49:43PM +0300, Pekka Enberg wrote: > > In addition, the vma walk will become an unmaintainable mess as soon as > > someone introduces another mmap() capable fs that needs similar locking. Christoph Hellwig writes: > We already have OCFS2 in -mm that does similar things. I think we need > to solve this in common code before either of them can be merged. It seems to me that the distributed locks must be acquired in ->nopage anyway to solve the problem with memcpy() between two mmap'd regions. One possible solution would be for the lock manager to detect deadlocks and break some locks accordingly. Don't know how well that would mix with ->nopage though... Pekka ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: GFS
  2005-08-10 7:31 ` GFS Pekka J Enberg
@ 2005-08-10 16:26 ` Mark Fasheh
  2005-08-10 16:57 ` GFS Pekka J Enberg
  0 siblings, 1 reply; 79+ messages in thread
From: Mark Fasheh @ 2005-08-10 16:26 UTC (permalink / raw)
To: Pekka J Enberg
Cc: Christoph Hellwig, Zach Brown, David Teigland, Pekka Enberg, akpm,
  linux-kernel, linux-cluster

On Wed, Aug 10, 2005 at 10:31:04AM +0300, Pekka J Enberg wrote:
> It seems to me that the distributed locks must be acquired in ->nopage
> anyway to solve the problem with memcpy() between two mmap'd regions. One
> possible solution would be for the lock manager to detect deadlocks and
> break some locks accordingly. Don't know how well that would mix with
> ->nopage though...

Yeah, my experience with ->nopage so far has indicated to me that we are to
avoid erroring out if at all possible, which I believe is what we'd have to
do if a deadlock is found. Also, I'm not sure how multiple dlms would
coordinate deadlock detection in that case.

This may sound naive, but so far OCFS2 has avoided the need for deadlock
detection... I'd hate to have to add it now -- better to try avoiding them
in the first place.
	--Mark

--
Mark Fasheh
Senior Software Developer, Oracle
mark.fasheh@oracle.com

^ permalink raw reply	[flat|nested] 79+ messages in thread
* Re: GFS
  2005-08-10 16:26 ` GFS Mark Fasheh
@ 2005-08-10 16:57 ` Pekka J Enberg
  2005-08-10 18:21 ` GFS Mark Fasheh
  0 siblings, 1 reply; 79+ messages in thread
From: Pekka J Enberg @ 2005-08-10 16:57 UTC (permalink / raw)
To: Mark Fasheh
Cc: Christoph Hellwig, Zach Brown, David Teigland, Pekka Enberg, akpm,
  linux-kernel, linux-cluster

Hi Mark,

Mark Fasheh writes:
> This may sound naive, but so far OCFS2 has avoided the need for deadlock
> detection... I'd hate to have to add it now -- better to try avoiding them
> in the first place.

Surely avoiding them is preferred, but how do you do that when you have two
mmap'd regions where userspace does memcpy()? The kernel won't have much
say in it until ->nopage. We cannot grab all the required locks in proper
order here because we don't know what size the buffer is. That's why I
think lock sorting won't work for all of the cases and thus the problem
needs to be taken care of by the dlm.

Pekka

^ permalink raw reply	[flat|nested] 79+ messages in thread
* Re: GFS 2005-08-10 16:57 ` GFS Pekka J Enberg @ 2005-08-10 18:21 ` Mark Fasheh 2005-08-10 20:18 ` GFS Pekka J Enberg 0 siblings, 1 reply; 79+ messages in thread From: Mark Fasheh @ 2005-08-10 18:21 UTC (permalink / raw) To: Pekka J Enberg Cc: Christoph Hellwig, Zach Brown, David Teigland, Pekka Enberg, akpm, linux-kernel, linux-cluster On Wed, Aug 10, 2005 at 07:57:43PM +0300, Pekka J Enberg wrote: > Surely avoiding them is preferred but how do you do that when you have to > mmap'd regions where userspace does memcpy()? The kernel won't much saying > in it until ->nopage. We cannot grab all the required locks in proper order > here because we don't know what size the buffer is. That's why I think lock > sorting won't work of all the cases and thus the problem needs to be taken > care of by the dlm. Hmm, well today in OCFS2 if you're not coming from read or write, the lock is held only for the duration of ->nopage so I don't think we could get into any deadlocks for that usage. --Mark -- Mark Fasheh Senior Software Developer, Oracle mark.fasheh@oracle.com ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: GFS
  2005-08-10 18:21 ` GFS Mark Fasheh
@ 2005-08-10 20:18 ` Pekka J Enberg
  2005-08-10 22:07 ` GFS Mark Fasheh
  0 siblings, 1 reply; 79+ messages in thread
From: Pekka J Enberg @ 2005-08-10 20:18 UTC (permalink / raw)
To: Mark Fasheh
Cc: Christoph Hellwig, Zach Brown, David Teigland, Pekka Enberg, akpm,
  linux-kernel, linux-cluster

Mark Fasheh writes:
> Hmm, well today in OCFS2 if you're not coming from read or write, the lock
> is held only for the duration of ->nopage so I don't think we could get into
> any deadlocks for that usage.

Aah, I see GFS2 does that too so no deadlocks here. Thanks.

You, however, don't maintain the same level of data consistency when reads
and writes are from other filesystems as they use ->nopage. Fixing this
requires a generic vma walk in every write() and read(), no? That doesn't
seem such a hot idea, which brings us back to using ->nopage for taking the
locks (but now the deadlocks are back).

Pekka

^ permalink raw reply	[flat|nested] 79+ messages in thread
* Re: GFS 2005-08-10 20:18 ` GFS Pekka J Enberg @ 2005-08-10 22:07 ` Mark Fasheh 2005-08-11 4:41 ` GFS Pekka J Enberg 0 siblings, 1 reply; 79+ messages in thread From: Mark Fasheh @ 2005-08-10 22:07 UTC (permalink / raw) To: Pekka J Enberg Cc: Christoph Hellwig, Zach Brown, David Teigland, Pekka Enberg, akpm, linux-kernel, linux-cluster On Wed, Aug 10, 2005 at 11:18:48PM +0300, Pekka J Enberg wrote: > Aah, I see GFS2 does that too so no deadlocks here. Thanks. Yep, no problem :) > You, however, don't maintain the same level of data consistency when reads > and writes are from other filesystems as they use ->nopage. I'm not sure what you mean here... > Fixing this requires a generic vma walk in every write() and read(), no? > That doesn't seem such an hot idea which brings us back to using ->nopage > for taking the locks (but now the deadlocks are back). Yeah if you look through mmap.c in ocfs2_fill_ctxt_from_buf() we do this... Or am I misunderstanding what you mean? --Mark -- Mark Fasheh Senior Software Developer, Oracle mark.fasheh@oracle.com ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: GFS
  2005-08-10 22:07 ` GFS Mark Fasheh
@ 2005-08-11 4:41 ` Pekka J Enberg
  0 siblings, 0 replies; 79+ messages in thread
From: Pekka J Enberg @ 2005-08-11 4:41 UTC (permalink / raw)
To: Mark Fasheh
Cc: Christoph Hellwig, Zach Brown, David Teigland, Pekka Enberg, akpm,
  linux-kernel, linux-cluster

Hi,

On Wed, Aug 10, 2005 at 11:18:48PM +0300, Pekka J Enberg wrote:
> > You, however, don't maintain the same level of data consistency when reads
> > and writes are from other filesystems as they use ->nopage.

Mark Fasheh writes:
> I'm not sure what you mean here...

Reading and writing from other filesystems to a GFS2 mmap'd file does not
walk the vmas. Therefore, data consistency guarantees are different:

- A GFS2 filesystem does a read that writes to a GFS2 mmap'd file -> we
  take all locks for the mmap'd buffer in order and release them after
  read() is done.

- An ext3 filesystem, for example, does a read that writes to a GFS2
  mmap'd file -> we now take locks one page at a time, releasing them
  before we exit ->nopage(). Other nodes are now free to write to the
  same GFS2 mmap'd file.

Or am I missing something here?

On Wed, Aug 10, 2005 at 11:18:48PM +0300, Pekka J Enberg wrote:
> > Fixing this requires a generic vma walk in every write() and read(), no?
> > That doesn't seem such a hot idea which brings us back to using ->nopage
> > for taking the locks (but now the deadlocks are back).

Mark Fasheh writes:
> Yeah if you look through mmap.c in ocfs2_fill_ctxt_from_buf() we do this...
> Or am I misunderstanding what you mean?

If we are doing write() or read() from some other filesystem, we don't
walk the vmas but instead rely on ->nopage for locking, right?

Pekka

^ permalink raw reply	[flat|nested] 79+ messages in thread
* Re: GFS 2005-08-08 14:14 ` GFS Pekka J Enberg 2005-08-08 18:32 ` GFS Zach Brown @ 2005-08-10 5:59 ` David Teigland 2005-08-10 6:06 ` GFS Pekka J Enberg 1 sibling, 1 reply; 79+ messages in thread From: David Teigland @ 2005-08-10 5:59 UTC (permalink / raw) To: Pekka J Enberg; +Cc: Pekka Enberg, akpm, linux-kernel, linux-cluster On Mon, Aug 08, 2005 at 05:14:45PM +0300, Pekka J Enberg wrote: if (!dumping) down_read(&mm->mmap_sem); > >+ > >+ for (vma = find_vma(mm, start); vma; vma = vma->vm_next) { > >+ if (end <= vma->vm_start) > >+ break; > >+ if (vma->vm_file && > >+ vma->vm_file->f_dentry->d_inode->i_sb == sb) { > >+ num_gh++; > >+ } > >+ } > >+ > >+ ghs = kmalloc((num_gh + 1) * sizeof(struct gfs2_holder), > >+ GFP_KERNEL); > >+ if (!ghs) { > >+ if (!dumping) > >+ up_read(&mm->mmap_sem); > >+ return -ENOMEM; > >+ } > >+ > >+ for (vma = find_vma(mm, start); vma; vma = vma->vm_next) { > > Sorry if this is an obvious question but what prevents another thread from > doing mmap() before we do the second walk and messing up num_gh? mm->mmap_sem ? ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: GFS 2005-08-10 5:59 ` GFS David Teigland @ 2005-08-10 6:06 ` Pekka J Enberg 0 siblings, 0 replies; 79+ messages in thread From: Pekka J Enberg @ 2005-08-10 6:06 UTC (permalink / raw) To: David Teigland; +Cc: Pekka Enberg, akpm, linux-kernel, linux-cluster David Teigland writes: > > if (!dumping) > down_read(&mm->mmap_sem); > > > + > > > + for (vma = find_vma(mm, start); vma; vma = vma->vm_next) { > > > + if (end <= vma->vm_start) > > > + break; > > > + if (vma->vm_file && > > > + vma->vm_file->f_dentry->d_inode->i_sb == sb) { > > > + num_gh++; > > > + } > > > + } > > > + > > > + ghs = kmalloc((num_gh + 1) * sizeof(struct gfs2_holder), > > > + GFP_KERNEL); > > > + if (!ghs) { > > > + if (!dumping) > > > + up_read(&mm->mmap_sem); > > > + return -ENOMEM; > > > + } > > > + > > > + for (vma = find_vma(mm, start); vma; vma = vma->vm_next) { > > > > Sorry if this is an obvious question but what prevents another thread from > > doing mmap() before we do the second walk and messing up num_gh? > > mm->mmap_sem ? Aah, I read that !dumping expression the other way around. Sorry and thanks. Pekka ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 00/14] GFS 2005-08-02 7:18 [PATCH 00/14] GFS David Teigland 2005-08-02 7:45 ` Arjan van de Ven 2005-08-02 10:16 ` Pekka Enberg @ 2005-08-03 6:44 ` Pekka Enberg 2005-08-08 9:57 ` David Teigland 2005-08-09 15:20 ` [PATCH 00/14] GFS Al Viro 2005-08-11 8:17 ` GFS - updated patches David Teigland 4 siblings, 1 reply; 79+ messages in thread From: Pekka Enberg @ 2005-08-03 6:44 UTC (permalink / raw) To: David Teigland; +Cc: akpm, linux-kernel, linux-cluster, Pekka Enberg Hi David, Some more comments below. Pekka On 8/2/05, David Teigland <teigland@redhat.com> wrote: > +/** > + * inode_create - create a struct gfs2_inode > + * @i_gl: The glock covering the inode > + * @inum: The inode number > + * @io_gl: the iopen glock to acquire/hold (using holder in new gfs2_inode) > + * @io_state: the state the iopen glock should be acquired in > + * @ipp: pointer to put the returned inode in > + * > + * Returns: errno > + */ > + > +static int inode_create(struct gfs2_glock *i_gl, struct gfs2_inum *inum, > + struct gfs2_glock *io_gl, unsigned int io_state, > + struct gfs2_inode **ipp) > +{ > + struct gfs2_sbd *sdp = i_gl->gl_sbd; > + struct gfs2_inode *ip; > + int error = 0; > + > + RETRY_MALLOC(ip = kmem_cache_alloc(gfs2_inode_cachep, GFP_KERNEL), ip); Why do you want to do this? The callers can handle ENOMEM just fine. > +/** > + * gfs2_random - Generate a random 32-bit number > + * > + * Generate a semi-crappy 32-bit pseudo-random number without using > + * floating point. > + * > + * The PRNG is from "Numerical Recipes in C" (second edition), page 284. > + * > + * Returns: a 32-bit random number > + */ > + > +uint32_t gfs2_random(void) > +{ > + gfs2_random_number = 0x0019660D * gfs2_random_number + 0x3C6EF35F; > + return gfs2_random_number; > +} Please consider moving this into lib/random.c. This one already appears in drivers/net/hamradio/dmascc.c. 
> +/** > + * gfs2_hash - hash an array of data > + * @data: the data to be hashed > + * @len: the length of data to be hashed > + * > + * Take some data and convert it to a 32-bit hash. > + * > + * This is the 32-bit FNV-1a hash from: > + * http://www.isthe.com/chongo/tech/comp/fnv/ > + * > + * Returns: the hash > + */ > + > +uint32_t gfs2_hash(const void *data, unsigned int len) > +{ > + uint32_t h = 0x811C9DC5; > + h = hash_more_internal(data, len, h); > + return h; > +} Is there a reason why you cannot use <linux/hash.h> or <linux/jhash.h>? > +void gfs2_sort(void *base, unsigned int num_elem, unsigned int size, > + int (*compar) (const void *, const void *)) > +{ > + register char *pbase = (char *)base; > + int i, j, k, h; > + static int cols[16] = {1391376, 463792, 198768, 86961, > + 33936, 13776, 4592, 1968, > + 861, 336, 112, 48, > + 21, 7, 3, 1}; > + > + for (k = 0; k < 16; k++) { > + h = cols[k]; > + for (i = h; i < num_elem; i++) { > + j = i; > + while (j >= h && > + (*compar)((void *)(pbase + size * (j - h)), > + (void *)(pbase + size * j)) > 0) { > + SWAP(pbase + size * j, > + pbase + size * (j - h), > + size); > + j = j - h; > + } > + } > + } > +} Please use sort() from lib/sort.c. > +/** > + * gfs2_io_error_inode_i - Flag an inode I/O error and withdraw > + * @ip: > + * @function: > + * @file: > + * @line: Please drop empty kerneldoc tags. (Appears in various other places as well.) > +#define RETRY_MALLOC(do_this, until_this) \ > +for (;;) { \ > + { do_this; } \ > + if (until_this) \ > + break; \ > + if (time_after_eq(jiffies, gfs2_malloc_warning + 5 * HZ)) { \ > + printk("GFS2: out of memory: %s, %u\n", __FILE__, __LINE__); \ > + gfs2_malloc_warning = jiffies; \ > + } \ > + yield(); \ > +} Please drop this. 
> +int gfs2_acl_create(struct gfs2_inode *dip, struct gfs2_inode *ip) > +{ > + struct gfs2_sbd *sdp = dip->i_sbd; > + struct posix_acl *acl = NULL; > + struct gfs2_ea_request er; > + mode_t mode = ip->i_di.di_mode; > + int error; > + > + if (!sdp->sd_args.ar_posix_acl) > + return 0; > + if (S_ISLNK(ip->i_di.di_mode)) > + return 0; > + > + memset(&er, 0, sizeof(struct gfs2_ea_request)); > + er.er_type = GFS2_EATYPE_SYS; > + > + error = acl_get(dip, ACL_DEFAULT, &acl, NULL, > + &er.er_data, &er.er_data_len); > + if (error) > + return error; > + if (!acl) { > + mode &= ~current->fs->umask; > + if (mode != ip->i_di.di_mode) > + error = munge_mode(ip, mode); > + return error; > + } > + > + { > + struct posix_acl *clone = posix_acl_clone(acl, GFP_KERNEL); > + error = -ENOMEM; > + if (!clone) > + goto out; > + gfs2_memory_add(clone); > + gfs2_memory_rm(acl); > + posix_acl_release(acl); > + acl = clone; > + } Please make this a real function. It is duplicated below. > + if (error > 0) { > + er.er_name = GFS2_POSIX_ACL_ACCESS; > + er.er_name_len = GFS2_POSIX_ACL_ACCESS_LEN; > + posix_acl_to_xattr(acl, er.er_data, er.er_data_len); > + er.er_mode = mode; > + er.er_flags = GFS2_ERF_MODE; > + error = gfs2_system_eaops.eo_set(ip, &er); > + if (error) > + goto out; > + } else > + munge_mode(ip, mode); > + > + out: > + gfs2_memory_rm(acl); > + posix_acl_release(acl); > + kfree(er.er_data); > + > + return error; Whitespace damage. 
> +int gfs2_acl_chmod(struct gfs2_inode *ip, struct iattr *attr) > +{ > + struct posix_acl *acl = NULL; > + struct gfs2_ea_location el; > + char *data; > + unsigned int len; > + int error; > + > + error = acl_get(ip, ACL_ACCESS, &acl, &el, &data, &len); > + if (error) > + return error; > + if (!acl) > + return gfs2_setattr_simple(ip, attr); > + > + { > + struct posix_acl *clone = posix_acl_clone(acl, GFP_KERNEL); > + error = -ENOMEM; > + if (!clone) > + goto out; > + gfs2_memory_add(clone); > + gfs2_memory_rm(acl); > + posix_acl_release(acl); > + acl = clone; > + } Duplicated above. > +static int ea_foreach(struct gfs2_inode *ip, ea_call_t ea_call, void *data) > +{ > + struct buffer_head *bh; > + int error; > + > + error = gfs2_meta_read(ip->i_gl, ip->i_di.di_eattr, > + DIO_START | DIO_WAIT, &bh); > + if (error) > + return error; > + > + if (!(ip->i_di.di_flags & GFS2_DIF_EA_INDIRECT)) > + error = ea_foreach_i(ip, bh, ea_call, data); goto out here so you can drop the else branch below. > + else { > + struct buffer_head *eabh; > + uint64_t *eablk, *end; > + > + if (gfs2_metatype_check(ip->i_sbd, bh, GFS2_METATYPE_IN)) { > + error = -EIO; > + goto out; > + } > + > + eablk = (uint64_t *)(bh->b_data + > + sizeof(struct gfs2_meta_header)); > + end = eablk + ip->i_sbd->sd_inptrs; > + > +static int ea_find_i(struct gfs2_inode *ip, struct buffer_head *bh, > + struct gfs2_ea_header *ea, struct gfs2_ea_header *prev, > + void *private) > +{ > + struct ea_find *ef = (struct ea_find *)private; > + struct gfs2_ea_request *er = ef->ef_er; > + > + if (ea->ea_type == GFS2_EATYPE_UNUSED) > + return 0; > + > + if (ea->ea_type == er->er_type) { > + if (ea->ea_name_len == er->er_name_len && > + !memcmp(GFS2_EA2NAME(ea), er->er_name, ea->ea_name_len)) { > + struct gfs2_ea_location *el = ef->ef_el; > + get_bh(bh); > + el->el_bh = bh; > + el->el_ea = ea; > + el->el_prev = prev; > + return 1; > + } > + } > + > +#if 0 > + else if ((ip->i_di.di_flags & GFS2_DIF_EA_PACKED) && > + er->er_type 
== GFS2_EATYPE_SYS) > + return 1; > +#endif Please drop commented out code. > +static int ea_list_i(struct gfs2_inode *ip, struct buffer_head *bh, > + struct gfs2_ea_header *ea, struct gfs2_ea_header *prev, > + void *private) > +{ > + struct ea_list *ei = (struct ea_list *)private; Please drop redundant cast. > +static int ea_set_i(struct gfs2_inode *ip, struct gfs2_ea_request *er, > + struct gfs2_ea_location *el) > +{ > + { > + struct ea_set es; > + int error; > + > + memset(&es, 0, sizeof(struct ea_set)); > + es.es_er = er; > + es.es_el = el; > + > + error = ea_foreach(ip, ea_set_simple, &es); > + if (error > 0) > + return 0; > + if (error) > + return error; > + } > + { > + unsigned int blks = 2; > + if (!(ip->i_di.di_flags & GFS2_DIF_EA_INDIRECT)) > + blks++; > + if (GFS2_EAREQ_SIZE_STUFFED(er) > ip->i_sbd->sd_jbsize) > + blks += DIV_RU(er->er_data_len, > + ip->i_sbd->sd_jbsize); > + > + return ea_alloc_skeleton(ip, er, blks, ea_set_block, el); > + } Please drop the extra braces. ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 00/14] GFS 2005-08-03 6:44 ` [PATCH 00/14] GFS Pekka Enberg @ 2005-08-08 9:57 ` David Teigland 2005-08-08 10:00 ` GFS Pekka J Enberg ` (5 more replies) 0 siblings, 6 replies; 79+ messages in thread From: David Teigland @ 2005-08-08 9:57 UTC (permalink / raw) To: Pekka Enberg; +Cc: akpm, linux-kernel, linux-cluster, Pekka Enberg On Wed, Aug 03, 2005 at 09:44:06AM +0300, Pekka Enberg wrote: > > +uint32_t gfs2_hash(const void *data, unsigned int len) > > +{ > > + uint32_t h = 0x811C9DC5; > > + h = hash_more_internal(data, len, h); > > + return h; > > +} > > Is there a reason why you cannot use <linux/hash.h> or <linux/jhash.h>? See gfs2_hash_more() and comment; we hash discontiguous regions. > > +#define RETRY_MALLOC(do_this, until_this) \ > > +for (;;) { \ > > + { do_this; } \ > > + if (until_this) \ > > + break; \ > > + if (time_after_eq(jiffies, gfs2_malloc_warning + 5 * HZ)) { \ > > + printk("GFS2: out of memory: %s, %u\n", __FILE__, __LINE__); \ > > + gfs2_malloc_warning = jiffies; \ > > + } \ > > + yield(); \ > > +} > > Please drop this. Done in the spot that could deal with an error, but there are three other places that still need it. > > +static int ea_set_i(struct gfs2_inode *ip, struct gfs2_ea_request *er, > > + struct gfs2_ea_location *el) > > +{ > > + { > > + struct ea_set es; > > + int error; > > + > > + memset(&es, 0, sizeof(struct ea_set)); > > + es.es_er = er; > > + es.es_el = el; > > + > > + error = ea_foreach(ip, ea_set_simple, &es); > > + if (error > 0) > > + return 0; > > + if (error) > > + return error; > > + } > > + { > > + unsigned int blks = 2; > > + if (!(ip->i_di.di_flags & GFS2_DIF_EA_INDIRECT)) > > + blks++; > > + if (GFS2_EAREQ_SIZE_STUFFED(er) > ip->i_sbd->sd_jbsize) > > + blks += DIV_RU(er->er_data_len, > > + ip->i_sbd->sd_jbsize); > > + > > + return ea_alloc_skeleton(ip, er, blks, ea_set_block, el); > > + } > > Please drop the extra braces. Here and elsewhere we try to keep unused stuff off the stack. 
Are you suggesting that we're being overly cautious, or do you just dislike the way it looks? Thanks, Dave ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: GFS 2005-08-08 9:57 ` David Teigland @ 2005-08-08 10:00 ` Pekka J Enberg 2005-08-08 10:05 ` [PATCH 00/14] GFS Arjan van de Ven ` (4 subsequent siblings) 5 siblings, 0 replies; 79+ messages in thread From: Pekka J Enberg @ 2005-08-08 10:00 UTC (permalink / raw) To: David Teigland; +Cc: Pekka Enberg, akpm, linux-kernel, linux-cluster David Teigland writes: > > > +static int ea_set_i(struct gfs2_inode *ip, struct gfs2_ea_request *er, > > > + struct gfs2_ea_location *el) > > > +{ > > > + { > > > + struct ea_set es; > > > + int error; > > > + > > > + memset(&es, 0, sizeof(struct ea_set)); > > > + es.es_er = er; > > > + es.es_el = el; > > > + > > > + error = ea_foreach(ip, ea_set_simple, &es); > > > + if (error > 0) > > > + return 0; > > > + if (error) > > > + return error; > > > + } > > > + { > > > + unsigned int blks = 2; > > > + if (!(ip->i_di.di_flags & GFS2_DIF_EA_INDIRECT)) > > > + blks++; > > > + if (GFS2_EAREQ_SIZE_STUFFED(er) > ip->i_sbd->sd_jbsize) > > > + blks += DIV_RU(er->er_data_len, > > > + ip->i_sbd->sd_jbsize); > > > + > > > + return ea_alloc_skeleton(ip, er, blks, ea_set_block, el); > > > + } > > > > Please drop the extra braces. > > Here and elsewhere we try to keep unused stuff off the stack. Are you > suggesting that we're being overly cautious, or do you just dislike the > way it looks? The extra braces hurt readability. Please drop them or make them proper functions instead. And yes, I think you're hiding potential stack usage problems here. Small unused stuff on the stack doesn't matter, and large items should probably be kmalloc()'d anyway. Pekka ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 00/14] GFS 2005-08-08 9:57 ` David Teigland 2005-08-08 10:00 ` GFS Pekka J Enberg @ 2005-08-08 10:05 ` Arjan van de Ven 2005-08-08 10:20 ` Jörn Engel 2005-08-08 10:18 ` GFS Pekka J Enberg ` (3 subsequent siblings) 5 siblings, 1 reply; 79+ messages in thread From: Arjan van de Ven @ 2005-08-08 10:05 UTC (permalink / raw) To: David Teigland Cc: Pekka Enberg, akpm, linux-kernel, linux-cluster, Pekka Enberg On Mon, 2005-08-08 at 17:57 +0800, David Teigland wrote: > > > > Please drop the extra braces. > > Here and elsewhere we try to keep unused stuff off the stack. Are you > suggesting that we're being overly cautious, or do you just dislike the > way it looks? nice theory. In practice gcc 3.x still adds up all the stack space anyway and as long as gcc 3.x is a supported kernel compiler, you can't depend on this. Also.. please favor readability. gcc is getting smarter about stack use nowadays, and {}'s shouldn't be needed to help it, it tracks liveness of variables already. ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 00/14] GFS 2005-08-08 10:05 ` [PATCH 00/14] GFS Arjan van de Ven @ 2005-08-08 10:20 ` Jörn Engel 0 siblings, 0 replies; 79+ messages in thread From: Jörn Engel @ 2005-08-08 10:20 UTC (permalink / raw) To: Arjan van de Ven Cc: David Teigland, Pekka Enberg, akpm, linux-kernel, linux-cluster, Pekka Enberg On Mon, 8 August 2005 12:05:25 +0200, Arjan van de Ven wrote: > On Mon, 2005-08-08 at 17:57 +0800, David Teigland wrote: > > > > > > Please drop the extra braces. > > > > Here and elsewhere we try to keep unused stuff off the stack. Are you > > suggesting that we're being overly cautious, or do you just dislike the > > way it looks? > > nice theory. In practice gcc 3.x still adds up all the stack space > anyway and as long as gcc 3.x is a supported kernel compiler, you can't > depend on this. Also.. please favor readability. gcc is getting smarter > about stack use nowadays, and {}'s shouldn't be needed to help it, it > tracks liveness of variables already. Plus, you don't have to guess about stack usage. Run "make checkstack" or, better yet, run the objdump of fs/gfs/built-in.o through the perl script. Jörn -- It's just what we asked for, but not what we want! -- anonymous ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: GFS 2005-08-08 9:57 ` David Teigland 2005-08-08 10:00 ` GFS Pekka J Enberg 2005-08-08 10:05 ` [PATCH 00/14] GFS Arjan van de Ven @ 2005-08-08 10:18 ` Pekka J Enberg 2005-08-08 10:56 ` GFS David Teigland 2005-08-08 10:34 ` GFS Pekka J Enberg ` (2 subsequent siblings) 5 siblings, 1 reply; 79+ messages in thread From: Pekka J Enberg @ 2005-08-08 10:18 UTC (permalink / raw) To: David Teigland; +Cc: Pekka Enberg, akpm, linux-kernel, linux-cluster David Teigland writes: > > > +#define RETRY_MALLOC(do_this, until_this) \ > > > +for (;;) { \ > > > + { do_this; } \ > > > + if (until_this) \ > > > + break; \ > > > + if (time_after_eq(jiffies, gfs2_malloc_warning + 5 * HZ)) { \ > > > + printk("GFS2: out of memory: %s, %u\n", __FILE__, __LINE__); \ > > > + gfs2_malloc_warning = jiffies; \ > > > + } \ > > > + yield(); \ > > > +} > > > > Please drop this. > > Done in the spot that could deal with an error, but there are three other > places that still need it. Which places are those? I only see these: gfs2-02.patch:+ RETRY_MALLOC(ip = kmem_cache_alloc(gfs2_inode_cachep, GFP_KERNEL), ip); gfs2-02.patch-+ gfs2_memory_add(ip); gfs2-02.patch-+ memset(ip, 0, sizeof(struct gfs2_inode)); gfs2-02.patch-+ gfs2-02.patch-+ ip->i_num = *inum; gfs2-02.patch-+ -> GFP_NOFAIL. gfs2-02.patch:+ RETRY_MALLOC(page = grab_cache_page(aspace->i_mapping, index), gfs2-02.patch-+ page); gfs2-02.patch-+ } else { gfs2-02.patch-+ page = find_lock_page(aspace->i_mapping, index); gfs2-02.patch-+ if (!page) gfs2-02.patch-+ return NULL; I think you can set aspace->flags to GFP_NOFAIL but why can't you return NULL here on failure like you do for find_lock_page()? 
gfs2-02.patch:+ RETRY_MALLOC(bd = kmem_cache_alloc(gfs2_bufdata_cachep, GFP_KERNEL), gfs2-02.patch-+ bd); gfs2-02.patch-+ gfs2_memory_add(bd); gfs2-02.patch-+ atomic_inc(&gl->gl_sbd->sd_bufdata_count); gfs2-02.patch-+ gfs2-02.patch-+ memset(bd, 0, sizeof(struct gfs2_bufdata)); -> GFP_NOFAIL gfs2-08.patch:+ RETRY_MALLOC(gm = kmalloc(sizeof(struct gfs2_memory), GFP_KERNEL), gm); gfs2-08.patch-+ gm->gm_data = data; gfs2-08.patch-+ gm->gm_file = file; gfs2-08.patch-+ gm->gm_line = line; gfs2-08.patch-+ gfs2-08.patch-+ spin_lock(&memory_lock); -> GFP_NOFAIL gfs2-10.patch:+ RETRY_MALLOC(new_gh = gfs2_holder_get(gl, state, gfs2-10.patch-+ LM_FLAG_TRY | gfs2-10.patch-+ GL_NEVER_RECURSE), gfs2-10.patch-+ new_gh); gfs2-10.patch-+ set_bit(HIF_DEMOTE, &new_gh->gh_iflags); gfs2-10.patch-+ set_bit(HIF_DEALLOC, &new_gh->gh_iflags); gfs2_holder_get uses kmalloc which can use GFP_NOFAIL. Pekka ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: GFS 2005-08-08 10:18 ` GFS Pekka J Enberg @ 2005-08-08 10:56 ` David Teigland 2005-08-08 10:57 ` GFS Pekka J Enberg 0 siblings, 1 reply; 79+ messages in thread From: David Teigland @ 2005-08-08 10:56 UTC (permalink / raw) To: Pekka J Enberg; +Cc: Pekka Enberg, akpm, linux-kernel, linux-cluster On Mon, Aug 08, 2005 at 01:18:45PM +0300, Pekka J Enberg wrote: > gfs2-02.patch:+ RETRY_MALLOC(ip = kmem_cache_alloc(gfs2_inode_cachep, > -> GFP_NOFAIL. Already gone, inode_create() can return an error. if (create) { RETRY_MALLOC(page = grab_cache_page(aspace->i_mapping, index), page); } else { page = find_lock_page(aspace->i_mapping, index); if (!page) return NULL; } > I think you can set aspace->flags to GFP_NOFAIL will try that > but why can't you return NULL here on failure like you do for > find_lock_page()? because create is set > gfs2-02.patch:+ RETRY_MALLOC(bd = kmem_cache_alloc(gfs2_bufdata_cachep, > GFP_KERNEL), > -> GFP_NOFAIL It looks to me like NOFAIL does nothing for kmem_cache_alloc(). Am I seeing that wrong? > gfs2-10.patch:+ RETRY_MALLOC(new_gh = gfs2_holder_get(gl, state, > gfs2_holder_get uses kmalloc which can use GFP_NOFAIL. Which means adding a new gfp_flags parameter... fine. Dave ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: GFS 2005-08-08 10:56 ` GFS David Teigland @ 2005-08-08 10:57 ` Pekka J Enberg 2005-08-08 11:39 ` GFS David Teigland 0 siblings, 1 reply; 79+ messages in thread From: Pekka J Enberg @ 2005-08-08 10:57 UTC (permalink / raw) To: David Teigland; +Cc: Pekka Enberg, akpm, linux-kernel, linux-cluster David Teigland writes: > > but why can't you return NULL here on failure like you do for > > find_lock_page()? > > because create is set Yes, but looking at (some of the) top-level callers, there's no real reason why create must not fail. Am I missing something here? > > gfs2-02.patch:+ RETRY_MALLOC(bd = kmem_cache_alloc(gfs2_bufdata_cachep, > > GFP_KERNEL), > > -> GFP_NOFAIL > > It looks to me like NOFAIL does nothing for kmem_cache_alloc(). > Am I seeing that wrong? It is passed to the page allocator just like with kmalloc() which uses __cache_alloc() too. Pekka ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: GFS 2005-08-08 10:57 ` GFS Pekka J Enberg @ 2005-08-08 11:39 ` David Teigland 0 siblings, 0 replies; 79+ messages in thread From: David Teigland @ 2005-08-08 11:39 UTC (permalink / raw) To: Pekka J Enberg; +Cc: Pekka Enberg, akpm, linux-kernel, linux-cluster On Mon, Aug 08, 2005 at 01:57:55PM +0300, Pekka J Enberg wrote: > David Teigland writes: > >> but why can't you return NULL here on failure like you do for > >> find_lock_page()? > > > >because create is set > > Yes, but looking at (some of the) top-level callers, there's no real reason > why create must not fail. Am I missing something here? I'll trace the callers back farther and see about dealing with errors. > >> gfs2-02.patch:+ RETRY_MALLOC(bd = kmem_cache_alloc(gfs2_bufdata_cachep, > > It is passed to the page allocator just like with kmalloc() which uses > __cache_alloc() too. Yes, I read it wrongly, looks like NOFAIL should work fine. I think we can get rid of the RETRY macro entirely. Thanks, Dave ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: GFS 2005-08-08 9:57 ` David Teigland ` (2 preceding siblings ...) 2005-08-08 10:18 ` GFS Pekka J Enberg @ 2005-08-08 10:34 ` Pekka J Enberg 2005-08-09 14:55 ` GFS Pekka J Enberg 2005-08-10 7:40 ` GFS Pekka J Enberg 5 siblings, 0 replies; 79+ messages in thread From: Pekka J Enberg @ 2005-08-08 10:34 UTC (permalink / raw) To: David Teigland; +Cc: Pekka Enberg, akpm, linux-kernel, linux-cluster David Teigland writes: > > Is there a reason why you cannot use <linux/hash.h> or <linux/jhash.h>? > > See gfs2_hash_more() and comment; we hash discontiguous regions. jhash() takes an initial value. Isn't that sufficient? Pekka ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: GFS 2005-08-08 9:57 ` David Teigland ` (3 preceding siblings ...) 2005-08-08 10:34 ` GFS Pekka J Enberg @ 2005-08-09 14:55 ` Pekka J Enberg 2005-08-10 7:40 ` GFS Pekka J Enberg 5 siblings, 0 replies; 79+ messages in thread From: Pekka J Enberg @ 2005-08-09 14:55 UTC (permalink / raw) To: David Teigland; +Cc: akpm, linux-kernel, linux-cluster Hi David, Here are some more comments. Pekka +/************************************************************************** **** > +******************************************************************************* > +** > +** Copyright (C) Sistina Software, Inc. 1997-2003 All rights reserved. > +** Copyright (C) 2004-2005 Red Hat, Inc. All rights reserved. > +** > +** This copyrighted material is made available to anyone wishing to use, > +** modify, copy, or redistribute it subject to the terms and conditions > +** of the GNU General Public License v.2. > +** > +******************************************************************************* > +******************************************************************************/ Do you really need this verbose banner? > +#define NO_CREATE 0 > +#define CREATE 1 > + > +#define NO_WAIT 0 > +#define WAIT 1 > + > +#define NO_FORCE 0 > +#define FORCE 1 The code seems to interchangeably use FORCE and NO_FORCE together with TRUE and FALSE. Perhaps they could be dropped? > +#define GLF_PLUG 0 > +#define GLF_LOCK 1 > +#define GLF_STICKY 2 > +#define GLF_PREFETCH 3 > +#define GLF_SYNC 4 > +#define GLF_DIRTY 5 > +#define GLF_SKIP_WAITERS2 6 > +#define GLF_GREEDY 7 Would be nice if these were either enums or had a comment linking them to the struct member they are used for. > +#define GIF_MIN_INIT 0 > +#define GIF_QD_LOCKED 1 > +#define GIF_PAGED 2 > +#define GIF_SW_PAGED 3 Same here and in few other places too. 
> +#define LO_BEFORE_COMMIT(sdp) \ > +do { \ > + int __lops_x; \ > + for (__lops_x = 0; gfs2_log_ops[__lops_x]; __lops_x++) \ > + if (gfs2_log_ops[__lops_x]->lo_before_commit) \ > + gfs2_log_ops[__lops_x]->lo_before_commit((sdp)); \ > +} while (0) > + > +#define LO_AFTER_COMMIT(sdp, ai) \ > +do { \ > + int __lops_x; \ > + for (__lops_x = 0; gfs2_log_ops[__lops_x]; __lops_x++) \ > + if (gfs2_log_ops[__lops_x]->lo_after_commit) \ > + gfs2_log_ops[__lops_x]->lo_after_commit((sdp), (ai)); \ > +} while (0) > + > +#define LO_BEFORE_SCAN(jd, head, pass) \ > +do \ > +{ \ > + int __lops_x; \ > + for (__lops_x = 0; gfs2_log_ops[__lops_x]; __lops_x++) \ > + if (gfs2_log_ops[__lops_x]->lo_before_scan) \ > + gfs2_log_ops[__lops_x]->lo_before_scan((jd), (head), (pass)); \ > +} \ > +while (0) static inline functions, please. > +static inline int LO_SCAN_ELEMENTS(struct gfs2_jdesc *jd, unsigned int start, > + struct gfs2_log_descriptor *ld, > + unsigned int pass) Lower case name, please. > +{ > + unsigned int x; > + int error; > + > + for (x = 0; gfs2_log_ops[x]; x++) > + if (gfs2_log_ops[x]->lo_scan_elements) { > + error = gfs2_log_ops[x]->lo_scan_elements(jd, start, > + ld, pass); > + if (error) > + return error; > + } > + > + return 0; > +} > + > +#define LO_AFTER_SCAN(jd, error, pass) \ > +do \ > +{ \ > + int __lops_x; \ > + for (__lops_x = 0; gfs2_log_ops[__lops_x]; __lops_x++) \ > + if (gfs2_log_ops[__lops_x]->lo_before_scan) \ > + gfs2_log_ops[__lops_x]->lo_after_scan((jd), (error), (pass)); \ > +} \ > +while (0) static inline function, please. 
> + > +#include <linux/sched.h> > +#include <linux/slab.h> > +#include <linux/smp_lock.h> > +#include <linux/spinlock.h> > +#include <asm/semaphore.h> > +#include <linux/completion.h> > +#include <linux/buffer_head.h> > +#include <asm/uaccess.h> > +#include <linux/pagemap.h> > +#include <linux/uio.h> > +#include <linux/blkdev.h> > +#include <linux/mm.h> > +#include <asm/uaccess.h> > +#include <linux/gfs2_ioctl.h> Preferred order is to include linux/ first and asm/ after that. > +#define vma2state(vma) \ > +((((vma)->vm_flags & (VM_MAYWRITE | VM_MAYSHARE)) == \ > + (VM_MAYWRITE | VM_MAYSHARE)) ? \ > + LM_ST_EXCLUSIVE : LM_ST_SHARED) \ static inline function, please. The above is completely unreadable. > +struct inode *gfs2_ip2v(struct gfs2_inode *ip, int create) > +{ > + struct inode *inode = NULL, *tmp; > + > + gfs2_assert_warn(ip->i_sbd, > + test_bit(GIF_MIN_INIT, &ip->i_flags)); > + > + spin_lock(&ip->i_spin); > + if (ip->i_vnode) > + inode = igrab(ip->i_vnode); > + spin_unlock(&ip->i_spin); Suggestion: make the above a separate function __gfs2_lookup_inode(), use it here and where you pass NO_CREATE to get rid of the create parameter. > + > + if (inode || !create) > + return inode; > + > + tmp = new_inode(ip->i_sbd->sd_vfs); > + if (!tmp) > + return NULL; [snip] > + entries = gfs2_tune_get(sdp, gt_entries_per_readdir); > + size = sizeof(struct filldir_bad) + > + entries * (sizeof(struct filldir_bad_entry) + GFS2_FAST_NAME_SIZE); > + > + fdb = kmalloc(size, GFP_KERNEL); > + if (!fdb) > + return -ENOMEM; > + memset(fdb, 0, size); kzalloc(), which is in 2.6.13-rc6-mm5 please. Appears in other places as well. > + if (error) { > + printk("GFS2: fsid=%s: can't make FS RW: %d\n", > + sdp->sd_fsname, error); > + goto fail_proc; > + } > + } > + > + gfs2_glock_dq_uninit(&mount_gh); > + > + return 0; > + > + fail_proc: > + gfs2_proc_fs_del(sdp); > + init_threads(sdp, UNDO); Please provide a release_threads instead and make it deal with partial initialization. 
The above is very confusing. > + parent, > + strlen(system_utsname.nodename)); > + else if (gfs2_filecmp(&dentry->d_name, "@mach", 5)) > + new = lookup_one_len(system_utsname.machine, > + parent, > + strlen(system_utsname.machine)); > + else if (gfs2_filecmp(&dentry->d_name, "@os", 3)) > + new = lookup_one_len(system_utsname.sysname, > + parent, > + strlen(system_utsname.sysname)); > + else if (gfs2_filecmp(&dentry->d_name, "@uid", 4)) > + new = lookup_one_len(buf, > + parent, > + sprintf(buf, "%u", current->fsuid)); > + else if (gfs2_filecmp(&dentry->d_name, "@gid", 4)) > + new = lookup_one_len(buf, > + parent, > + sprintf(buf, "%u", current->fsgid)); > + else if (gfs2_filecmp(&dentry->d_name, "@sys", 4)) > + new = lookup_one_len(buf, > + parent, > + sprintf(buf, "%s_%s", > + system_utsname.machine, > + system_utsname.sysname)); > + else if (gfs2_filecmp(&dentry->d_name, "@jid", 4)) > + new = lookup_one_len(buf, > + parent, > + sprintf(buf, "%u", > + sdp->sd_jdesc->jd_jid)); Smells like policy in the kernel. Why can't this be done in the userspace? 
> + parent, > + strlen(system_utsname.nodename)); > + else if (gfs2_filecmp(&dentry->d_name, "{mach}", 6)) > + new = lookup_one_len(system_utsname.machine, > + parent, > + strlen(system_utsname.machine)); > + else if (gfs2_filecmp(&dentry->d_name, "{os}", 4)) > + new = lookup_one_len(system_utsname.sysname, > + parent, > + strlen(system_utsname.sysname)); > + else if (gfs2_filecmp(&dentry->d_name, "{uid}", 5)) > + new = lookup_one_len(buf, > + parent, > + sprintf(buf, "%u", current->fsuid)); > + else if (gfs2_filecmp(&dentry->d_name, "{gid}", 5)) > + new = lookup_one_len(buf, > + parent, > + sprintf(buf, "%u", current->fsgid)); > + else if (gfs2_filecmp(&dentry->d_name, "{sys}", 5)) > + new = lookup_one_len(buf, > + parent, > + sprintf(buf, "%s_%s", > + system_utsname.machine, > + system_utsname.sysname)); > + else if (gfs2_filecmp(&dentry->d_name, "{jid}", 5)) > + new = lookup_one_len(buf, > + parent, > + sprintf(buf, "%u", > + sdp->sd_jdesc->jd_jid)); Ditto. > +int gfs2_statfs_slow(struct gfs2_sbd *sdp, struct gfs2_statfs_change *sc) > +{ > + struct gfs2_holder ri_gh; > + struct gfs2_rgrpd *rgd_next; > + struct gfs2_holder *gha, *gh; > + unsigned int slots = 64; > + unsigned int x; > + int done; > + int error = 0, err; > + > + memset(sc, 0, sizeof(struct gfs2_statfs_change)); > + gha = kmalloc(slots * sizeof(struct gfs2_holder), GFP_KERNEL); > + if (!gha) > + return -ENOMEM; > + memset(gha, 0, slots * sizeof(struct gfs2_holder)); kcalloc, please > + line = kmalloc(256, GFP_KERNEL); > + if (!line) > + return -ENOMEM; > + > + len = snprintf(line, 256, "GFS2: fsid=%s: quota %s for %s %u\r\n", > + sdp->sd_fsname, type, > + (test_bit(QDF_USER, &qd->qd_flags)) ? "user" : "group", > + qd->qd_id); Please use constant instead of magic number 256. 
> +struct lm_lockops gdlm_ops = { > + lm_proto_name:"lock_dlm", > + lm_mount:gdlm_mount, > + lm_others_may_mount:gdlm_others_may_mount, > + lm_unmount:gdlm_unmount, > + lm_withdraw:gdlm_withdraw, > + lm_get_lock:gdlm_get_lock, > + lm_put_lock:gdlm_put_lock, > + lm_lock:gdlm_lock, > + lm_unlock:gdlm_unlock, > + lm_plock:gdlm_plock, > + lm_punlock:gdlm_punlock, > + lm_plock_get:gdlm_plock_get, > + lm_cancel:gdlm_cancel, > + lm_hold_lvb:gdlm_hold_lvb, > + lm_unhold_lvb:gdlm_unhold_lvb, > + lm_sync_lvb:gdlm_sync_lvb, > + lm_recovery_done:gdlm_recovery_done, > + lm_owner:THIS_MODULE, > +}; C99 initializers, please. ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: GFS 2005-08-08 9:57 ` David Teigland ` (4 preceding siblings ...) 2005-08-09 14:55 ` GFS Pekka J Enberg @ 2005-08-10 7:40 ` Pekka J Enberg 2005-08-10 7:43 ` GFS Christoph Hellwig 5 siblings, 1 reply; 79+ messages in thread From: Pekka J Enberg @ 2005-08-10 7:40 UTC (permalink / raw) To: David Teigland; +Cc: akpm, linux-kernel, linux-cluster Hi David, > + return -EINVAL; > + if (!access_ok(VERIFY_WRITE, buf, size)) > + return -EFAULT; > + > + if (!(file->f_flags & O_LARGEFILE)) { > + if (*offset >= 0x7FFFFFFFull) > + return -EFBIG; > + if (*offset + size > 0x7FFFFFFFull) > + size = 0x7FFFFFFFull - *offset; Please use a constant instead for 0x7FFFFFFFull. (Appears in various other places as well.) Pekka ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: GFS 2005-08-10 7:40 ` GFS Pekka J Enberg @ 2005-08-10 7:43 ` Christoph Hellwig 0 siblings, 0 replies; 79+ messages in thread From: Christoph Hellwig @ 2005-08-10 7:43 UTC (permalink / raw) To: Pekka J Enberg; +Cc: David Teigland, akpm, linux-kernel, linux-cluster On Wed, Aug 10, 2005 at 10:40:37AM +0300, Pekka J Enberg wrote: > Hi David, > > >+ return -EINVAL; > >+ if (!access_ok(VERIFY_WRITE, buf, size)) > >+ return -EFAULT; > >+ > >+ if (!(file->f_flags & O_LARGEFILE)) { > >+ if (*offset >= 0x7FFFFFFFull) > >+ return -EFBIG; > >+ if (*offset + size > 0x7FFFFFFFull) > >+ size = 0x7FFFFFFFull - *offset; > > Please use a constant instead for 0x7FFFFFFFull. (Appears in various other > places as well.) In fact this very much looks like it's duplicating generic_write_checks(). Folks, please use common code. ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 00/14] GFS 2005-08-02 7:18 [PATCH 00/14] GFS David Teigland ` (2 preceding siblings ...) 2005-08-03 6:44 ` [PATCH 00/14] GFS Pekka Enberg @ 2005-08-09 15:20 ` Al Viro 2005-08-10 7:03 ` Christoph Hellwig 2005-08-11 8:17 ` GFS - updated patches David Teigland 4 siblings, 1 reply; 79+ messages in thread From: Al Viro @ 2005-08-09 15:20 UTC (permalink / raw) To: David Teigland; +Cc: akpm, linux-kernel, linux-cluster On Tue, Aug 02, 2005 at 03:18:28PM +0800, David Teigland wrote: > Hi, GFS (Global File System) is a cluster file system that we'd like to > see added to the kernel. The 14 patches total about 900K so I won't send > them to the list unless that's requested. Comments and suggestions are > welcome. Thanks > > http://redhat.com/~teigland/gfs2/20050801/gfs2-full.patch > http://redhat.com/~teigland/gfs2/20050801/broken-out/ Kindly lose the "Context Dependent Pathname" crap. ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 00/14] GFS 2005-08-09 15:20 ` [PATCH 00/14] GFS Al Viro @ 2005-08-10 7:03 ` Christoph Hellwig 2005-08-10 10:30 ` Lars Marowsky-Bree 0 siblings, 1 reply; 79+ messages in thread From: Christoph Hellwig @ 2005-08-10 7:03 UTC (permalink / raw) To: Al Viro; +Cc: David Teigland, akpm, linux-kernel, linux-cluster On Tue, Aug 09, 2005 at 04:20:45PM +0100, Al Viro wrote: > On Tue, Aug 02, 2005 at 03:18:28PM +0800, David Teigland wrote: > > Hi, GFS (Global File System) is a cluster file system that we'd like to > > see added to the kernel. The 14 patches total about 900K so I won't send > > them to the list unless that's requested. Comments and suggestions are > > welcome. Thanks > > > > http://redhat.com/~teigland/gfs2/20050801/gfs2-full.patch > > http://redhat.com/~teigland/gfs2/20050801/broken-out/ > > Kindly lose the "Context Dependent Pathname" crap. Same for ocfs2. ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 00/14] GFS 2005-08-10 7:03 ` Christoph Hellwig @ 2005-08-10 10:30 ` Lars Marowsky-Bree 2005-08-10 10:32 ` Christoph Hellwig 0 siblings, 1 reply; 79+ messages in thread From: Lars Marowsky-Bree @ 2005-08-10 10:30 UTC (permalink / raw) To: Christoph Hellwig, Al Viro, David Teigland, akpm, linux-kernel, linux-cluster On 2005-08-10T08:03:09, Christoph Hellwig <hch@infradead.org> wrote: > > Kindly lose the "Context Dependent Pathname" crap. > Same for ocfs2. Would a generic implementation of that higher up in the VFS be more acceptable? It's not like context-dependent symlinks are an arbitary feature, but rather very useful in practice. Sincerely, Lars Marowsky-Brée <lmb@suse.de> -- High Availability & Clustering SUSE Labs, Research and Development SUSE LINUX Products GmbH - A Novell Business -- Charles Darwin "Ignorance more frequently begets confidence than does knowledge" ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 00/14] GFS 2005-08-10 10:30 ` Lars Marowsky-Bree @ 2005-08-10 10:32 ` Christoph Hellwig 2005-08-10 10:34 ` Lars Marowsky-Bree 0 siblings, 1 reply; 79+ messages in thread From: Christoph Hellwig @ 2005-08-10 10:32 UTC (permalink / raw) To: Lars Marowsky-Bree Cc: Christoph Hellwig, Al Viro, David Teigland, akpm, linux-kernel, linux-cluster On Wed, Aug 10, 2005 at 12:30:41PM +0200, Lars Marowsky-Bree wrote: > On 2005-08-10T08:03:09, Christoph Hellwig <hch@infradead.org> wrote: > > > > Kindly lose the "Context Dependent Pathname" crap. > > Same for ocfs2. > > Would a generic implementation of that higher up in the VFS be more > acceptable? No. Use mount --bind ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 00/14] GFS 2005-08-10 10:32 ` Christoph Hellwig @ 2005-08-10 10:34 ` Lars Marowsky-Bree 2005-08-10 10:54 ` Christoph Hellwig 0 siblings, 1 reply; 79+ messages in thread From: Lars Marowsky-Bree @ 2005-08-10 10:34 UTC (permalink / raw) To: Christoph Hellwig, Al Viro, David Teigland, akpm, linux-kernel, linux-cluster On 2005-08-10T11:32:56, Christoph Hellwig <hch@infradead.org> wrote: > > Would a generic implementation of that higher up in the VFS be more > > acceptable? > No. Use mount --bind That's a working and less complex alternative for up to how many places at once? And that works for non-root users how...? Sincerely, Lars Marowsky-Brée <lmb@suse.de> -- High Availability & Clustering SUSE Labs, Research and Development SUSE LINUX Products GmbH - A Novell Business -- Charles Darwin "Ignorance more frequently begets confidence than does knowledge"
* Re: [PATCH 00/14] GFS 2005-08-10 10:34 ` Lars Marowsky-Bree @ 2005-08-10 10:54 ` Christoph Hellwig 2005-08-10 11:02 ` Lars Marowsky-Bree 0 siblings, 1 reply; 79+ messages in thread From: Christoph Hellwig @ 2005-08-10 10:54 UTC (permalink / raw) To: Lars Marowsky-Bree Cc: Christoph Hellwig, Al Viro, David Teigland, akpm, linux-kernel, linux-cluster On Wed, Aug 10, 2005 at 12:34:24PM +0200, Lars Marowsky-Bree wrote: > On 2005-08-10T11:32:56, Christoph Hellwig <hch@infradead.org> wrote: > > > > Would a generic implementation of that higher up in the VFS be more > > > acceptable? > > No. Use mount --bind > > That's a working and less complex alternative for up to how many places > at once? That works for non-root users how...? It works now, unlike context links, which steal totally valid symlink targets for magic mushroom bullshit.
* Re: [PATCH 00/14] GFS 2005-08-10 10:54 ` Christoph Hellwig @ 2005-08-10 11:02 ` Lars Marowsky-Bree 2005-08-10 11:05 ` Christoph Hellwig 0 siblings, 1 reply; 79+ messages in thread From: Lars Marowsky-Bree @ 2005-08-10 11:02 UTC (permalink / raw) To: Christoph Hellwig, Al Viro, David Teigland, akpm, linux-kernel, linux-cluster On 2005-08-10T11:54:50, Christoph Hellwig <hch@infradead.org> wrote: > It works now. Unlike context link which steal totally valid symlink > targets for magic mushroom bullshit. Right, that is a valid concern. Avoiding context dependent symlinks entirely certainly is one possible path around this. But, let's just for the sake of this discussion continue the other path for a bit, to explore the options available for implementing CPS which don't result in shivers running down the spine, because I believe CPS do have some applications in which bind mounts are not entirely adequate replacements. (Unless, of course, you want a bind mount for each homedirectory which might include architecture-specific subdirectories or for every host-specific configuration file.) What would a syntax look like which in your opinion does not remove totally valid symlink targets for magic mushroom bullshit? Prefix with // (which, according to POSIX, allows for implementation-defined behaviour)? Something else, not allowed in a regular pathname? If we can't find an acceptable way of implementing them, maybe it's time to grab some magic mushrooms and come up with a new approach, then ;-) Sincerely, Lars Marowsky-Brée <lmb@suse.de> -- High Availability & Clustering SUSE Labs, Research and Development SUSE LINUX Products GmbH - A Novell Business -- Charles Darwin "Ignorance more frequently begets confidence than does knowledge" ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 00/14] GFS 2005-08-10 11:02 ` Lars Marowsky-Bree @ 2005-08-10 11:05 ` Christoph Hellwig 2005-08-10 11:09 ` Lars Marowsky-Bree 0 siblings, 1 reply; 79+ messages in thread From: Christoph Hellwig @ 2005-08-10 11:05 UTC (permalink / raw) To: Lars Marowsky-Bree Cc: Christoph Hellwig, Al Viro, David Teigland, akpm, linux-kernel, linux-cluster On Wed, Aug 10, 2005 at 01:02:59PM +0200, Lars Marowsky-Bree wrote: > What would a syntax look like which in your opinion does not remove > totally valid symlink targets for magic mushroom bullshit? Prefix with > // (which, according to POSIX, allows for implementation-defined > behaviour)? Something else, not allowed in a regular pathname? None. Just don't do it. Use bind mounts; they're cheap and have sane, defined semantics.
* Re: [PATCH 00/14] GFS 2005-08-10 11:05 ` Christoph Hellwig @ 2005-08-10 11:09 ` Lars Marowsky-Bree 2005-08-10 11:11 ` Christoph Hellwig 0 siblings, 1 reply; 79+ messages in thread From: Lars Marowsky-Bree @ 2005-08-10 11:09 UTC (permalink / raw) To: Christoph Hellwig, Al Viro, David Teigland, akpm, linux-kernel, linux-cluster On 2005-08-10T12:05:11, Christoph Hellwig <hch@infradead.org> wrote: > > What would a syntax look like which in your opinion does not remove > > totally valid symlink targets for magic mushroom bullshit? Prefix with > > // (which, according to POSIX, allows for implementation-defined > > behaviour)? Something else, not allowed in a regular pathname? > None. Just don't do it. Use bind mounts; they're cheap and have sane, > defined semantics. So for every directory hierarchy on a shared filesystem, each user needs to have the complete list of bind mounts needed, and automatically resync that across all nodes when a new one is added or removed? And then have that executed by root, because a regular user can't? Sure. Very cheap and sane. I'm buying. Sincerely, Lars Marowsky-Brée <lmb@suse.de> -- High Availability & Clustering SUSE Labs, Research and Development SUSE LINUX Products GmbH - A Novell Business -- Charles Darwin "Ignorance more frequently begets confidence than does knowledge"
* Re: [PATCH 00/14] GFS 2005-08-10 11:09 ` Lars Marowsky-Bree @ 2005-08-10 11:11 ` Christoph Hellwig 2005-08-10 13:26 ` [Linux-cluster] " AJ Lewis 0 siblings, 1 reply; 79+ messages in thread From: Christoph Hellwig @ 2005-08-10 11:11 UTC (permalink / raw) To: Lars Marowsky-Bree Cc: Christoph Hellwig, Al Viro, David Teigland, akpm, linux-kernel, linux-cluster On Wed, Aug 10, 2005 at 01:09:17PM +0200, Lars Marowsky-Bree wrote: > On 2005-08-10T12:05:11, Christoph Hellwig <hch@infradead.org> wrote: > > > > What would a syntax look like which in your opinion does not remove > > > totally valid symlink targets for magic mushroom bullshit? Prefix with > > > // (which, according to POSIX, allows for implementation-defined > > > behaviour)? Something else, not allowed in a regular pathname? > > None. Just don't do it. Use bind mounts; they're cheap and have sane, > > defined semantics. > > So for every directory hierarchy on a shared filesystem, each user needs > to have the complete list of bind mounts needed, and automatically resync > that across all nodes when a new one is added or removed? And then have > that executed by root, because a regular user can't? Do it in an initscript and let users simply not do it; they shouldn't even know what kind of filesystem they are on.
* Re: [Linux-cluster] Re: [PATCH 00/14] GFS 2005-08-10 11:11 ` Christoph Hellwig @ 2005-08-10 13:26 ` AJ Lewis 2005-08-10 15:43 ` Kyle Moffett 0 siblings, 1 reply; 79+ messages in thread From: AJ Lewis @ 2005-08-10 13:26 UTC (permalink / raw) To: Christoph Hellwig, Lars Marowsky-Bree, Al Viro, David Teigland, akpm, linux-kernel, linux-cluster [-- Attachment #1: Type: text/plain, Size: 1151 bytes --] On Wed, Aug 10, 2005 at 12:11:10PM +0100, Christoph Hellwig wrote: > On Wed, Aug 10, 2005 at 01:09:17PM +0200, Lars Marowsky-Bree wrote: > > So for every directory hierarchy on a shared filesystem, each user needs > > to have the complete list of bind mounts needed, and automatically resync > > that across all nodes when a new one is added or removed? And then have > > that executed by root, because a regular user can't? > > Do it in an initscript and let users simply not do it; they shouldn't > even know what kind of filesystem they are on. I'm just thinking of a 100-node cluster that has different mounts on different nodes, and trying to update the bind mounts in a sane and efficient manner without clobbering the various mount setups. Ouch. -- AJ Lewis Voice: 612-638-0500 Red Hat E-Mail: alewis@redhat.com One Main Street SE, Suite 209 Minneapolis, MN 55414 Current GPG fingerprint = D9F8 EDCE 4242 855F A03D 9B63 F50C 54A8 578C 8715 Grab the key at: http://people.redhat.com/alewis/gpg.html or one of the many keyservers out there... [-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]
* Re: [Linux-cluster] Re: [PATCH 00/14] GFS 2005-08-10 13:26 ` [Linux-cluster] " AJ Lewis @ 2005-08-10 15:43 ` Kyle Moffett 0 siblings, 0 replies; 79+ messages in thread From: Kyle Moffett @ 2005-08-10 15:43 UTC (permalink / raw) To: AJ Lewis Cc: Christoph Hellwig, Lars Marowsky-Bree, Al Viro, David Teigland, akpm, linux-kernel, linux-cluster

On Aug 10, 2005, at 09:26:26, AJ Lewis wrote:
> On Wed, Aug 10, 2005 at 12:11:10PM +0100, Christoph Hellwig wrote:
>> On Wed, Aug 10, 2005 at 01:09:17PM +0200, Lars Marowsky-Bree wrote:
>>> So for every directory hierarchy on a shared filesystem, each user
>>> needs to have the complete list of bind mounts needed, and
>>> automatically resync that across all nodes when a new one is added
>>> or removed? And then have that executed by root, because a regular
>>> user can't?
>> Do it in an initscript and let users simply not do it; they shouldn't
>> even know what kind of filesystem they are on.
> I'm just thinking of a 100-node cluster that has different mounts on
> different nodes, and trying to update the bind mounts in a sane and
> efficient manner without clobbering the various mount setups. Ouch.

How about something like the following:

cpslink()     => Create a Context Dependent Symlink
readcpslink() => Return the Context Dependent path data
readlink()    => Return the path of the Context Dependent Symlink as it would be evaluated in the current context, basically as a normal symlink.
lstat()       => Return information on the Context Dependent Symlink in the same format as a regular symlink.
unlink()      => Delete the Context Dependent Symlink.

You would need an extra userspace tool that understands cpslink/readcpslink to create and get information on the links for now, but ls and ln could eventually be updated, and until then they would provide sane behavior.

Perhaps this should be extended into a new API for some of the strange things several filesystems want to do in the VFS:

extlink()     => Create an extended filesystem link (with type specified)
readextlink() => Return the path (and type) for the link

The filesystem could define how each type of link acts with respect to other syscalls. OpenAFS could use extlink() instead of their symlink magic for adjusting the AFS volume hierarchy. The new in-kernel AFS client could use it in similar fashion (it has no method to adjust hierarchy, because it's still read-only). GFS could use it for their Context Dependent Symlinks. Since it would pass the type in as well, it would be possible to use it for different kinds of links on the same filesystem. Cheers, Kyle Moffett -- Simple things should be simple and complex things should be possible -- Alan Kay
* GFS - updated patches 2005-08-02 7:18 [PATCH 00/14] GFS David Teigland ` (3 preceding siblings ...) 2005-08-09 15:20 ` [PATCH 00/14] GFS Al Viro @ 2005-08-11 8:17 ` David Teigland 2005-08-11 8:21 ` [Linux-cluster] " Michael ` (2 more replies) 4 siblings, 3 replies; 79+ messages in thread From: David Teigland @ 2005-08-11 8:17 UTC (permalink / raw) To: akpm, linux-kernel; +Cc: linux-cluster Thanks for all the review and comments. This is a new set of patches that incorporates the suggestions we've received. http://redhat.com/~teigland/gfs2/20050811/gfs2-full.patch http://redhat.com/~teigland/gfs2/20050811/broken-out/ Dave ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [Linux-cluster] GFS - updated patches 2005-08-11 8:17 ` GFS - updated patches David Teigland @ 2005-08-11 8:21 ` Michael 2005-08-11 8:46 ` David Teigland 2005-08-11 8:32 ` Arjan van de Ven 2005-08-11 9:54 ` [Linux-cluster] " Michael 2 siblings, 1 reply; 79+ messages in thread From: Michael @ 2005-08-11 8:21 UTC (permalink / raw) To: linux clustering; +Cc: akpm, linux-kernel I have the same question as I asked before: how can I see GFS in "make menuconfig" after I apply gfs2-full.patch to a 2.6.12.2 kernel? Michael On 8/11/05, David Teigland <teigland@redhat.com> wrote: > Thanks for all the review and comments. This is a new set of patches that > incorporates the suggestions we've received. > > http://redhat.com/~teigland/gfs2/20050811/gfs2-full.patch > http://redhat.com/~teigland/gfs2/20050811/broken-out/ > > Dave > > -- > Linux-cluster mailing list > Linux-cluster@redhat.com > http://www.redhat.com/mailman/listinfo/linux-cluster >
* Re: GFS - updated patches 2005-08-11 8:21 ` [Linux-cluster] " Michael @ 2005-08-11 8:46 ` David Teigland 2005-08-11 8:49 ` Michael 0 siblings, 1 reply; 79+ messages in thread From: David Teigland @ 2005-08-11 8:46 UTC (permalink / raw) To: Michael, linux-cluster; +Cc: linux-kernel On Thu, Aug 11, 2005 at 04:21:04PM +0800, Michael wrote: > I have the same question as I asked before, how can I see GFS in "make > menuconfig", after I patch gfs2-full.patch into a 2.6.12.2 kernel? You need to select the dlm under drivers. It's in -mm, or apply http://redhat.com/~teigland/dlm.patch ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: GFS - updated patches 2005-08-11 8:46 ` David Teigland @ 2005-08-11 8:49 ` Michael 0 siblings, 0 replies; 79+ messages in thread From: Michael @ 2005-08-11 8:49 UTC (permalink / raw) To: David Teigland; +Cc: linux-cluster, linux-kernel Yes, after applying dlm.patch, I saw it! Although I don't know what "-mm" is. Thanks, Michael On 8/11/05, David Teigland <teigland@redhat.com> wrote: > On Thu, Aug 11, 2005 at 04:21:04PM +0800, Michael wrote: > > I have the same question as I asked before, how can I see GFS in "make > > menuconfig", after I patch gfs2-full.patch into a 2.6.12.2 kernel? > > You need to select the dlm under drivers. It's in -mm, or apply > http://redhat.com/~teigland/dlm.patch > >
* Re: GFS - updated patches 2005-08-11 8:17 ` GFS - updated patches David Teigland 2005-08-11 8:21 ` [Linux-cluster] " Michael @ 2005-08-11 8:32 ` Arjan van de Ven 2005-08-11 8:50 ` David Teigland 2005-08-11 9:54 ` [Linux-cluster] " Michael 2 siblings, 1 reply; 79+ messages in thread From: Arjan van de Ven @ 2005-08-11 8:32 UTC (permalink / raw) To: David Teigland; +Cc: akpm, linux-kernel, linux-cluster On Thu, 2005-08-11 at 16:17 +0800, David Teigland wrote: > Thanks for all the review and comments. This is a new set of patches that > incorporates the suggestions we've received. all of them or only a subset? ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: GFS - updated patches 2005-08-11 8:32 ` Arjan van de Ven @ 2005-08-11 8:50 ` David Teigland 2005-08-11 8:50 ` Arjan van de Ven 0 siblings, 1 reply; 79+ messages in thread From: David Teigland @ 2005-08-11 8:50 UTC (permalink / raw) To: Arjan van de Ven; +Cc: akpm, linux-kernel, linux-cluster On Thu, Aug 11, 2005 at 10:32:38AM +0200, Arjan van de Ven wrote: > On Thu, 2005-08-11 at 16:17 +0800, David Teigland wrote: > > Thanks for all the review and comments. This is a new set of patches that > > incorporates the suggestions we've received. > > all of them or only a subset? All patches, now 01-13 (what was patch 08 disappeared entirely) ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: GFS - updated patches 2005-08-11 8:50 ` David Teigland @ 2005-08-11 8:50 ` Arjan van de Ven 2005-08-11 9:16 ` David Teigland 0 siblings, 1 reply; 79+ messages in thread From: Arjan van de Ven @ 2005-08-11 8:50 UTC (permalink / raw) To: David Teigland; +Cc: akpm, linux-kernel, linux-cluster On Thu, 2005-08-11 at 16:50 +0800, David Teigland wrote: > On Thu, Aug 11, 2005 at 10:32:38AM +0200, Arjan van de Ven wrote: > > On Thu, 2005-08-11 at 16:17 +0800, David Teigland wrote: > > > Thanks for all the review and comments. This is a new set of patches that > > > incorporates the suggestions we've received. > > > > all of them or only a subset? > > All patches, now 01-13 (what was patch 08 disappeared entirely) with them I meant the suggestions not the patches ;) ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: GFS - updated patches 2005-08-11 8:50 ` Arjan van de Ven @ 2005-08-11 9:16 ` David Teigland 2005-08-11 10:04 ` Pekka Enberg 0 siblings, 1 reply; 79+ messages in thread From: David Teigland @ 2005-08-11 9:16 UTC (permalink / raw) To: Arjan van de Ven; +Cc: akpm, linux-kernel, linux-cluster On Thu, Aug 11, 2005 at 10:50:32AM +0200, Arjan van de Ven wrote: > > > > Thanks for all the review and comments. This is a new set of > > > > patches that incorporates the suggestions we've received. > > > > > > all of them or only a subset? > > with them I meant the suggestions not the patches ;) The large majority, and I think all that people care about. If we ignored something that someone thinks is important, a reminder would be useful. ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: GFS - updated patches 2005-08-11 9:16 ` David Teigland @ 2005-08-11 10:04 ` Pekka Enberg 0 siblings, 0 replies; 79+ messages in thread From: Pekka Enberg @ 2005-08-11 10:04 UTC (permalink / raw) To: David Teigland Cc: Arjan van de Ven, akpm, linux-kernel, linux-cluster, Pekka Enberg Hi, On 8/11/05, David Teigland <teigland@redhat.com> wrote: > The large majority, and I think all that people care about. If we ignored > something that someone thinks is important, a reminder would be useful. The only remaining issue for me is the vma walk. Thanks, David! Pekka ^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [Linux-cluster] GFS - updated patches 2005-08-11 8:17 ` GFS - updated patches David Teigland 2005-08-11 8:21 ` [Linux-cluster] " Michael 2005-08-11 8:32 ` Arjan van de Ven @ 2005-08-11 9:54 ` Michael 2005-08-11 10:00 ` Pekka Enberg 2 siblings, 1 reply; 79+ messages in thread From: Michael @ 2005-08-11 9:54 UTC (permalink / raw) To: linux clustering; +Cc: akpm, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 627 bytes --]

Hi, Dave, I quickly applied the gfs2 and dlm patches to kernel 2.6.12.2. It compiled, but with some warnings; see the attached log. Maybe it's helpful to you. Thanks, Michael

On 8/11/05, David Teigland <teigland@redhat.com> wrote:
> Thanks for all the review and comments. This is a new set of patches that
> incorporates the suggestions we've received.
>
> http://redhat.com/~teigland/gfs2/20050811/gfs2-full.patch
> http://redhat.com/~teigland/gfs2/20050811/broken-out/
>
> Dave

[-- Attachment #2: gfs2_and_linux-2.6.12.2.txt --]
[-- Type: text/plain, Size: 4815 bytes --]

[michael@localhost kernel-gfs2-full-2.6.12.2]$ make SUBDIRS=fs/gfs2
  LD      fs/gfs2/built-in.o
  CC [M]  fs/gfs2/acl.o
  CC [M]  fs/gfs2/bits.o
  CC [M]  fs/gfs2/bmap.o
fs/gfs2/bmap.c: In function `find_metapath':
fs/gfs2/bmap.c:320: warning: implicit declaration of function `kzalloc'
fs/gfs2/bmap.c:320: warning: assignment makes pointer from integer without a cast
  CC [M]  fs/gfs2/daemon.o
  CC [M]  fs/gfs2/dir.o
fs/gfs2/dir.c: In function `leaf_dealloc':
fs/gfs2/dir.c:1910: warning: implicit declaration of function `kzalloc'
fs/gfs2/dir.c:1910: warning: assignment makes pointer from integer without a cast
  CC [M]  fs/gfs2/eaops.o
  CC [M]  fs/gfs2/eattr.o
  CC [M]  fs/gfs2/glock.o
  CC [M]  fs/gfs2/glops.o
  CC [M]  fs/gfs2/inode.o
  CC [M]  fs/gfs2/ioctl.o
  CC [M]  fs/gfs2/jdata.o
  CC [M]  fs/gfs2/lm.o
  CC [M]  fs/gfs2/log.o
fs/gfs2/log.c: In function `gfs2_log_get_buf':
fs/gfs2/log.c:363: warning: implicit declaration of function `kzalloc'
fs/gfs2/log.c:363: warning: assignment makes pointer from integer without a cast
fs/gfs2/log.c: In function `gfs2_log_fake_buf':
fs/gfs2/log.c:393: warning: assignment makes pointer from integer without a cast
fs/gfs2/log.c: In function `gfs2_log_flush_i':
fs/gfs2/log.c:524: warning: assignment makes pointer from integer without a cast
  CC [M]  fs/gfs2/lops.o
  CC [M]  fs/gfs2/lvb.o
  CC [M]  fs/gfs2/main.o
  CC [M]  fs/gfs2/meta_io.o
  CC [M]  fs/gfs2/mount.o
  CC [M]  fs/gfs2/ondisk.o
  CC [M]  fs/gfs2/ops_address.o
  CC [M]  fs/gfs2/ops_dentry.o
  CC [M]  fs/gfs2/ops_export.o
  CC [M]  fs/gfs2/ops_file.o
fs/gfs2/ops_file.c: In function `readdir_bad':
fs/gfs2/ops_file.c:1052: warning: implicit declaration of function `kzalloc'
fs/gfs2/ops_file.c:1052: warning: assignment makes pointer from integer without a cast
fs/gfs2/ops_file.c: In function `gfs2_open':
fs/gfs2/ops_file.c:1218: warning: assignment makes pointer from integer without a cast
  CC [M]  fs/gfs2/ops_fstype.o
  CC [M]  fs/gfs2/ops_inode.o
  CC [M]  fs/gfs2/ops_super.o
  CC [M]  fs/gfs2/ops_vm.o
  CC [M]  fs/gfs2/page.o
  CC [M]  fs/gfs2/proc.o
  CC [M]  fs/gfs2/quota.o
fs/gfs2/quota.c: In function `qd_alloc':
fs/gfs2/quota.c:51: warning: implicit declaration of function `kzalloc'
fs/gfs2/quota.c:51: warning: assignment makes pointer from integer without a cast
fs/gfs2/quota.c: In function `gfs2_quota_init':
fs/gfs2/quota.c:1058: warning: assignment makes pointer from integer without a cast
  CC [M]  fs/gfs2/resize.o
  CC [M]  fs/gfs2/recovery.o
  CC [M]  fs/gfs2/rgrp.o
fs/gfs2/rgrp.c: In function `gfs2_ri_update':
fs/gfs2/rgrp.c:300: warning: implicit declaration of function `kzalloc'
fs/gfs2/rgrp.c:300: warning: assignment makes pointer from integer without a cast
fs/gfs2/rgrp.c: In function `gfs2_alloc_get':
fs/gfs2/rgrp.c:530: warning: assignment makes pointer from integer without a cast
  CC [M]  fs/gfs2/super.o
fs/gfs2/super.c: In function `gfs2_jindex_hold':
fs/gfs2/super.c:306: warning: implicit declaration of function `kzalloc'
fs/gfs2/super.c:306: warning: assignment makes pointer from integer without a cast
  CC [M]  fs/gfs2/trans.o
fs/gfs2/trans.c: In function `gfs2_trans_begin_i':
fs/gfs2/trans.c:38: warning: implicit declaration of function `kzalloc'
fs/gfs2/trans.c:38: warning: assignment makes pointer from integer without a cast
  CC [M]  fs/gfs2/unlinked.o
fs/gfs2/unlinked.c: In function `ul_alloc':
fs/gfs2/unlinked.c:154: warning: implicit declaration of function `kzalloc'
fs/gfs2/unlinked.c:154: warning: assignment makes pointer from integer without a cast
fs/gfs2/unlinked.c: In function `gfs2_unlinked_init':
fs/gfs2/unlinked.c:342: warning: assignment makes pointer from integer without a cast
  CC [M]  fs/gfs2/util.o
  LD [M]  fs/gfs2/gfs2.o
  LD      fs/gfs2/locking/dlm/built-in.o
  CC [M]  fs/gfs2/locking/dlm/lock.o
  CC [M]  fs/gfs2/locking/dlm/main.o
  CC [M]  fs/gfs2/locking/dlm/mount.o
  CC [M]  fs/gfs2/locking/dlm/sysfs.o
  CC [M]  fs/gfs2/locking/dlm/thread.o
  LD [M]  fs/gfs2/locking/dlm/lock_dlm.o
  LD      fs/gfs2/locking/harness/built-in.o
  CC [M]  fs/gfs2/locking/harness/main.o
  LD [M]  fs/gfs2/locking/harness/lock_harness.o
  LD      fs/gfs2/locking/nolock/built-in.o
  CC [M]  fs/gfs2/locking/nolock/main.o
  LD [M]  fs/gfs2/locking/nolock/lock_nolock.o
  Building modules, stage 2.
  MODPOST
*** Warning: "kzalloc" [fs/gfs2/gfs2.ko] undefined!
  CC      fs/gfs2/gfs2.mod.o
  LD [M]  fs/gfs2/gfs2.ko
  CC      fs/gfs2/locking/dlm/lock_dlm.mod.o
  LD [M]  fs/gfs2/locking/dlm/lock_dlm.ko
  CC      fs/gfs2/locking/harness/lock_harness.mod.o
  LD [M]  fs/gfs2/locking/harness/lock_harness.ko
  CC      fs/gfs2/locking/nolock/lock_nolock.mod.o
  LD [M]  fs/gfs2/locking/nolock/lock_nolock.ko
* Re: [Linux-cluster] GFS - updated patches 2005-08-11 9:54 ` [Linux-cluster] " Michael @ 2005-08-11 10:00 ` Pekka Enberg 0 siblings, 0 replies; 79+ messages in thread From: Pekka Enberg @ 2005-08-11 10:00 UTC (permalink / raw) To: Michael; +Cc: linux clustering, akpm, linux-kernel On 8/11/05, Michael <mikore.li@gmail.com> wrote: > Hi, Dave, > > I quickly applied gfs2 and dlm patches in kernel 2.6.12.2, it passed > compiling but has some warning log, see attachment. maybe helpful to > you. kzalloc is not in Linus' tree yet. Try with 2.6.13-rc5-mm1. Pekka ^ permalink raw reply [flat|nested] 79+ messages in thread