From mboxrd@z Thu Jan 1 00:00:00 1970
From: Bob Peterson
Date: Wed, 15 Jan 2020 08:19:17 -0500 (EST)
Subject: [Cluster-devel] [PATCH] Move struct gfs2_rgrp_lvb out of gfs2_ondisk.h
In-Reply-To: <62faa428-a933-4848-d897-deb038078ac3@redhat.com>
References: <20200115084956.7405-1-agruenba@redhat.com> <62faa428-a933-4848-d897-deb038078ac3@redhat.com>
Message-ID: <1946139195.2641427.1579094357901.JavaMail.zimbra@redhat.com>
List-Id:
To: cluster-devel.redhat.com
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

----- Original Message -----
> Hi,
>
> On 15/01/2020 09:24, Andreas Gruenbacher wrote:
> > On Wed, Jan 15, 2020 at 9:58 AM Steven Whitehouse
> > wrote:
> >> On 15/01/2020 08:49, Andreas Gruenbacher wrote:
> >>> There's no point in sharing the internal structure of lock value blocks
> >>> with user space.
> >> The reason that is in ondisk is that changing that structure is
> >> something that needs to follow the same rules as changing the on disk
> >> structures. So it is there as a reminder of that,
> > I can see a point in that. The reason I've posted this is because Bob
> > was complaining that changes to include/uapi/linux/gfs2_ondisk.h break
> > his out-of-tree module build process. (One of the patches I'm working
> > on adds an inode LVB.) The same would be true of on-disk format
> > changes as well of course, and those definitely need to be shared with
> > user space. I'm not usually building gfs2 out of tree, so I'm
> > indifferent to this change.
> >
> > Thanks,
> > Andreas
>
> Why would we need to be able to build gfs2 (at least I assume it is
> gfs2) out of tree anyway?
>
> Steve.

Simply for productivity.
The difference is this procedure, which literally takes 10 seconds if done
simultaneously on all nodes using something like cssh:

make -C /usr/src/kernels/4.18.0-165.el8.x86_64 modules M=$PWD
rmmod gfs2
insmod gfs2.ko

Compared to a procedure like this, which takes at least 30 minutes:

make (a new kernel .src.rpm)
scp or rsync the .src.rpm to a build machine
cd ~/rpmbuild/
rpm --force -i --nodeps /home/bob/*kernel-4.18.0*.src.rpm &> /dev/null
echo $?
rpmbuild --target=x86_64 -ba SPECS/kernel.spec
  ( -or- submit a "real" kernel build, then wait for the kernel build)
Pull down all necessary kernel rpms
scp to all the nodes in the cluster
rpm --force -i --nodeps
/sbin/reboot all the nodes in the cluster
wait for all the nodes to reboot, the cluster to stabilize, etc.

Regards,

Bob Peterson
Red Hat File Systems
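P.S. For anyone who wants to wrap the quick cycle in a script: here is a
rough sketch. The KDIR/MODDIR variables and the reload_gfs2 helper are
just illustration, not existing tooling, and it assumes the kernel build
tree lives under /usr/src/kernels; fanning it out to the other nodes
(cssh, pdsh, etc.) is left to the caller.

```shell
#!/bin/sh
# Sketch of the 10-second out-of-tree rebuild cycle described above.
# KDIR, MODDIR and reload_gfs2 are illustrative names only.
KDIR=${KDIR:-/usr/src/kernels/$(uname -r)}   # kernel build tree
MODDIR=${MODDIR:-$PWD}                       # out-of-tree gfs2 source

reload_gfs2() {
    make -C "$KDIR" modules M="$MODDIR" || return 1
    rmmod gfs2 2>/dev/null || true   # fine if gfs2 wasn't loaded yet
    insmod "$MODDIR/gfs2.ko"
}
```

Run it in parallel on every node of the cluster to keep the loaded
modules in sync.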