From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andrew Price
Date: Wed, 15 Jan 2020 15:26:54 +0000
Subject: [Cluster-devel] [PATCH] Move struct gfs2_rgrp_lvb out of gfs2_ondisk.h
In-Reply-To: <1946139195.2641427.1579094357901.JavaMail.zimbra@redhat.com>
References: <20200115084956.7405-1-agruenba@redhat.com>
 <62faa428-a933-4848-d897-deb038078ac3@redhat.com>
 <1946139195.2641427.1579094357901.JavaMail.zimbra@redhat.com>
Message-ID: <4d437cff-13b5-fbeb-6f17-e5ac1cc08441@redhat.com>
List-Id:
To: cluster-devel.redhat.com
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

On 15/01/2020 13:19, Bob Peterson wrote:
> ----- Original Message -----
>> Hi,
>>
>> On 15/01/2020 09:24, Andreas Gruenbacher wrote:
>>> On Wed, Jan 15, 2020 at 9:58 AM Steven Whitehouse wrote:
>>>> On 15/01/2020 08:49, Andreas Gruenbacher wrote:
>>>>> There's no point in sharing the internal structure of lock value blocks
>>>>> with user space.
>>>> The reason that is in ondisk is that changing that structure is
>>>> something that needs to follow the same rules as changing the on disk
>>>> structures. So it is there as a reminder of that.
>>> I can see a point in that. The reason I've posted this is because Bob
>>> was complaining that changes to include/uapi/linux/gfs2_ondisk.h break
>>> his out-of-tree module build process. (One of the patches I'm working
>>> on adds an inode LVB.) The same would be true of on-disk format
>>> changes as well of course, and those definitely need to be shared with
>>> user space. I'm not usually building gfs2 out of tree, so I'm
>>> indifferent to this change.
>>>
>>> Thanks,
>>> Andreas
>>>
>> Why would we need to be able to build gfs2 (at least I assume it is
>> gfs2) out of tree anyway?
>>
>> Steve.
>
> Simply for productivity.
> The difference is this procedure, which literally takes 10 seconds,
> if done simultaneously on all nodes using something like cssh:
>
> make -C /usr/src/kernels/4.18.0-165.el8.x86_64 modules M=$PWD

I'd be concerned about this generating "chimera" modules that produce
invalid test results.

> rmmod gfs2
> insmod gfs2.ko
>
> Compared to a procedure like this, which takes at least 30 minutes:
>
> make (a new kernel .src.rpm)
> scp or rsync the .src.rpm to a build machine
> cd ~/rpmbuild/
> rpm --force -i --nodeps /home/bob/*kernel-4.18.0*.src.rpm &> /dev/null
> echo $?
> rpmbuild --target=x86_64 -ba SPECS/kernel.spec
> ( -or- submit a "real" kernel build)
> then wait for the kernel build
> Pull down all necessary kernel rpms
> scp to all the nodes in the cluster
> rpm --force -i --nodeps
> /sbin/reboot all the nodes in the cluster
> wait for all the nodes to reboot, the cluster to stabilize, etc.

Isn't the next-best alternative just building the modules in-tree and
copying them to the test machines? I'm not sure I understand the
complication.

Perhaps we need cluster_install and cluster_modules_install rules in the
build system :)

Andy
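[Editor's note: the in-tree build-and-copy workflow discussed above could be sketched roughly as follows. This is a hypothetical helper, not anything from the thread: the node names, the NODES/KDIR variables, the use of /tmp on each node, and the run/deploy_gfs2 function names are all illustrative assumptions. Setting DRY_RUN=1 prints the commands instead of executing them.]

```shell
#!/bin/sh
# Sketch: build gfs2.ko once against the running kernel's build tree,
# then push it to each cluster node and reload it there.
# Assumes root ssh access to every node; adjust NODES/KDIR as needed.

run() {
    # Print the command instead of running it when DRY_RUN=1.
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "$@"
    else
        "$@"
    fi
}

deploy_gfs2() {
    nodes="${NODES:-node1 node2 node3}"
    kdir="${KDIR:-/lib/modules/$(uname -r)/build}"

    # Build the module in the current directory against the kernel build tree.
    run make -C "$kdir" M="$PWD" modules || return 1

    # Copy the freshly built module to every node and reload it.
    for node in $nodes; do
        run scp gfs2.ko "root@$node:/tmp/gfs2.ko"
        run ssh "root@$node" "rmmod gfs2; insmod /tmp/gfs2.ko"
    done
}
```

Wired into the module Makefile, something along these lines is roughly what a cluster_modules_install rule, as suggested above, might do.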