* [ANNOUNCE] util-linux-ng v2.17.1
From: Karel Zak @ 2010-02-22 10:30 UTC
  To: linux-kernel, linux-fsdevel, util-linux-ng


The stable util-linux-ng 2.17.1 release is available at

   ftp://ftp.kernel.org/pub/linux/utils/util-linux-ng/v2.17/

Feedback and bug reports, as always, are welcome.

	Karel


Util-linux-ng 2.17.1 Release Notes
==================================

Release highlights
------------------

fdisk:
   - supports a new command line option "-c" to disable DOS-compatible mode.

     The DOS-compatible mode is DEPRECATED and will be disabled by default
     in the next major release. Currently, DOS mode is enabled by default
     for backward compatibility only.

     Cylinders as display units are DEPRECATED. It's recommended to use the
     "-u" command line option or the "u" fdisk command to switch to sectors
     as display units.

   Note that the new support for 4K-sector disks is useless in DOS-compatible
   mode. fdisk prints a warning when started in DOS mode.
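
   For example, to start fdisk with sectors as display units and
   DOS-compatible mode disabled ("/dev/sda" is just a placeholder device):

      fdisk -u -c /dev/sda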


Changes since v2.17
-------------------

 For more details see ChangeLog files at:
 ftp://ftp.kernel.org/pub/linux/utils/util-linux-ng/v2.17/


blkid:
   - add newline when only one value is printed  [Karel Zak]
   - fix #ifdef HAVE_TERMIO[S]_H  [Karel Zak]
   - probe for PT, don't probe for FS on small whole-disks  [Karel Zak]
   - report open() errors in low-level probing  [Karel Zak]
build-sys:
   - add missing tests for libuuid and libblkid  [Karel Zak]
   - release++ (v2.17.1-rc1)  [Karel Zak]
   - remove duplicate #includes  [Karel Zak]
cal:
   - fix first day of the week calculation on BE systems  [Karel Zak]
cfdisk:
   - set '[New]' as the default menu item for unallocated space instead of '[Help]'.  [Francesco Cosoleto]
   - set '[Quit]' as the default menu item on first run instead of '[Bootable]'.  [Francesco Cosoleto]
docs:
   - add v2.17.1 ReleaseNotes  [Karel Zak]
   - update AUTHORS file  [Karel Zak]
fdisk:
   - add -c option (switch off DOS mode)  [Karel Zak]
   - cleanup alignment, default to 1MiB offset  [Karel Zak]
   - cleanup help, add -h option  [Karel Zak]
   - cleanup warnings  [Karel Zak]
   - don't check alignment_offset against geometry  [Karel Zak]
   - don't include scsi.h  [Karel Zak]
   - don't use 1MiB grain on small devices  [Karel Zak]
   - fallback for topology values  [Karel Zak]
   - fix ALIGN_UP  [Karel Zak]
   - fix check_alignment()  [Karel Zak]
   - fix default first sector  [Karel Zak]
   - swap VTOC values for warning messages  [Karel Zak]
   - use "optimal I/O size" in warnings  [Karel Zak]
   - use 1MiB offset and grain always when possible  [Karel Zak]
   - use more elegant way to count and check alignment  [Karel Zak]
   - use optimal_io_size  [Karel Zak]
include:
   - add min/max macros  [Karel Zak]
libblkid:
   - add minimal sizes for OCFS and GFS  [Karel Zak]
   - add sanity checks for FAT to DOS PT parser  [Karel Zak]
   - call read() per FAT root dir entry  [Karel Zak]
   - disable read-ahead when probing device files  [Linus Torvalds]
   - don't call read() per FAT dir-entry on large disks  [Karel Zak]
   - don't probe for GPT and Unixware PT on floppies  [Karel Zak]
   - don't return error on empty files  [Karel Zak]
   - fix ZFS detection  [Andreas Dilger]
   - fix segfault in drbd probing  [Matthias König]
   - more robust minix probing  [Karel Zak]
   - read whole SB buffer (69kB) on large disks  [Karel Zak]
   - read() optimization for small devices  [Karel Zak]
   - restrict RAID/FS probing for small devices (1.4MiB)  [Karel Zak]
   - rewrite blkid_probe_get_buffer()  [Karel Zak]
   - set minimal size for jfs, reiser, swap and zfs  [Karel Zak]
login:
   - check that after tty reopen we still work with a terminal  [Karel Zak]
   - don't link PAMed version with libcrypt  [Karel Zak]
   - use fd instead of pathname to update tty's owner and permissions  [Yann Droneaud]
mount:
   - advise users to use "modprobe", not "insmod"  [Karel Zak]
   - update documentation about barrier mount options  [Jan Kara]
   - warn users that mtab is read-only  [Karel Zak]
namei:
   - fix man page formatting  [Vladimir Brednikov]
po:
   - merge changes  [Karel Zak]
   - update cs.po (from translationproject.org)  [Petr Pisar]
   - update eu.po (from translationproject.org)  [Mikel Olasagasti Uranga]
   - update id.po (from translationproject.org)  [Arif E. Nugroho]
   - update pl.po (from translationproject.org)  [Jakub Bogusz]
   - update vi.po (from translationproject.org)  [Clytie Siddall]
sfdisk:
   - make sure writes make it to disk in write_partitions()  [Bryn M. Reeves]
swapon:
   - fix swapsize calculation  [Karel Zak]
tests:
   - add fdisk alignment tests  [Karel Zak]
   - fix RAIDs tests  [Karel Zak]
   - fix and update old fdisk tests  [Karel Zak]
   - update FS test images  [Karel Zak]
   - update fdisk tests  [Karel Zak]
   - update fdisk tests (add whitespaces)  [Karel Zak]
wipefs:
   - ignore devices with partition table  [Karel Zak]
-- 
 Karel Zak  <kzak@redhat.com>


* Re: [ANNOUNCE] util-linux-ng v2.17.1
From: Andreas Dilger @ 2010-02-26  1:13 UTC
  To: Karel Zak; +Cc: Ricardo M. Correia, Brian Behlendorf, linux-fsdevel

[-- Attachment #1: Type: text/plain, Size: 1531 bytes --]

On 2010-02-22, at 03:30, Karel Zak wrote:
> The stable util-linux-ng 2.17.1 release is available at
>
>   ftp://ftp.kernel.org/pub/linux/utils/util-linux-ng/v2.17/
>
> Feedback and bug reports, as always, are welcome.

Hi Karel,
attached is an updated version of the ZFS device detection.  It  
includes support for extracting the pool name (LABEL) along with the  
pool_guid (UUID).

One question I had was regarding naming of the TYPE.  Currently we are  
using "zfs" for this, but the current code is only really detecting  
the volume information and has nothing to do with mountable  
filesystems.  Extracting filesystem names/mountpoints/guids is  
basically impossible at this stage w/o actually having ZFS active (in  
a similar manner that extracting ext2/3/4 filesystem information from  
an unconfigured LVM PV is impossible).

Should we rename the TYPE to be "zfs_vdev" (ZFS equivalent to  
"lvm2pv") instead of the current "zfs"?  It is probably more desirable  
to keep "zfs" for future filesystem mountpoint identification.  For  
now I've left it as "zfs" but wouldn't mind changing it now while  
there are not any real users of this.

Have you considered adding CONTAINER or similar identification to the  
blkid.tab file, so that it is possible to determine that the  
filesystem with LABEL="home" is on CONTAINER="39u4yr-f5WW-dtD7-jDfr- 
usGd-pYWf-qy6xKE", which in turn is the UUID of an lvm2pv on /dev/sda2?

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.

[-- Attachment #2: util-linux-blkid-zfs.diff --]
[-- Type: application/octet-stream, Size: 6971 bytes --]

Improve ZFS uberblock detection to loop over multiple uberblocks,
and detect at least 4 magic values, to avoid random collisions.
It doesn't yet probe the VDEV LABEL at the end of the device, though
it wouldn't be too hard to add it at this point if needed.

Add extraction of the pool name (as LABEL) and pool GUID (as UUID)
from the nvlist in the VDEV LABEL.

Signed-off-by: Andreas Dilger <adilger@sun.com>

--- shlibs/blkid/src/superblocks/zfs.c.orig	2010-02-16 01:58:56.000000000 -0700
+++ shlibs/blkid/src/superblocks/zfs.c	2010-02-19 01:45:17.000000000 -0700
@@ -16,6 +16,10 @@
 #include "superblocks.h"
 
 /* #include <sys/uberblock_impl.h> */
+#define VDEV_LABEL_UBERBLOCK	(128 * 1024ULL)
+#define VDEV_LABEL_NVPAIR	( 16 * 1024ULL)
+#define VDEV_LABEL_SIZE		(256 * 1024ULL)
+
 #define UBERBLOCK_MAGIC         0x00bab10c              /* oo-ba-bloc!  */
 struct zfs_uberblock {
 	uint64_t	ub_magic;	/* UBERBLOCK_MAGIC		*/
@@ -26,26 +30,154 @@ struct zfs_uberblock {
 	/*blkptr_t	ub_rootbp;*/	/* MOS objset_phys_t		*/
 } __attribute__((packed));
 
+#define ZFS_TRIES	64
+#define ZFS_WANT	 4
+
+#define DATA_TYPE_UINT64 8
+#define DATA_TYPE_STRING 9
+
+struct nvpair {
+	uint32_t	nvp_size;
+	uint32_t	nvp_unknown;
+	uint32_t	nvp_namelen;
+	char		nvp_name[0]; /* aligned to 4 bytes */
+	/* aligned ptr array for string arrays */
+	/* aligned array of data for value */
+};
+
+struct nvstring {
+	uint32_t	nvs_type;
+	uint32_t	nvs_elem;
+	uint32_t	nvs_strlen;
+	unsigned char	nvs_string[0];
+};
+
+struct nvuint64 {
+	uint32_t	nvu_type;
+	uint32_t	nvu_elem;
+	uint64_t	nvu_value;
+};
+
+struct nvlist {
+	uint32_t	nvl_unknown[3];
+	struct nvpair	nvl_nvpair;
+};
+
+#define nvdebug(fmt, ...) do { } while(0)
+
+static void zfs_extract_guid_name(blkid_probe pr, loff_t offset)
+{
+	struct nvlist *nvl;
+	struct nvpair *nvp;
+	int left = 4096;
+	int found = 0;
+
+	offset = (offset & ~(VDEV_LABEL_SIZE - 1)) + VDEV_LABEL_NVPAIR;
+	nvl = (struct nvlist *)blkid_probe_get_buffer(pr, offset, left);
+	if (nvl == NULL)
+		return;
+
+	nvdebug("zfs_extract: nvlist offset %llu\n", offset);
+
+	nvp = &nvl->nvl_nvpair;
+	nvdebug("left %u, nvp_size %u\n", left, nvp->nvp_size);
+	while (left > sizeof(*nvp) && nvp->nvp_size != 0 && found < 2) {
+		int namesize;
+
+		nvp->nvp_size = be32_to_cpu(nvp->nvp_size);
+		nvp->nvp_namelen = be32_to_cpu(nvp->nvp_namelen);
+
+		nvdebug("left %u, nvp_size %u\n", left, nvp->nvp_size);
+		if (left < nvp->nvp_size)
+			break;
+
+		namesize = (nvp->nvp_namelen + 3) & ~3;
+
+		nvdebug("nvlist: size %u, namelen %u, name %*s\n",
+			nvp->nvp_size, nvp->nvp_namelen, nvp->nvp_namelen,
+			nvp->nvp_name);
+		if (strncmp(nvp->nvp_name, "name", nvp->nvp_namelen) == 0) {
+			struct nvstring *nvs = (void *)(nvp->nvp_name+namesize);
+
+			nvs->nvs_type = be32_to_cpu(nvs->nvs_type);
+			nvs->nvs_strlen = be32_to_cpu(nvs->nvs_strlen);
+			nvdebug("nvstring: type %u string %*s\n", nvs->nvs_type,
+				nvs->nvs_strlen, nvs->nvs_string);
+			if (nvs->nvs_type == DATA_TYPE_STRING) /* should be */
+				blkid_probe_set_label(pr, nvs->nvs_string,
+						      nvs->nvs_strlen);
+			found++;
+		} else if (strncmp(nvp->nvp_name, "pool_guid",
+				   nvp->nvp_namelen) == 0) {
+			struct nvuint64 *nvu = (void *)(nvp->nvp_name+namesize);
+			uint64_t nvu_value;
+
+			memcpy(&nvu_value, &nvu->nvu_value, sizeof(nvu_value));
+			nvu->nvu_type = be32_to_cpu(nvu->nvu_type);
+			nvu_value = be64_to_cpu(nvu_value);
+			nvdebug("nvuint64: type %u value %llu\n",
+				nvu->nvu_type, nvu_value);
+			if (nvu->nvu_type == DATA_TYPE_UINT64) /* should be */
+				blkid_probe_sprintf_uuid(pr, (unsigned char *)
+							 &nvu_value,
+							 sizeof(nvu_value),
+							 "%llu", nvu_value);
+			found++;
+		}
+		left -= nvp->nvp_size;
+		nvp = (struct nvpair *)((char *)nvp + nvp->nvp_size);
+	}
+}
+
+#define zdebug(fmt, ...) do {} while(0)
+
+/* ZFS has 128x1kB host-endian root blocks, stored in 2 areas at the start
+ * of the disk, and 2 areas at the end of the disk.  Check only some of them...
+ * #4 (@ 132kB) is the first one written on a new filesystem. */
 static int probe_zfs(blkid_probe pr, const struct blkid_idmag *mag)
 {
+	uint64_t swab_magic = swab64(UBERBLOCK_MAGIC);
 	struct zfs_uberblock *ub;
 	int swab_endian;
-	uint64_t spa_version;
+	loff_t offset;
+	int tried;
+	int found;
+
+	zdebug("probe_zfs\n");
+	/* Look for at least 4 uberblocks to ensure a positive match */
+	for (tried = found = 0, offset = VDEV_LABEL_UBERBLOCK;
+	     tried < ZFS_TRIES && found < ZFS_WANT;
+	     tried++, offset += 4096) {
+		/* also try the second uberblock copy */
+		if (tried == (ZFS_TRIES / 2))
+			offset = VDEV_LABEL_SIZE + VDEV_LABEL_UBERBLOCK;
+
+		ub = (struct zfs_uberblock *)
+			blkid_probe_get_buffer(pr, offset,
+					       sizeof(struct zfs_uberblock));
+		if (ub == NULL)
+			return -1;
+
+		if (ub->ub_magic == UBERBLOCK_MAGIC)
+			found++;
 
-	ub = blkid_probe_get_sb(pr, mag, struct zfs_uberblock);
-	if (!ub)
+		if ((swab_endian = (ub->ub_magic == swab_magic)))
+			found++;
+
+		zdebug("probe_zfs: found %s-endian uberblock at %llu\n",
+		       swab_endian ? "big" : "little", offset >> 10);
+	}
+
+	if (found < 4)
 		return -1;
 
-	swab_endian = (ub->ub_magic == swab64(UBERBLOCK_MAGIC));
-	spa_version = swab_endian ? swab64(ub->ub_version) : ub->ub_version;
+	/* If we found the 4th uberblock, then we will have exited from the
+	 * scanning loop immediately, and ub will be a valid uberblock. */
+	blkid_probe_sprintf_version(pr, "%" PRIu64, swab_endian ?
+				    swab64(ub->ub_version) : ub->ub_version);
+
+	zfs_extract_guid_name(pr, offset);
 
-	blkid_probe_sprintf_version(pr, "%" PRIu64, spa_version);
-#if 0
-	/* read nvpair data for pool name, pool GUID from the MOS, but
-	 * unfortunately this is more complex than it could be */
-	blkid_probe_set_label(pr, pool_name, pool_len));
-	blkid_probe_set_uuid(pr, pool_guid);
-#endif
 	return 0;
 }
 
@@ -55,13 +187,6 @@ const struct blkid_idinfo zfs_idinfo =
 	.usage		= BLKID_USAGE_FILESYSTEM,
 	.probefunc	= probe_zfs,
 	.minsz		= 64 * 1024 * 1024,
-	.magics		=
-	{
-		{ .magic = "\0\0\x02\xf5\xb0\x07\xb1\x0c", .len = 8, .kboff = 8 },
-		{ .magic = "\x1c\xb1\x07\xb0\xf5\x02\0\0", .len = 8, .kboff = 8 },
-		{ .magic = "\0\0\x02\xf5\xb0\x07\xb1\x0c", .len = 8, .kboff = 264 },
-		{ .magic = "\x0c\xb1\x07\xb0\xf5\x02\0\0", .len = 8, .kboff = 264 },
-		{ NULL }
-	}
+	.magics		= BLKID_NONE_MAGIC
 };
 
--- tests/expected/blkid/low-probe-zfs.orig	2010-02-04 04:53:59.000000000 -0700
+++ tests/expected/blkid/low-probe-zfs	2010-02-19 01:46:45.000000000 -0700
@@ -1,3 +1,7 @@
+ID_FS_LABEL_ENC=tank
+ID_FS_LABEL=tank
 ID_FS_TYPE=zfs
 ID_FS_USAGE=filesystem
-ID_FS_VERSION=1
+ID_FS_UUID=1782036546311300980
+ID_FS_UUID_ENC=1782036546311300980
+ID_FS_VERSION=8


* Re: [ANNOUNCE] util-linux-ng v2.17.1
From: Karel Zak @ 2010-02-26 13:52 UTC
  To: Andreas Dilger
  Cc: Ricardo M. Correia, Brian Behlendorf, linux-fsdevel, Emmanuel Anne,
	util-linux-ng


 Hi Andreas,

On Thu, Feb 25, 2010 at 06:13:50PM -0700, Andreas Dilger wrote:
> On 2010-02-22, at 03:30, Karel Zak wrote:
>> The stable util-linux-ng 2.17.1 release is available at
>>
>>   ftp://ftp.kernel.org/pub/linux/utils/util-linux-ng/v2.17/
>>
>> Feedback and bug reports, as always, are welcome.
>
> Hi Karel,
> attached is an updated version of the ZFS device detection.  It includes 
> support for extracting the pool name (LABEL) along with the pool_guid 
> (UUID).

 Thanks! I'll review & commit it later.

> One question I had was regarding naming of the TYPE.  Currently we are  
> using "zfs" for this, but the current code is only really detecting the 
> volume information and has nothing to do with mountable filesystems.

 The TYPE is used by mount(8) or fsck(8) if the fstype is not
 explicitly defined by the user.

 I don't know if anything depends on the TYPE, but I don't see
 /sbin/mount.zfs, so it seems the zfs-fuse guys use something else.

> Extracting filesystem names/mountpoints/guids is basically impossible at 
> this stage w/o actually having ZFS active (in a similar manner that 
> extracting ext2/3/4 filesystem information from an unconfigured LVM PV is 
> impossible).
>

 See for example vmfs.c, where we have "VMFS" (mountable FS) and also
 "VMFS_volume_member" (storage). Both TYPEs are completely
 independent, and you can selectively probe for the FS or for the special
 volume rather than always probing for both. I think this concept is
 better than adding a new identifier (e.g. CONTAINER).

 Note, we have the USAGE identifier to specify the kind of type, for
 example raid, filesystem, crypto, etc. This is necessary for
 udevd and some desktop tools (try: blkid -p -o udev <device>).
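
 As a minimal sketch of how a consumer could read these identifiers
 through the low-level probing API (illustrative only; error handling
 trimmed, build with -lblkid):

    #include <stdio.h>
    #include <blkid.h>

    int main(int argc, char *argv[])
    {
        blkid_probe pr;
        const char *data;

        if (argc != 2)
            return 1;
        pr = blkid_new_probe_from_filename(argv[1]);
        if (!pr)
            return 1;

        /* ask superblock probing to report TYPE/USAGE/LABEL/UUID */
        blkid_probe_enable_superblocks(pr, 1);
        blkid_probe_set_superblocks_flags(pr,
                BLKID_SUBLKS_TYPE | BLKID_SUBLKS_USAGE |
                BLKID_SUBLKS_LABEL | BLKID_SUBLKS_UUID);

        if (blkid_do_safeprobe(pr) == 0) {
            if (blkid_probe_lookup_value(pr, "TYPE", &data, NULL) == 0)
                printf("TYPE=%s\n", data);
            if (blkid_probe_lookup_value(pr, "USAGE", &data, NULL) == 0)
                printf("USAGE=%s\n", data);
        }
        blkid_free_probe(pr);
        return 0;
    }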

> Should we rename the TYPE to be "zfs_vdev" (ZFS equivalent to "lvm2pv") 
> instead of the current "zfs"?  It is probably more desirable to keep 

 Yes, TYPE="zfs" (mountable FS) and TYPE="zfs_volume_member" makes
 sense. (The "_volume_member" is horribly long, but we use it for
 compatibility with udev world.)

> "zfs" for future filesystem mountpoint identification.  For now I've left 
> it as "zfs" but wouldn't mind changing it now while there are not any 
> real users of this.

> Have you considered adding CONTAINER or similar identification to the  
> blkid.tab file, so that it is possible to determine that the filesystem 
> with LABEL="home" is on CONTAINER="39u4yr-f5WW-dtD7-jDfr- 
> usGd-pYWf-qy6xKE", which in turn is the UUID of an lvm2pv on /dev/sda2?

 I'd like to avoid this if possible.

    Karel

-- 
 Karel Zak  <kzak@redhat.com>


* Re: [ANNOUNCE] util-linux-ng v2.17.1
From: Ricardo M. Correia @ 2010-02-26 14:18 UTC
  To: Karel Zak
  Cc: Andreas Dilger, Brian Behlendorf, linux-fsdevel, Emmanuel Anne,
	util-linux-ng

On Fri, 2010-02-26 at 14:52 +0100, Karel Zak wrote:
> Hi Andreas,
>  The TYPE is used by mount(8) or fsck(8) if the fstype is not
>  explicitly defined by the user.
> 
>  I don't know if anything depends on the TYPE, but I don't see
>  /sbin/mount.zfs, so it seems the zfs-fuse guys use something else.

Right, ZFS filesystems are mounted by zfs-fuse automatically when a ZFS
pool is imported into the system, or manually with the "zfs" command. The
latter calls into the zfs-fuse daemon, which issues a fuse_mount() call.
This mimics the behavior of the Solaris ZFS implementation.

I would expect the /sbin/mount.zfs command to only work when the
mountpoint property of a ZFS filesystem is set to 'legacy', otherwise
ZFS will usually mount the filesystem by itself in the proper place
(which depends on the mountpoint property and the dataset hierarchy
within the pool).

Most importantly, I don't think it would be easy to determine which
filesystems are inside of a ZFS pool. This would require traversing the
dataset hierarchy within a pool, which is very difficult to implement if
you don't use the existing ZFS code, especially when you have
RAID-Z/Z2/Z3 pools. We'd be better off using the 'zdb' command (which
contains an entire implementation of ZFS's DMU code in userspace).

As for fsck, there is none for ZFS and I doubt there will be such a tool
in the foreseeable future... Usually one would use 'zpool scrub' to
verify the consistency of the pool.

Not sure if this helps or not for this discussion (more information is
never bad, right?) :-)

Cheers,
Ricardo




* Re: [ANNOUNCE] util-linux-ng v2.17.1
From: Karel Zak @ 2010-02-26 15:16 UTC
  To: Ricardo M. Correia
  Cc: Andreas Dilger, Brian Behlendorf, linux-fsdevel, Emmanuel Anne,
	util-linux-ng

On Fri, Feb 26, 2010 at 02:18:50PM +0000, Ricardo M. Correia wrote:
> On Fri, 2010-02-26 at 14:52 +0100, Karel Zak wrote:
> > Hi Andreas,
> >  The TYPE is used by mount(8) or fsck(8) if the fstype is not
> >  explicitly defined by the user.
> > 
> >  I don't know if anything depends on the TYPE, but I don't see
> >  /sbin/mount.zfs, so it seems the zfs-fuse guys use something else.
> 
> Right, ZFS filesystems are mounted by zfs-fuse automatically when a ZFS
> pool is imported into the system, or manually with the "zfs" command. The
> latter calls into the zfs-fuse daemon, which issues a fuse_mount() call.
> This mimics the behavior of the Solaris ZFS implementation.

 Hmm... we have udevd; in an ideal world zfs-fuse would be integrated
 with udev.

> I would expect the /sbin/mount.zfs command to only work when the
> mountpoint property of a ZFS filesystem is set to 'legacy', otherwise
> ZFS will usually mount the filesystem by itself in the proper place
> (which depends on the mountpoint property and the dataset hierarchy
> within the pool).
> 
> Most importantly, I don't think it would be easy to determine which
> filesystems are inside of a ZFS pool. This would require traversing the
> dataset hierarchy within a pool, which is very difficult to implement if
> you don't use the existing ZFS code, especially when you have
> RAID-Z/Z2/Z3 pools. We'd be better off using the 'zdb' command (which
> contains an entire implementation of ZFS's DMU code in userspace).

 Yes, we have the same "problem" with DM/MD/...; the solution is to
 detect that there is a "volume_member" and then use specific tools
 (dmsetup, cryptsetup, mdadm, ...) to create a virtual mountable
 device.

> Not sure if this helps or not for this discussion (more information is
> never bad, right?) :-)

 Right. BTW, I assume the same discussion applies to btrfs ;-)

    Karel

-- 
 Karel Zak  <kzak@redhat.com>


* Re: [ANNOUNCE] util-linux-ng v2.17.1
From: Ricardo M. Correia @ 2010-02-26 15:42 UTC
  To: Karel Zak
  Cc: Andreas Dilger, Brian Behlendorf, linux-fsdevel, Emmanuel Anne,
	util-linux-ng

On Fri, 2010-02-26 at 16:16 +0100, Karel Zak wrote:
> On Fri, Feb 26, 2010 at 02:18:50PM +0000, Ricardo M. Correia wrote:
> > On Fri, 2010-02-26 at 14:52 +0100, Karel Zak wrote:
> > > Hi Andreas,
> > >  The TYPE is used by mount(8) or fsck(8) if the fstype is not
> > >  explicitly defined by the user.
> > > 
> > >  I don't know if anything depends on the TYPE, but I don't see
> > >  /sbin/mount.zfs, so it seems the zfs-fuse guys use something else.
> > 
> > Right, ZFS filesystems are mounted by zfs-fuse automatically when a ZFS
> > pool is imported into the system, or manually with the "zfs" command. The
> > latter calls into the zfs-fuse daemon, which issues a fuse_mount() call.
> > This mimics the behavior of the Solaris ZFS implementation.
> 
>  Hmm... we have udevd; in an ideal world zfs-fuse would be integrated
>  with udev.

You mean that udev would create a block device for the logical volume
where the filesystem is mounted?

I think this may not be possible or useful, see below.

> > I would expect the /sbin/mount.zfs command to only work when the
> > mountpoint property of a ZFS filesystem is set to 'legacy', otherwise
> > ZFS will usually mount the filesystem by itself in the proper place
> > (which depends on the mountpoint property and the dataset hierarchy
> > within the pool).
> > 
> > Most importantly, I don't think it would be easy to determine which
> > filesystems are inside of a ZFS pool. This would require traversing the
> > dataset hierarchy within a pool, which is very difficult to implement if
> > you don't use the existing ZFS code, especially when you have
> > RAID-Z/Z2/Z3 pools. We'd be better off using the 'zdb' command (which
> > contains an entire implementation of ZFS's DMU code in userspace).
> 
>  Yes, we have the same "problem" with DM/MD/...; the solution is to
>  detect that there is a "volume_member" and then use specific tools
>  (dmsetup, cryptsetup, mdadm, ...) to create a virtual mountable
>  device.

Unfortunately, the storage abstraction that ZFS filesystems are created
on doesn't have the same semantics as logical volumes (in the sense of DM,
LVM, etc.): e.g. it has no fixed size (it grows and shrinks as the
filesystem grows and shrinks), and you'd have no way of mapping a logical
offset within the virtual device to a physical offset in the pool
(that mapping is accomplished by block pointers).

The idea is that a ZFS filesystem allocates and deallocates space from a
ZFS pool every time it needs to allocate or free a block in the
filesystem. Each block can have a size from 512 bytes up to 128 KB and
it may be allocated anywhere in the pool, and the way a filesystem
accesses its data is by following block pointers.

So I think there is really no way of presenting a virtual block device
other than the entire pool, since you couldn't even map these virtual
device offsets into anything meaningful (other than the space of the
entire pool..).

> > Not sure if this helps or not for this discussion (more information is
> > never bad, right?) :-)
> 
>  Right. BTW, I assume the same discussion applies to btrfs ;-)

I have no idea about btrfs.. :)

Thanks,
Ricardo




* Re: [ANNOUNCE] util-linux-ng v2.17.1
From: Andreas Dilger @ 2010-02-26 20:07 UTC
  To: Karel Zak
  Cc: Ricardo M. Correia, Brian Behlendorf, linux-fsdevel, Emmanuel Anne,
	util-linux-ng

On 2010-02-26, at 06:52, Karel Zak wrote:
> On Thu, Feb 25, 2010 at 06:13:50PM -0700, Andreas Dilger wrote:
>> One question I had was regarding naming of the TYPE.  Currently we
>> are using "zfs" for this, but the current code is only really
>> detecting the volume information and has nothing to do with mountable
>> filesystems.
>
> The TYPE is used by mount(8) or fsck(8) if the fstype is not
> explicitly defined by the user.
>
> I don't know if anything depends on the TYPE, but I don't see
> /sbin/mount.zfs, so it seems the zfs-fuse guys use something else.

To me it seems that if we expect mount.zfs to mount a single
filesystem from within the pool, then what blkid is detecting today is
the "volume", so the TYPE should probably not be "zfs".  After the ZFS
pool is imported we might identify datasets with type "zfs", but that
is a long way away from where it is today.  It probably also makes
sense to change this from BLKID_USAGE_FILESYSTEM to BLKID_USAGE_RAID.

> See for example vmfs.c, where we have "VMFS" (mountable FS) and also
> "VMFS_volume_member" (storage). Both TYPEs are completely
> independent, and you can selectively probe for the FS or for the special
> volume rather than always probing for both. I think this concept is
> better than adding a new identifier (e.g. CONTAINER).
>
> Note, we have the USAGE identifier to specify the kind of type, for
> example raid, filesystem, crypto, etc. This is necessary for
> udevd and some desktop tools (try: blkid -p -o udev <device>).
>
>> Should we rename the TYPE to be "zfs_vdev" (ZFS equivalent to
>> "lvm2pv") instead of the current "zfs"?  It is probably more
>> desirable to keep
>
> Yes, TYPE="zfs" (mountable FS) and TYPE="zfs_volume_member" make
> sense. (The "_volume_member" is horribly long, but we use it for
> compatibility with the udev world.)

The only type that has "_volume_member" is VMFS.  In ZFS terms, the  
aggregate is called a "pool", and a component member is a "VDEV", and  
I'd prefer to stick to that if possible, just like MD uses  
"linux_raid_member" and LVM uses "LVM2_member" for their component  
devices.  It seems "zfs_pool_member" or simply "zfs_member" would be  
OK?  I'm not dead set against "zfs_volume_member" if there is a real  
reason for it.

The other question that has come up is whether the "UUID" for a  
component device should be the UUID of the component device itself, or  
that of the volume?  It seems to be for the component device, so is  
there any standard for the UUID/LABEL of the volume?  For ZFS it makes  
sense to use the pool name as the LABEL, and for LVM it would make  
sense to use the VG name as the LABEL.

I'm updating the patch, and will resend based on feedback here.

>> Have you considered adding CONTAINER or similar identification to the
>> blkid.tab file, so that it is possible to determine that the  
>> filesystem
>> with LABEL="home" is on CONTAINER="39u4yr-f5WW-dtD7-jDfr-
>> usGd-pYWf-qy6xKE", which in turn is the UUID of an lvm2pv on /dev/ 
>> sda2?
>
> I'd like to avoid this if possible.


The reason I was thinking about this is if, for example, I want to  
mount "LABEL=home", I can see from blkid.tab that this is in an device  
called /dev/mapper/vgroot-lvhome, but if that volume is not currently  
set up, I have no way to know where it is or how to configure it  
(other than possibly very weak heuristics based on the device name).   
If, instead, it has a CONTAINER which matches the UUID of /dev/sda2  
that device can be probed based on its TYPE.  In SAN configurations  
where there are many devices that _might_ be available to this node  
(e.g. high-availability with multiple servers) configuring every  
device/volume that the node can see is probably a bad idea (e.g. the  
LVM or MD RAID configuration may have changed).

Instead, doing the probing at startup (so the mappings are available),  
but having the volumes configured on demand when some filesystem/ 
device within it is being mounted makes more sense.  Storing this  
hierarchical dependency in a central place (blkid.tab) makes sense.   
That way, udev/blkid can be told "I want to mount LABEL=home", it  
resolves this is in CONTAINER={UUID} (e.g. LV UUID), then blkid  
locates $CONTAINER, and if it resolves to a device that is not yet  
active we can at least know that it is TYPE=LVM2_member, ask lvm2 to  
probe and configure the device(s) on which $CONTAINER resides, repeat  
as necessary for e.g. MD member sub-devices.
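
As a toy illustration of the lookup chain this would enable (the entry
layout, the table contents, and the resolve loop below are all
hypothetical -- nothing like a CONTAINER field exists in blkid.tab
today):

   #include <stdio.h>
   #include <string.h>

   /* Hypothetical blkid.tab entry, extended with a CONTAINER field
    * holding the UUID of the device this entry resides on. */
   struct tab_entry {
       const char *devname;
       const char *label;
       const char *uuid;
       const char *type;
       const char *container;
   };

   static const struct tab_entry tab[] = {
       { "/dev/mapper/vgroot-lvhome", "home", "home-fs-uuid", "ext3",
         "39u4yr-f5WW-dtD7-jDfr-usGd-pYWf-qy6xKE" },
       { "/dev/sda2", NULL, "39u4yr-f5WW-dtD7-jDfr-usGd-pYWf-qy6xKE",
         "LVM2_member", NULL },
   };

   /* Walk from LABEL=home down to the component device(s) that would
    * have to be activated (by lvm2, mdadm, ...) before mounting. */
   int main(void)
   {
       const struct tab_entry *e = NULL;
       size_t i;

       for (i = 0; i < sizeof(tab) / sizeof(tab[0]); i++)
           if (tab[i].label && strcmp(tab[i].label, "home") == 0)
               e = &tab[i];

       while (e) {
           const struct tab_entry *parent = NULL;

           printf("%s (TYPE=%s)\n", e->devname, e->type);
           if (!e->container)
               break;
           for (i = 0; i < sizeof(tab) / sizeof(tab[0]); i++)
               if (strcmp(tab[i].uuid, e->container) == 0)
                   parent = &tab[i];
           e = parent;
       }
       return 0;
   }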

To note, this discussion is not strictly related to the ZFS case; it's
just something that I thought about while looking at the code.

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.



* Re: [ANNOUNCE] util-linux-ng v2.17.1
From: Karel Zak @ 2010-02-26 22:47 UTC
  To: Andreas Dilger
  Cc: Ricardo M. Correia, Brian Behlendorf, linux-fsdevel, Emmanuel Anne,
	util-linux-ng

On Fri, Feb 26, 2010 at 01:07:36PM -0700, Andreas Dilger wrote:
> On 2010-02-26, at 06:52, Karel Zak wrote:
>> On Thu, Feb 25, 2010 at 06:13:50PM -0700, Andreas Dilger wrote:
>>> One question I had was regarding naming of the TYPE.  Currently we
>>> are using "zfs" for this, but the current code is only really
>>> detecting the volume information and has nothing to do with mountable
>>> filesystems.
>>
>> The TYPE is used by mount(8) or fsck(8) if the fstype is not
>> explicitly defined by the user.
>>
>> I don't know if anything depends on the TYPE, but I don't see
>> /sbin/mount.zfs, so it seems the zfs-fuse guys use something else.
>
> To me it seems that if we expect mount.zfs to mount a single filesystem
> from within the pool, then what blkid is detecting today is the "volume",
> so the TYPE should probably not be "zfs".  After the ZFS pool is imported
> we might identify datasets with type "zfs", but that is a long way away
> from where it is today.  It probably also makes sense to change this from
> BLKID_USAGE_FILESYSTEM to BLKID_USAGE_RAID.

 Yes.

>> See for example vmfs.c, where we have "VMFS" (mountable FS) and also
>> "VMFS_volume_member" (storage). Both TYPEs are completely
>> independent, and you can selectively probe for the FS or for the special
>> volume rather than always probing for both. I think this concept is
>> better than adding a new identifier (e.g. CONTAINER).
>>
>> Note, we have the USAGE identifier to specify the kind of type, for
>> example raid, filesystem, crypto, etc. This is necessary for
>> udevd and some desktop tools (try: blkid -p -o udev <device>).
>>
>>> Should we rename the TYPE to be "zfs_vdev" (ZFS equivalent to
>>> "lvm2pv") instead of the current "zfs"?  It is probably more
>>> desirable to keep
>>
>> Yes, TYPE="zfs" (mountable FS) and TYPE="zfs_volume_member" make
>> sense. (The "_volume_member" is horribly long, but we use it for
>> compatibility with the udev world.)
>
> The only type that has "_volume_member" is VMFS.  In ZFS terms, the  
> aggregate is called a "pool", and a component member is a "VDEV", and  
> I'd prefer to stick to that if possible, just like MD uses  
> "linux_raid_member" and LVM uses "LVM2_member" for their component  
> devices.  It seems "zfs_pool_member" or simply "zfs_member" would be OK?  

 OK, "zfs_member" sounds good.

> The other question that has come up is whether the "UUID" for a  
> component device should be the UUID of the component device itself, or  
> that of the volume?  It seems to be for the component device, so is  

 Good question.

 Unfortunately, we don't have any rules for these things. See btrfs.c
 (that's a better example than vmfs.c ;-) where we have "UUID" for the
 filesystem and "UUID_SUB" (subvolume uuid) for the device.

 udev is able to use these UUIDs to create a hierarchy of symlinks,
 something like:

   /dev/btrfs/<UUID>/<UUID_SUB>

 For more details see:
 http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg01048.html

> I'm updating the patch, and will resend based on feedback here.
>
>>> Have you considered adding CONTAINER or similar identification to the
>>> blkid.tab file, so that it is possible to determine that the  
>>> filesystem
>>> with LABEL="home" is on CONTAINER="39u4yr-f5WW-dtD7-jDfr-
>>> usGd-pYWf-qy6xKE", which in turn is the UUID of an lvm2pv on /dev/ 
>>> sda2?
>>
>> I'd like to avoid this if possible.

 Hmm... I'm wrong here. (As Ricardo said, there is nothing like a
 virtual device with ZFS or btrfs. It means we can call blkid for
 component devices only.)

 You're right that for things like ZFS or btrfs it makes sense to have
 two UUIDs (device and FS) for each libblkid entry.

> The reason I was thinking about this is if, for example, I want to mount 
> "LABEL=home", I can see from blkid.tab that this is in an device called 
> /dev/mapper/vgroot-lvhome, but if that volume is not currently set up, I 
> have no way to know where it is or how to configure it (other than 
> possibly very weak heuristics based on the device name).  If, instead, it 
> has a CONTAINER which matches the UUID of /dev/sda2 that device can be 
> probed based on its TYPE.  In SAN configurations where there are many 

 Yes. I understand.

> devices that _might_ be available to this node (e.g. high-availability 
> with multiple servers) configuring every device/volume that the node can 
> see is probably a bad idea (e.g. the LVM or MD RAID configuration may 
> have changed).
>
> Instead, doing the probing at startup (so the mappings are available),  
> but having the volumes configured on demand when some filesystem/device 
> within it is being mounted makes more sense.  Storing this hierarchical 
> dependency in a central place (blkid.tab) makes sense.

 Yes, this is not the first time someone has asked for a central
 place where we can store configuration for all the dependencies between
 block devices. Unfortunately, it is not so simple.

> That way, 
> udev/blkid can be told "I want to mount LABEL=home", it resolves this is 
> in CONTAINER={UUID} (e.g. LV UUID), then blkid locates $CONTAINER, and if 
> it resolves to a device that is not yet active we can at least know that 
> it is TYPE=LVM2_member, ask lvm2 to probe and configure the device(s) on 
> which $CONTAINER resides, repeat as necessary for e.g. MD member 
> sub-devices.

 This is udev's job. udev should be able to detect a new device and call
 dmsetup, lvm, mdadm, etc.

> To note, this discussion is not strictly related to the ZFS case; it's
> just something that I thought about while looking at the code.

 You have good questions :-)

    Karel

-- 
 Karel Zak  <kzak@redhat.com>


* Re: [ANNOUNCE] util-linux-ng v2.17.1
From: Andreas Dilger @ 2010-03-01 23:40 UTC
  To: Karel Zak
  Cc: Ricardo M. Correia, Brian Behlendorf, linux-fsdevel, Emmanuel Anne,
	Chris Mason, util-linux-ng

On 2010-02-26, at 15:47, Karel Zak wrote:
> On Fri, Feb 26, 2010 at 01:07:36PM -0700, Andreas Dilger wrote:
>>
>> The only type that has "_volume_member" is VMFS.  In ZFS terms, the
>> aggregate is called a "pool", and a component member is a "VDEV", and
>> I'd prefer to stick to that if possible, just like MD uses
>> "linux_raid_member" and LVM uses "LVM2_member" for their component
>> devices.  It seems "zfs_pool_member" or simply "zfs_member" would  
>> be OK?
>
> OK, "zfs_member" sounds good.

OK.

>> The other question that has come up is whether the "UUID" for a
>> component device should be the UUID of the component device itself,  
>> or
>> that of the volume?  It seems to be for the component device, so is
>
> Unfortunately, we don't have any rules for these things. See btrfs.c
> (that's a better example than vmfs.c ;-) where we have "UUID" for the
> filesystem and "UUID_SUB" (subvolume uuid) for the device.
>
> udev is able to use these UUIDs to create a hierarchy of symlinks,
> something like:
>
>   /dev/btrfs/<UUID>/<UUID_SUB>

This is itself a bit confusing, because AFAIK btrfs can have multiple  
"filesystems" within the same volume, so it would seem that "UUID" is  
for the whole volume?

It would be good if we can come to some consensus for this, since in  
the case of LVM2 it is using "UUID" for the disk UUID (seems this  
should be "UUID_SUB"), and the volume group UUID is not printed at all  
(seems this should be "UUID", maybe with LABEL={vgname}), and the LV  
UUID is completely ignored.

I'm ready with a ZFS patch to use "UUID" for the volume, and  
"UUID_SUB" for the disks, once we agree that this is correct.

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.

--
To unsubscribe from this list: send the line "unsubscribe util-linux-ng" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [ANNOUNCE] util-linux-ng v2.17.1
       [not found]                   ` <8DFF5500-85B9-4E6F-83E6-FC99A06A14C1-xsfywfwIY+M@public.gmane.org>
@ 2010-03-02 14:40                     ` Karel Zak
  0 siblings, 0 replies; 10+ messages in thread
From: Karel Zak @ 2010-03-02 14:40 UTC (permalink / raw)
  To: Andreas Dilger
  Cc: Ricardo M. Correia, Brian Behlendorf,
	linux-fsdevel-u79uwXL29TY76Z2rM5mHXA, Emmanuel Anne, Chris Mason,
	util-linux-ng-u79uwXL29TY76Z2rM5mHXA

On Mon, Mar 01, 2010 at 04:40:43PM -0700, Andreas Dilger wrote:
>> Unfortunately, we don't have any rules for these things. See btrfs.c
>> (that's a better example than vmfs.c ;-) where we have "UUID" for the
>> filesystem and "UUID_SUB" (subvolume uuid) for the device.
>>
>> The udev is able to use these UUIDs to create a hierarchy of symlinks,
>> something like:
>>
>>   /dev/btrfs/<UUID>/<UUID_SUB>
>
> This is itself a bit confusing, because AFAIK btrfs can have multiple  
> "filesystems" within the same volume, so it would seem that "UUID" is  
> for the whole volume?

 Good point. Now, libblkid uses:
 
    UUID     = btrfs_super_block->fsid
    UUID_SUB = btrfs_super_block->btrfs_dev_item->uuid

 but I also see

   btrfs_super_block->btrfs_dev_item->fsid

which is the UUID of the FS that owns this device. Maybe it would be
better to use the "fsid" from btrfs_dev_item as the UUID. Not sure.

> It would be good if we can come to some consensus for this, since in the 
> case of LVM2 it is using "UUID" for the disk UUID (seems this should be 
> "UUID_SUB"), and the volume group UUID is not printed at all (seems this 
> should be "UUID", maybe with LABEL={vgname}), and the LV UUID is 
> completely ignored.

 Well, the PV UUID is stored in the PV label; the other information about
 the VG and LVs is in the metadata area, in text format. The metadata is
 optional and versioned, etc. Currently, nobody asks for such
 information, and I'm very happy that we don't have to parse the metadata :-)

 Don't forget that the libblkid library is used to identify block
 device content. For additional information/operations we usually call
 RAID- or volume-manager-specific tools from udevd.

 The libblkid library probes for the PV and nothing else, so I think that
 using the "UUID" is good enough, because we don't have to
 differentiate between multiple UUIDs here.

> I'm ready with a ZFS patch to use "UUID" for the volume, and "UUID_SUB" 
> for the disks, once we agree that this is correct.

 Yes, makes sense.

    Karel

-- 
 Karel Zak  <kzak@redhat.com>
