From: Mike Fleetwood <mike.fleetwood@googlemail.com>
To: Vyacheslav Dubeyko <slava@dubeyko.com>
Cc: Hin-Tak Leung <htl10@users.sourceforge.net>,
linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH] hfsplus: fix FS driver name in printks
Date: Tue, 29 Jan 2013 09:22:02 +0000
Message-ID: <20130129092201.GA453@gmail.com>
In-Reply-To: <1359438565.2868.9.camel@slavad-ubuntu>
On Tue, Jan 29, 2013 at 09:49:25AM +0400, Vyacheslav Dubeyko wrote:
> On Mon, 2013-01-28 at 20:23 +0000, Mike Fleetwood wrote:
> > Correct the name of the hfsplus FS driver as used in printk calls.
> > "hfs:" -> "hfsplus:".
> >
> > Signed-off-by: Mike Fleetwood <mike.fleetwood@googlemail.com>
> > ---
> >
> > Hi,
> >
> > Is there a current reason why the hfsplus FS driver uses "hfs:" almost
> > exclusively rather than "hfsplus:" as its name in printk calls?
>
> There are at minimum two reasons for leaving the "hfs:" prefix in peace: (1)
> historical - it is like the coding style of the "old" library; (2) the prefix
> "hfs:" is shorter - so, it gives the opportunity to write more descriptive
> messages within one line under the 80-character kernel coding style
> requirement.
>
> By the way, did you check your patch with the scripts/checkpatch.pl script?
>
> Moreover, there are hfsplus driver patches in linux-next that use the
> "hfs:" prefix.
>
> I doubt that this patch can improve the hfsplus driver's quality. It looks
> like changes in many places without changing anything in essence.
>
> With the best regards,
> Vyacheslav Dubeyko.
>
In terms of line length, I was applying the exception in CodingStyle which
says "never break user-visible strings such as printk messages, because
that breaks the ability to grep for them", to allow lines to be longer
than 80 characters.
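To illustrate with the first hunk of the patch below: the format string is
kept whole even when that pushes the line past 80 columns, and only the
arguments wrap, so the message stays greppable:

	printk(KERN_ERR "hfsplus: inconsistency in B*Tree (%d,%d,%d,%u,%u)\n",
		height, bnode->height, bnode->type, nidx, parent);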
I did use checkpatch.pl. It reported this for every printk:
WARNING: Prefer netdev_err(netdev, ... then dev_err(dev, ... then pr_err(... to printk(KERN_ERR ...
After seeing that ext2/3/4, btrfs and xfs use printk() rather than any of
those functions, I followed the majority. It also reported a couple of:
WARNING: line over 80 characters
I was applying the above exception.
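For reference, the conversion checkpatch is pointing at would look roughly
like this (just a sketch, not something this patch attempts): define
pr_fmt() once per file, before the printk headers are pulled in, and switch
the calls to pr_err()/pr_warn(), so the "hfsplus: " prefix is added
automatically:

	/* Sketch only: pr_fmt() must be defined before <linux/printk.h>
	 * (or <linux/kernel.h>) is included, so every pr_*() call in the
	 * file picks up the prefix.
	 */
	#define pr_fmt(fmt) "hfsplus: " fmt

	#include <linux/printk.h>

	/* e.g. the keylen check in brec.c would become: */
	if (retval > node->tree->max_key_len + 2) {
		pr_err("keylen %d too large\n", retval);
		retval = 0;
	}

That would be far more churn than this rename, though, so I left the
printk() calls alone.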
Absolutely, this patch doesn't fix any faults and is not for 3.8.0-rc* but
for linux-next. I just thought that it would be useful for users to be
told the name of the FS driver generating the message rather than a
different one.
Would an equivalent patch be accepted for linux-next?
How do I send it to linux-next?
Thank you, Vyacheslav, for reviewing my patch,
Mike
>
> > Assuming not, here's a patch to fix it.
> >
> > (Any code which may have been copied between hfs and hfsplus has since
> > diverged significantly).
> >
> > Thanks,
> > Mike
> >
> > ---
> > fs/hfsplus/bfind.c | 2 +-
> > fs/hfsplus/bnode.c | 4 ++--
> > fs/hfsplus/brec.c | 7 ++++---
> > fs/hfsplus/btree.c | 24 ++++++++++++------------
> > fs/hfsplus/catalog.c | 4 ++--
> > fs/hfsplus/dir.c | 14 +++++++-------
> > fs/hfsplus/extents.c | 6 +++---
> > fs/hfsplus/inode.c | 2 +-
> > fs/hfsplus/options.c | 22 +++++++++++-----------
> > fs/hfsplus/super.c | 34 +++++++++++++++++-----------------
> > fs/hfsplus/wrapper.c | 6 +++---
> > 11 files changed, 63 insertions(+), 62 deletions(-)
> >
> > diff --git a/fs/hfsplus/bfind.c b/fs/hfsplus/bfind.c
> > index 5d799c1..b00e446 100644
> > --- a/fs/hfsplus/bfind.c
> > +++ b/fs/hfsplus/bfind.c
> > @@ -137,7 +137,7 @@ int hfs_brec_find(struct hfs_find_data *fd)
> > return res;
> >
> > invalid:
> > - printk(KERN_ERR "hfs: inconsistency in B*Tree (%d,%d,%d,%u,%u)\n",
> > + printk(KERN_ERR "hfsplus: inconsistency in B*Tree (%d,%d,%d,%u,%u)\n",
> > height, bnode->height, bnode->type, nidx, parent);
> > res = -EIO;
> > release:
> > diff --git a/fs/hfsplus/bnode.c b/fs/hfsplus/bnode.c
> > index 1c42cc5..db020bf 100644
> > --- a/fs/hfsplus/bnode.c
> > +++ b/fs/hfsplus/bnode.c
> > @@ -384,7 +384,7 @@ struct hfs_bnode *hfs_bnode_findhash(struct hfs_btree *tree, u32 cnid)
> > struct hfs_bnode *node;
> >
> > if (cnid >= tree->node_count) {
> > - printk(KERN_ERR "hfs: request for non-existent node "
> > + printk(KERN_ERR "hfsplus: request for non-existent node "
> > "%d in B*Tree\n",
> > cnid);
> > return NULL;
> > @@ -407,7 +407,7 @@ static struct hfs_bnode *__hfs_bnode_create(struct hfs_btree *tree, u32 cnid)
> > loff_t off;
> >
> > if (cnid >= tree->node_count) {
> > - printk(KERN_ERR "hfs: request for non-existent node "
> > + printk(KERN_ERR "hfsplus: request for non-existent node "
> > "%d in B*Tree\n",
> > cnid);
> > return NULL;
> > diff --git a/fs/hfsplus/brec.c b/fs/hfsplus/brec.c
> > index 2a734cf..c512fc3 100644
> > --- a/fs/hfsplus/brec.c
> > +++ b/fs/hfsplus/brec.c
> > @@ -44,13 +44,14 @@ u16 hfs_brec_keylen(struct hfs_bnode *node, u16 rec)
> > if (!recoff)
> > return 0;
> > if (recoff > node->tree->node_size - 2) {
> > - printk(KERN_ERR "hfs: recoff %d too large\n", recoff);
> > + printk(KERN_ERR "hfsplus: recoff %d too large\n",
> > + recoff);
> > return 0;
> > }
> >
> > retval = hfs_bnode_read_u16(node, recoff) + 2;
> > if (retval > node->tree->max_key_len + 2) {
> > - printk(KERN_ERR "hfs: keylen %d too large\n",
> > + printk(KERN_ERR "hfsplus: keylen %d too large\n",
> > retval);
> > retval = 0;
> > }
> > @@ -388,7 +389,7 @@ again:
> > end_off = hfs_bnode_read_u16(parent, end_rec_off);
> > if (end_rec_off - end_off < diff) {
> >
> > - dprint(DBG_BNODE_MOD, "hfs: splitting index node.\n");
> > + dprint(DBG_BNODE_MOD, "hfsplus: splitting index node.\n");
> > fd->bnode = parent;
> > new_node = hfs_bnode_split(fd);
> > if (IS_ERR(new_node))
> > diff --git a/fs/hfsplus/btree.c b/fs/hfsplus/btree.c
> > index 685d07d..cc13ccf 100644
> > --- a/fs/hfsplus/btree.c
> > +++ b/fs/hfsplus/btree.c
> > @@ -41,7 +41,7 @@ struct hfs_btree *hfs_btree_open(struct super_block *sb, u32 id)
> >
> > if (!HFSPLUS_I(tree->inode)->first_blocks) {
> > printk(KERN_ERR
> > - "hfs: invalid btree extent records (0 size).\n");
> > + "hfsplus: invalid btree extent records (0 size).\n");
> > goto free_inode;
> > }
> >
> > @@ -68,12 +68,12 @@ struct hfs_btree *hfs_btree_open(struct super_block *sb, u32 id)
> > switch (id) {
> > case HFSPLUS_EXT_CNID:
> > if (tree->max_key_len != HFSPLUS_EXT_KEYLEN - sizeof(u16)) {
> > - printk(KERN_ERR "hfs: invalid extent max_key_len %d\n",
> > + printk(KERN_ERR "hfsplus: invalid extent max_key_len %d\n",
> > tree->max_key_len);
> > goto fail_page;
> > }
> > if (tree->attributes & HFS_TREE_VARIDXKEYS) {
> > - printk(KERN_ERR "hfs: invalid extent btree flag\n");
> > + printk(KERN_ERR "hfsplus: invalid extent btree flag\n");
> > goto fail_page;
> > }
> >
> > @@ -81,12 +81,12 @@ struct hfs_btree *hfs_btree_open(struct super_block *sb, u32 id)
> > break;
> > case HFSPLUS_CAT_CNID:
> > if (tree->max_key_len != HFSPLUS_CAT_KEYLEN - sizeof(u16)) {
> > - printk(KERN_ERR "hfs: invalid catalog max_key_len %d\n",
> > + printk(KERN_ERR "hfsplus: invalid catalog max_key_len %d\n",
> > tree->max_key_len);
> > goto fail_page;
> > }
> > if (!(tree->attributes & HFS_TREE_VARIDXKEYS)) {
> > - printk(KERN_ERR "hfs: invalid catalog btree flag\n");
> > + printk(KERN_ERR "hfsplus: invalid catalog btree flag\n");
> > goto fail_page;
> > }
> >
> > @@ -99,12 +99,12 @@ struct hfs_btree *hfs_btree_open(struct super_block *sb, u32 id)
> > }
> > break;
> > default:
> > - printk(KERN_ERR "hfs: unknown B*Tree requested\n");
> > + printk(KERN_ERR "hfsplus: unknown B*Tree requested\n");
> > goto fail_page;
> > }
> >
> > if (!(tree->attributes & HFS_TREE_BIGKEYS)) {
> > - printk(KERN_ERR "hfs: invalid btree flag\n");
> > + printk(KERN_ERR "hfsplus: invalid btree flag\n");
> > goto fail_page;
> > }
> >
> > @@ -147,7 +147,7 @@ void hfs_btree_close(struct hfs_btree *tree)
> > while ((node = tree->node_hash[i])) {
> > tree->node_hash[i] = node->next_hash;
> > if (atomic_read(&node->refcnt))
> > - printk(KERN_CRIT "hfs: node %d:%d "
> > + printk(KERN_CRIT "hfsplus: node %d:%d "
> > "still has %d user(s)!\n",
> > node->tree->cnid, node->this,
> > atomic_read(&node->refcnt));
> > @@ -295,7 +295,7 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
> > kunmap(*pagep);
> > nidx = node->next;
> > if (!nidx) {
> > - dprint(DBG_BNODE_MOD, "hfs: create new bmap node.\n");
> > + dprint(DBG_BNODE_MOD, "hfsplus: create new bmap node.\n");
> > next_node = hfs_bmap_new_bmap(node, idx);
> > } else
> > next_node = hfs_bnode_find(tree, nidx);
> > @@ -337,7 +337,7 @@ void hfs_bmap_free(struct hfs_bnode *node)
> > hfs_bnode_put(node);
> > if (!i) {
> > /* panic */;
> > - printk(KERN_CRIT "hfs: unable to free bnode %u. "
> > + printk(KERN_CRIT "hfsplus: unable to free bnode %u. "
> > "bmap not found!\n",
> > node->this);
> > return;
> > @@ -347,7 +347,7 @@ void hfs_bmap_free(struct hfs_bnode *node)
> > return;
> > if (node->type != HFS_NODE_MAP) {
> > /* panic */;
> > - printk(KERN_CRIT "hfs: invalid bmap found! "
> > + printk(KERN_CRIT "hfsplus: invalid bmap found! "
> > "(%u,%d)\n",
> > node->this, node->type);
> > hfs_bnode_put(node);
> > @@ -362,7 +362,7 @@ void hfs_bmap_free(struct hfs_bnode *node)
> > m = 1 << (~nidx & 7);
> > byte = data[off];
> > if (!(byte & m)) {
> > - printk(KERN_CRIT "hfs: trying to free free bnode "
> > + printk(KERN_CRIT "hfsplus: trying to free free bnode "
> > "%u(%d)\n",
> > node->this, node->type);
> > kunmap(page);
> > diff --git a/fs/hfsplus/catalog.c b/fs/hfsplus/catalog.c
> > index 798d9c4..f2178e7 100644
> > --- a/fs/hfsplus/catalog.c
> > +++ b/fs/hfsplus/catalog.c
> > @@ -186,12 +186,12 @@ int hfsplus_find_cat(struct super_block *sb, u32 cnid,
> >
> > type = be16_to_cpu(tmp.type);
> > if (type != HFSPLUS_FOLDER_THREAD && type != HFSPLUS_FILE_THREAD) {
> > - printk(KERN_ERR "hfs: found bad thread record in catalog\n");
> > + printk(KERN_ERR "hfsplus: found bad thread record in catalog\n");
> > return -EIO;
> > }
> >
> > if (be16_to_cpu(tmp.thread.nodeName.length) > 255) {
> > - printk(KERN_ERR "hfs: catalog name length corrupted\n");
> > + printk(KERN_ERR "hfsplus: catalog name length corrupted\n");
> > return -EIO;
> > }
> >
> > diff --git a/fs/hfsplus/dir.c b/fs/hfsplus/dir.c
> > index 6b9f921..8f133ab 100644
> > --- a/fs/hfsplus/dir.c
> > +++ b/fs/hfsplus/dir.c
> > @@ -102,7 +102,7 @@ again:
> > } else if (!dentry->d_fsdata)
> > dentry->d_fsdata = (void *)(unsigned long)cnid;
> > } else {
> > - printk(KERN_ERR "hfs: invalid catalog entry type in lookup\n");
> > + printk(KERN_ERR "hfsplus: invalid catalog entry type in lookup\n");
> > err = -EIO;
> > goto fail;
> > }
> > @@ -158,12 +158,12 @@ static int hfsplus_readdir(struct file *filp, void *dirent, filldir_t filldir)
> > hfs_bnode_read(fd.bnode, &entry, fd.entryoffset,
> > fd.entrylength);
> > if (be16_to_cpu(entry.type) != HFSPLUS_FOLDER_THREAD) {
> > - printk(KERN_ERR "hfs: bad catalog folder thread\n");
> > + printk(KERN_ERR "hfsplus: bad catalog folder thread\n");
> > err = -EIO;
> > goto out;
> > }
> > if (fd.entrylength < HFSPLUS_MIN_THREAD_SZ) {
> > - printk(KERN_ERR "hfs: truncated catalog thread\n");
> > + printk(KERN_ERR "hfsplus: truncated catalog thread\n");
> > err = -EIO;
> > goto out;
> > }
> > @@ -182,7 +182,7 @@ static int hfsplus_readdir(struct file *filp, void *dirent, filldir_t filldir)
> >
> > for (;;) {
> > if (be32_to_cpu(fd.key->cat.parent) != inode->i_ino) {
> > - printk(KERN_ERR "hfs: walked past end of dir\n");
> > + printk(KERN_ERR "hfsplus: walked past end of dir\n");
> > err = -EIO;
> > goto out;
> > }
> > @@ -202,7 +202,7 @@ static int hfsplus_readdir(struct file *filp, void *dirent, filldir_t filldir)
> > if (type == HFSPLUS_FOLDER) {
> > if (fd.entrylength <
> > sizeof(struct hfsplus_cat_folder)) {
> > - printk(KERN_ERR "hfs: small dir entry\n");
> > + printk(KERN_ERR "hfsplus: small dir entry\n");
> > err = -EIO;
> > goto out;
> > }
> > @@ -215,7 +215,7 @@ static int hfsplus_readdir(struct file *filp, void *dirent, filldir_t filldir)
> > break;
> > } else if (type == HFSPLUS_FILE) {
> > if (fd.entrylength < sizeof(struct hfsplus_cat_file)) {
> > - printk(KERN_ERR "hfs: small file entry\n");
> > + printk(KERN_ERR "hfsplus: small file entry\n");
> > err = -EIO;
> > goto out;
> > }
> > @@ -223,7 +223,7 @@ static int hfsplus_readdir(struct file *filp, void *dirent, filldir_t filldir)
> > be32_to_cpu(entry.file.id), DT_REG))
> > break;
> > } else {
> > - printk(KERN_ERR "hfs: bad catalog entry type\n");
> > + printk(KERN_ERR "hfsplus: bad catalog entry type\n");
> > err = -EIO;
> > goto out;
> > }
> > diff --git a/fs/hfsplus/extents.c b/fs/hfsplus/extents.c
> > index eba76ea..df17086 100644
> > --- a/fs/hfsplus/extents.c
> > +++ b/fs/hfsplus/extents.c
> > @@ -348,7 +348,7 @@ found:
> > if (count <= block_nr) {
> > err = hfsplus_block_free(sb, start, count);
> > if (err) {
> > - printk(KERN_ERR "hfs: can't free extent\n");
> > + printk(KERN_ERR "hfsplus: can't free extent\n");
> > dprint(DBG_EXTENT, " start: %u count: %u\n",
> > start, count);
> > }
> > @@ -359,7 +359,7 @@ found:
> > count -= block_nr;
> > err = hfsplus_block_free(sb, start + count, block_nr);
> > if (err) {
> > - printk(KERN_ERR "hfs: can't free extent\n");
> > + printk(KERN_ERR "hfsplus: can't free extent\n");
> > dprint(DBG_EXTENT, " start: %u count: %u\n",
> > start, count);
> > }
> > @@ -432,7 +432,7 @@ int hfsplus_file_extend(struct inode *inode)
> > if (sbi->alloc_file->i_size * 8 <
> > sbi->total_blocks - sbi->free_blocks + 8) {
> > /* extend alloc file */
> > - printk(KERN_ERR "hfs: extend alloc file! "
> > + printk(KERN_ERR "hfsplus: extend alloc file! "
> > "(%llu,%u,%u)\n",
> > sbi->alloc_file->i_size * 8,
> > sbi->total_blocks, sbi->free_blocks);
> > diff --git a/fs/hfsplus/inode.c b/fs/hfsplus/inode.c
> > index 799b336..324dcda 100644
> > --- a/fs/hfsplus/inode.c
> > +++ b/fs/hfsplus/inode.c
> > @@ -559,7 +559,7 @@ int hfsplus_cat_read_inode(struct inode *inode, struct hfs_find_data *fd)
> > inode->i_ctime = hfsp_mt2ut(file->attribute_mod_date);
> > HFSPLUS_I(inode)->create_date = file->create_date;
> > } else {
> > - printk(KERN_ERR "hfs: bad catalog entry used to create inode\n");
> > + printk(KERN_ERR "hfsplus: bad catalog entry used to create inode\n");
> > res = -EIO;
> > }
> > return res;
> > diff --git a/fs/hfsplus/options.c b/fs/hfsplus/options.c
> > index ed257c6..9e5264d 100644
> > --- a/fs/hfsplus/options.c
> > +++ b/fs/hfsplus/options.c
> > @@ -113,67 +113,67 @@ int hfsplus_parse_options(char *input, struct hfsplus_sb_info *sbi)
> > switch (token) {
> > case opt_creator:
> > if (match_fourchar(&args[0], &sbi->creator)) {
> > - printk(KERN_ERR "hfs: creator requires a 4 character value\n");
> > + printk(KERN_ERR "hfsplus: creator requires a 4 character value\n");
> > return 0;
> > }
> > break;
> > case opt_type:
> > if (match_fourchar(&args[0], &sbi->type)) {
> > - printk(KERN_ERR "hfs: type requires a 4 character value\n");
> > + printk(KERN_ERR "hfsplus: type requires a 4 character value\n");
> > return 0;
> > }
> > break;
> > case opt_umask:
> > if (match_octal(&args[0], &tmp)) {
> > - printk(KERN_ERR "hfs: umask requires a value\n");
> > + printk(KERN_ERR "hfsplus: umask requires a value\n");
> > return 0;
> > }
> > sbi->umask = (umode_t)tmp;
> > break;
> > case opt_uid:
> > if (match_int(&args[0], &tmp)) {
> > - printk(KERN_ERR "hfs: uid requires an argument\n");
> > + printk(KERN_ERR "hfsplus: uid requires an argument\n");
> > return 0;
> > }
> > sbi->uid = make_kuid(current_user_ns(), (uid_t)tmp);
> > if (!uid_valid(sbi->uid)) {
> > - printk(KERN_ERR "hfs: invalid uid specified\n");
> > + printk(KERN_ERR "hfsplus: invalid uid specified\n");
> > return 0;
> > }
> > break;
> > case opt_gid:
> > if (match_int(&args[0], &tmp)) {
> > - printk(KERN_ERR "hfs: gid requires an argument\n");
> > + printk(KERN_ERR "hfsplus: gid requires an argument\n");
> > return 0;
> > }
> > sbi->gid = make_kgid(current_user_ns(), (gid_t)tmp);
> > if (!gid_valid(sbi->gid)) {
> > - printk(KERN_ERR "hfs: invalid gid specified\n");
> > + printk(KERN_ERR "hfsplus: invalid gid specified\n");
> > return 0;
> > }
> > break;
> > case opt_part:
> > if (match_int(&args[0], &sbi->part)) {
> > - printk(KERN_ERR "hfs: part requires an argument\n");
> > + printk(KERN_ERR "hfsplus: part requires an argument\n");
> > return 0;
> > }
> > break;
> > case opt_session:
> > if (match_int(&args[0], &sbi->session)) {
> > - printk(KERN_ERR "hfs: session requires an argument\n");
> > + printk(KERN_ERR "hfsplus: session requires an argument\n");
> > return 0;
> > }
> > break;
> > case opt_nls:
> > if (sbi->nls) {
> > - printk(KERN_ERR "hfs: unable to change nls mapping\n");
> > + printk(KERN_ERR "hfsplus: unable to change nls mapping\n");
> > return 0;
> > }
> > p = match_strdup(&args[0]);
> > if (p)
> > sbi->nls = load_nls(p);
> > if (!sbi->nls) {
> > - printk(KERN_ERR "hfs: unable to load "
> > + printk(KERN_ERR "hfsplus: unable to load "
> > "nls mapping \"%s\"\n",
> > p);
> > kfree(p);
> > diff --git a/fs/hfsplus/super.c b/fs/hfsplus/super.c
> > index 796198d..31cbf46 100644
> > --- a/fs/hfsplus/super.c
> > +++ b/fs/hfsplus/super.c
> > @@ -130,7 +130,7 @@ static int hfsplus_system_write_inode(struct inode *inode)
> > if (tree) {
> > int err = hfs_btree_write(tree);
> > if (err) {
> > - printk(KERN_ERR "hfs: b-tree write err: %d, ino %lu\n",
> > + printk(KERN_ERR "hfsplus: b-tree write err: %d, ino %lu\n",
> > err, inode->i_ino);
> > return err;
> > }
> > @@ -243,7 +243,7 @@ static void delayed_sync_fs(struct work_struct *work)
> >
> > err = hfsplus_sync_fs(sbi->alloc_file->i_sb, 1);
> > if (err)
> > - printk(KERN_ERR "hfs: delayed sync fs err %d\n", err);
> > + printk(KERN_ERR "hfsplus: delayed sync fs err %d\n", err);
> > }
> >
> > void hfsplus_mark_mdb_dirty(struct super_block *sb)
> > @@ -324,7 +324,7 @@ static int hfsplus_remount(struct super_block *sb, int *flags, char *data)
> > return -EINVAL;
> >
> > if (!(vhdr->attributes & cpu_to_be32(HFSPLUS_VOL_UNMNT))) {
> > - printk(KERN_WARNING "hfs: filesystem was "
> > + printk(KERN_WARNING "hfsplus: filesystem was "
> > "not cleanly unmounted, "
> > "running fsck.hfsplus is recommended. "
> > "leaving read-only.\n");
> > @@ -334,13 +334,13 @@ static int hfsplus_remount(struct super_block *sb, int *flags, char *data)
> > /* nothing */
> > } else if (vhdr->attributes &
> > cpu_to_be32(HFSPLUS_VOL_SOFTLOCK)) {
> > - printk(KERN_WARNING "hfs: filesystem is marked locked, "
> > + printk(KERN_WARNING "hfsplus: filesystem is marked locked, "
> > "leaving read-only.\n");
> > sb->s_flags |= MS_RDONLY;
> > *flags |= MS_RDONLY;
> > } else if (vhdr->attributes &
> > cpu_to_be32(HFSPLUS_VOL_JOURNALED)) {
> > - printk(KERN_WARNING "hfs: filesystem is "
> > + printk(KERN_WARNING "hfsplus: filesystem is "
> > "marked journaled, "
> > "leaving read-only.\n");
> > sb->s_flags |= MS_RDONLY;
> > @@ -388,7 +388,7 @@ static int hfsplus_fill_super(struct super_block *sb, void *data, int silent)
> >
> > err = -EINVAL;
> > if (!hfsplus_parse_options(data, sbi)) {
> > - printk(KERN_ERR "hfs: unable to parse mount options\n");
> > + printk(KERN_ERR "hfsplus: unable to parse mount options\n");
> > goto out_unload_nls;
> > }
> >
> > @@ -396,14 +396,14 @@ static int hfsplus_fill_super(struct super_block *sb, void *data, int silent)
> > nls = sbi->nls;
> > sbi->nls = load_nls("utf8");
> > if (!sbi->nls) {
> > - printk(KERN_ERR "hfs: unable to load nls for utf8\n");
> > + printk(KERN_ERR "hfsplus: unable to load nls for utf8\n");
> > goto out_unload_nls;
> > }
> >
> > /* Grab the volume header */
> > if (hfsplus_read_wrapper(sb)) {
> > if (!silent)
> > - printk(KERN_WARNING "hfs: unable to find HFS+ superblock\n");
> > + printk(KERN_WARNING "hfsplus: unable to find HFS+ superblock\n");
> > goto out_unload_nls;
> > }
> > vhdr = sbi->s_vhdr;
> > @@ -412,7 +412,7 @@ static int hfsplus_fill_super(struct super_block *sb, void *data, int silent)
> > sb->s_magic = HFSPLUS_VOLHEAD_SIG;
> > if (be16_to_cpu(vhdr->version) < HFSPLUS_MIN_VERSION ||
> > be16_to_cpu(vhdr->version) > HFSPLUS_CURRENT_VERSION) {
> > - printk(KERN_ERR "hfs: wrong filesystem version\n");
> > + printk(KERN_ERR "hfsplus: wrong filesystem version\n");
> > goto out_free_vhdr;
> > }
> > sbi->total_blocks = be32_to_cpu(vhdr->total_blocks);
> > @@ -436,7 +436,7 @@ static int hfsplus_fill_super(struct super_block *sb, void *data, int silent)
> >
> > if ((last_fs_block > (sector_t)(~0ULL) >> (sbi->alloc_blksz_shift - 9)) ||
> > (last_fs_page > (pgoff_t)(~0ULL))) {
> > - printk(KERN_ERR "hfs: filesystem size too large.\n");
> > + printk(KERN_ERR "hfsplus: filesystem size too large.\n");
> > goto out_free_vhdr;
> > }
> >
> > @@ -445,7 +445,7 @@ static int hfsplus_fill_super(struct super_block *sb, void *data, int silent)
> > sb->s_maxbytes = MAX_LFS_FILESIZE;
> >
> > if (!(vhdr->attributes & cpu_to_be32(HFSPLUS_VOL_UNMNT))) {
> > - printk(KERN_WARNING "hfs: Filesystem was "
> > + printk(KERN_WARNING "hfsplus: Filesystem was "
> > "not cleanly unmounted, "
> > "running fsck.hfsplus is recommended. "
> > "mounting read-only.\n");
> > @@ -453,11 +453,11 @@ static int hfsplus_fill_super(struct super_block *sb, void *data, int silent)
> > } else if (test_and_clear_bit(HFSPLUS_SB_FORCE, &sbi->flags)) {
> > /* nothing */
> > } else if (vhdr->attributes & cpu_to_be32(HFSPLUS_VOL_SOFTLOCK)) {
> > - printk(KERN_WARNING "hfs: Filesystem is marked locked, mounting read-only.\n");
> > + printk(KERN_WARNING "hfsplus: Filesystem is marked locked, mounting read-only.\n");
> > sb->s_flags |= MS_RDONLY;
> > } else if ((vhdr->attributes & cpu_to_be32(HFSPLUS_VOL_JOURNALED)) &&
> > !(sb->s_flags & MS_RDONLY)) {
> > - printk(KERN_WARNING "hfs: write access to "
> > + printk(KERN_WARNING "hfsplus: write access to "
> > "a journaled filesystem is not supported, "
> > "use the force option at your own risk, "
> > "mounting read-only.\n");
> > @@ -469,18 +469,18 @@ static int hfsplus_fill_super(struct super_block *sb, void *data, int silent)
> > /* Load metadata objects (B*Trees) */
> > sbi->ext_tree = hfs_btree_open(sb, HFSPLUS_EXT_CNID);
> > if (!sbi->ext_tree) {
> > - printk(KERN_ERR "hfs: failed to load extents file\n");
> > + printk(KERN_ERR "hfsplus: failed to load extents file\n");
> > goto out_free_vhdr;
> > }
> > sbi->cat_tree = hfs_btree_open(sb, HFSPLUS_CAT_CNID);
> > if (!sbi->cat_tree) {
> > - printk(KERN_ERR "hfs: failed to load catalog file\n");
> > + printk(KERN_ERR "hfsplus: failed to load catalog file\n");
> > goto out_close_ext_tree;
> > }
> >
> > inode = hfsplus_iget(sb, HFSPLUS_ALLOC_CNID);
> > if (IS_ERR(inode)) {
> > - printk(KERN_ERR "hfs: failed to load allocation file\n");
> > + printk(KERN_ERR "hfsplus: failed to load allocation file\n");
> > err = PTR_ERR(inode);
> > goto out_close_cat_tree;
> > }
> > @@ -489,7 +489,7 @@ static int hfsplus_fill_super(struct super_block *sb, void *data, int silent)
> > /* Load the root directory */
> > root = hfsplus_iget(sb, HFSPLUS_ROOT_CNID);
> > if (IS_ERR(root)) {
> > - printk(KERN_ERR "hfs: failed to load root directory\n");
> > + printk(KERN_ERR "hfsplus: failed to load root directory\n");
> > err = PTR_ERR(root);
> > goto out_put_alloc_file;
> > }
> > diff --git a/fs/hfsplus/wrapper.c b/fs/hfsplus/wrapper.c
> > index 90effcc..2f1b39b 100644
> > --- a/fs/hfsplus/wrapper.c
> > +++ b/fs/hfsplus/wrapper.c
> > @@ -156,7 +156,7 @@ static int hfsplus_get_last_session(struct super_block *sb,
> > *start = (sector_t)te.cdte_addr.lba << 2;
> > return 0;
> > }
> > - printk(KERN_ERR "hfs: invalid session number or type of track\n");
> > + printk(KERN_ERR "hfsplus: invalid session number or type of track\n");
> > return -EINVAL;
> > }
> > ms_info.addr_format = CDROM_LBA;
> > @@ -235,7 +235,7 @@ reread:
> > error = -EINVAL;
> > if (sbi->s_backup_vhdr->signature != sbi->s_vhdr->signature) {
> > printk(KERN_WARNING
> > - "hfs: invalid secondary volume header\n");
> > + "hfsplus: invalid secondary volume header\n");
> > goto out_free_backup_vhdr;
> > }
> >
> > @@ -259,7 +259,7 @@ reread:
> > blocksize >>= 1;
> >
> > if (sb_set_blocksize(sb, blocksize) != blocksize) {
> > - printk(KERN_ERR "hfs: unable to set blocksize to %u!\n",
> > + printk(KERN_ERR "hfsplus: unable to set blocksize to %u!\n",
> > blocksize);
> > goto out_free_backup_vhdr;
> > }
>
>