From: Evgeniy Dushistov <dushistov@mail.ru>
To: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Andrew Morton <akpm@osdl.org>,
	linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: Re [2]: [PATCH] Mark CONFIG_UFS_FS_WRITE as BROKEN
Date: Wed, 1 Feb 2006 23:04:10 +0300	[thread overview]
Message-ID: <20060201200410.GA11747@rain.homenetwork> (raw)
In-Reply-To: <20060131234634.GA13773@mipter.zuzino.mipt.ru>

On Wed, Feb 01, 2006 at 02:46:34AM +0300, Alexey Dobriyan wrote:
> Copying files over several KB will buy you infinite loop in
> __getblk_slow(). Copying files smaller than 1 KB seems to be OK.
> Sometimes files will be filled with zeros. Sometimes incorrectly copied
> file will reappear after next file with truncated size.
The problem, as far as I can see, is the very strange code in
balloc.c:ufs_new_fragments: b_blocknr is changed without any restraint.
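
For reference, the relocation loop that the patch removes (the "-" lines in
the diff below) re-labels a cached buffer in place, roughly:

	bh = sb_bread(sb, tmp + i);	/* buffer still cached under the old block number */
	clear_buffer_dirty(bh);
	bh->b_blocknr = result + i;	/* re-labelled in place */
	mark_buffer_dirty(bh);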

This patch is just a workaround, not a clean solution, but it lets me copy
files larger than 4K. Can you try it and tell me whether it really helps?
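
To show the idea of the workaround in isolation: instead of reassigning
b_blocknr, read the fragment at its old location, get a separate buffer for
the new location, copy the data across, and mark the new buffer dirty. This
is only a sketch, not the patch itself; the function name, the
set_buffer_uptodate() call and the error handling are illustrative.

	/* Illustrative helper (not part of the patch): move one fragment's
	 * data from block "from" to block "to" without touching b_blocknr. */
	static int ufs_move_fragment(struct super_block *sb,
				     sector_t from, sector_t to, int sync)
	{
		struct buffer_head *old_bh, *new_bh;

		old_bh = sb_bread(sb, from);	/* old contents, read from disk */
		new_bh = sb_getblk(sb, to);	/* new location, old contents irrelevant */
		if (!old_bh || !new_bh) {
			brelse(old_bh);		/* brelse() ignores NULL */
			brelse(new_bh);
			return -EIO;
		}

		memcpy(new_bh->b_data, old_bh->b_data, old_bh->b_size);
		set_buffer_uptodate(new_bh);	/* whole block was just written */
		mark_buffer_dirty(new_bh);
		if (sync)
			sync_dirty_buffer(new_bh);

		brelse(old_bh);
		brelse(new_bh);
		return 0;
	}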

Signed-off-by: Evgeniy Dushistov <dushistov@mail.ru>

---

--- linux-2.6.16-rc1-mm4/fs/ufs/balloc.c.orig	2006-02-01 22:55:28.245272250 +0300
+++ linux-2.6.16-rc1-mm4/fs/ufs/balloc.c	2006-02-01 22:47:33.455599750 +0300
@@ -241,7 +241,7 @@ unsigned ufs_new_fragments (struct inode
 	struct super_block * sb;
 	struct ufs_sb_private_info * uspi;
 	struct ufs_super_block_first * usb1;
-	struct buffer_head * bh;
+	struct buffer_head * bh, *bh1;
 	unsigned cgno, oldcount, newcount, tmp, request, i, result;
 	
 	UFSD(("ENTER, ino %lu, fragment %u, goal %u, count %u\n", inode->i_ino, fragment, goal, count))
@@ -359,17 +359,23 @@ unsigned ufs_new_fragments (struct inode
 	if (result) {
 		for (i = 0; i < oldcount; i++) {
 			bh = sb_bread(sb, tmp + i);
-			if(bh)
-			{
-				clear_buffer_dirty(bh);
-				bh->b_blocknr = result + i;
+			bh1 = sb_getblk(sb, result+i);
+			if (bh && bh1)	{
+				memcpy(bh1->b_data, bh->b_data, bh->b_size);
+				
 				mark_buffer_dirty (bh);
-				if (IS_SYNC(inode))
+				mark_buffer_dirty(bh1);
+				if (IS_SYNC(inode)) {
 					sync_dirty_buffer(bh);
+					sync_dirty_buffer(bh1);
+				}
 				brelse (bh);
-			}
-			else
-			{
+				brelse(bh1);
+			} else {
+				if (bh)
+					brelse(bh);
+				if (bh1)
+					brelse(bh1);
 				printk(KERN_ERR "ufs_new_fragments: bread fail\n");
 				unlock_super(sb);
 				return 0;


-- 
/Evgeniy


Thread overview: 8+ messages
2006-01-31 23:46 [PATCH] Mark CONFIG_UFS_FS_WRITE as BROKEN Alexey Dobriyan
2006-02-01 15:40 ` Evgeniy Dushistov
2006-02-01 20:04 ` Evgeniy Dushistov [this message]
2006-02-02 23:40   ` Re [2]: " Andrew Morton
2006-02-03 17:46   ` Alexey Dobriyan
2006-02-03 22:44     ` Alexey Dobriyan
2006-02-04  1:18     ` [PATCH] ufs: fill i_size at directory creation Alexey Dobriyan
2006-02-04  6:18       ` Evgeniy Dushistov
