public inbox for linux-xfs@vger.kernel.org
From: Linda Walsh <xfs@tlinx.org>
To: Dave Chinner <david@fromorbit.com>
Cc: xfs-oss <xfs@oss.sgi.com>
Subject: Re: xfs_freeze same as umount?   How is that helpful?
Date: Thu, 04 Oct 2012 17:10:00 -0700	[thread overview]
Message-ID: <506E2558.2050003@tlinx.org> (raw)
In-Reply-To: <20121004233204.GB23644@dastard>

Dave Chinner wrote:
> On Thu, Oct 04, 2012 at 03:39:33PM -0700, Linda Walsh wrote:
>> Greg Freemyer wrote:
>>> Conceptually it is typically:
>>> - quiesce system
>> ----
>> 	Um... it seems that this is equivalent to being
>> able to umount the disk?
> 
> NO, it's not. freeze intentionally leaves the log dirty, whereas
> unmount leaves it clean.
----
	That's what I thought!


> 
>> When I tried xfs_freeze / fs_freeze got fs-busy -- same as I would
>> if I tried to umount it.
> 
> Of course - it's got to write all the dirty data and metadata in
> memory to disk. Freeze is about providing a stable, consistent disk
> image of the filesystem, so it must flush dirty objects from memory
> to disk to provide that.
----
	But it says the freeze failed.... huh.

Just tried it again .. (first time after reboot -- it froze, no
messages or complaints) ?!?!  I don't get it.


It gave me a file-system-busy message before, and as near as I could tell,
it wouldn't allow me to xfs_freeze it.

Trying the same thing now -- no prob.
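For reference, the cycle I was testing was roughly the following (a sketch,
assuming /home is an XFS mount and running as root -- not a transcript of my
exact session):

```shell
# Flush all dirty data and metadata and block new writes; this is the
# step that can fail with "Device or resource busy" if the filesystem
# cannot be quiesced.
xfs_freeze -f /home

# ... filesystem is now stable on disk; writers block here ...

# Thaw the filesystem so writes can proceed again.
xfs_freeze -u /home
```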

(Though I am ALSO on a newer kernel -- I had another problem that I solved
while looking through the logs for hints about the freeze.)

> 
>> I thought the point of xfs_freeze was to allow it to be brought to
>> a consistent state without unmounting it?
> 
> Exactly.
> 
>> Coincidentally, after trying a few freezes, the system froze.
> 
> Entirely possible if you froze the root filesystem and something you
> rely on tried to write to the filesystem.
---
	Nope... it was "/home", and I was running as root, in the root
partition, with /home elements removed from PATH.  Trying to be careful.
Notice I did say 'Coincidentally' (with no quotes in the original).  If I had
thought there might be a connection or problem, at the very least I would
have put 'coincidentally' in quotes.. :-)..


> 
> Anyway, a one-line "it froze" report doesn't tell us anything about
> the problem you saw. So:
----
	Wasn't sure what I saw or that it was related -- exactly...

A possible theory... but nothing I'd blame on xfs -- the last message in the log was:


Oct  4 13:52:50 Ishtar kernel: [985735.911825] INFO: task fetchmail:25872 blocked for more than 120 seconds.
Oct  4 13:52:50 Ishtar kernel: [985735.918777] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.


My kernel *was* set up to panic on a hung task.... instead it just froze...
but why fetchmail hung... well, if the xfs_freeze "partly took" and just issued
the error "because", then that process might have frozen trying to write log
messages to the /home partition.... but 120 secs?  seems like it might have
been going down before I tried anything with xfs_freeze...
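The hung-task behavior above is controlled by sysctls; a minimal sketch of
the "panic on hung task" setup I thought I had (the 120s value matches the
log message, the rest is illustrative):

```shell
# Warn when a task sits in uninterruptible (D) state longer than 120s.
sysctl kernel.hung_task_timeout_secs=120

# Panic instead of just logging when a hung task is detected.
sysctl kernel.hung_task_panic=1

# As the log message itself says, a timeout of 0 disables the check:
#   echo 0 > /proc/sys/kernel/hung_task_timeout_secs
```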


But not sure what to report now, as it's not doing the same things.

Was running 3.2.29, am running 3.5.4 now.... but 3.2.29 had been up for over 10
days... so maybe something else was going on there...

Sorry, when I said corrupt... a mis-statement on my part... dirty was what
I meant -- corrupt only in the sense that the data lacked sufficient integrity
for a blockget-type operation.  But dirty would be the more accurate FS-lingo.

(still had my data-integrity hat on)...

So if I xfs_freeze something, then take a snapshot -- I don't see how any of
that would help in doing an xfs_blockget to get a dump of inodes->blocks, as
it sounds like it would still be dirty...
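For what it's worth, the freeze-plus-snapshot dance I had in mind would be
something like this (assuming /home sits on an LVM volume -- the vg0/home
names are made up for illustration):

```shell
# Quiesce the filesystem so the on-disk image is consistent.
xfs_freeze -f /home

# Take the block-level snapshot while writes are blocked.
lvcreate --snapshot --size 1G --name home-snap /dev/vg0/home

# Thaw immediately; the snapshot preserves the frozen image.
xfs_freeze -u /home
```

The snapshot then holds a consistent (though, as Dave notes, still
log-dirty) image that can be examined or mounted separately.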

Hey, I think xfs walks on water, so don't think I'm complaining...just
trying to figure things out.   It's been a good fs for me for over 10 years on
my home systems.


_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

Thread overview: 9+ messages
2012-10-04 15:30 get filename->inode mappings in bulk for a live fs? Linda Walsh
2012-10-04 18:01 ` Greg Freemyer
2012-10-04 18:29   ` Linda Walsh
2012-10-04 18:59     ` Greg Freemyer
2012-10-04 22:39   ` xfs_freeze same as umount? How is that helpful? Linda Walsh
2012-10-04 23:32     ` Dave Chinner
2012-10-05  0:10       ` Linda Walsh [this message]
2012-10-05  0:36         ` Dave Chinner
2012-10-04 22:49 ` get filename->inode mappings in bulk for a live fs? Dave Chinner
