public inbox for linux-xfs@vger.kernel.org
From: Nathan Scott <nscott@aconex.com>
To: xfs@oss.sgi.com
Subject: [PATCH] make growfs check device size limits too
Date: Thu, 26 Apr 2007 16:30:14 +1000
Message-ID: <1177569014.6273.367.camel@edge>

[-- Attachment #1: Type: text/plain, Size: 578 bytes --]

On the mount path we check for a superblock that describes a filesystem
too large for the running kernel to handle.  This catches an attempt to
mount a >16TB filesystem on i386 (where we are limited by the
page->index size, for XFS metadata buffers in xfs_buf.c).

This patch makes similar checks on the growfs code paths, for both
regular and realtime growth; without them we can apparently end up with
filesystem corruption (going by #xfs chatter).  Untested patch follows;
it would probably be better to do this as a macro, in a header, and call
that in each place...?

cheers.

-- 
Nathan

[-- Attachment #2: growfs.patch --]
[-- Type: text/x-patch, Size: 1759 bytes --]

--- fs/xfs/xfs_fsops.c.orig	2007-04-26 16:05:38.126936000 +1000
+++ fs/xfs/xfs_fsops.c	2007-04-26 16:17:03.385762000 +1000
@@ -148,6 +148,20 @@
 		return error;
 	ASSERT(bp);
 	xfs_buf_relse(bp);
+	/*
+	 * Device drivers seem to be pathological liars... so, guess we
+	 * better check that the size isn't something completely insane.
+	 * Same check is done during mount, so we won't create something
+	 * here that we cannot later mount, at least.
+	 */
+#if XFS_BIG_BLKNOS     /* Limited by ULONG_MAX of page cache index */
+	if (unlikely(
+	    (nb >> (PAGE_CACHE_SHIFT - sbp->sb_blocklog)) > ULONG_MAX))
+#else                  /* Limited by UINT_MAX of sectors */
+	if (unlikely(
+	    (nb << (sbp->sb_blocklog - BBSHIFT)) > UINT_MAX))
+#endif
+		return XFS_ERROR(E2BIG);
 
 	new = nb;	/* use new as a temporary here */
 	nb_mod = do_div(new, mp->m_sb.sb_agblocks);
--- fs/xfs/xfs_rtalloc.c.orig	2007-04-26 16:16:34.695969000 +1000
+++ fs/xfs/xfs_rtalloc.c	2007-04-26 16:22:43.227000750 +1000
@@ -1893,6 +1893,20 @@
 	ASSERT(bp);
 	xfs_buf_relse(bp);
 	/*
+	 * Device drivers seem to be pathological liars... so, guess we
+	 * better check that the size isn't something completely insane.
+	 * Same check is done during mount, so we won't create something
+	 * here that we cannot later mount, at least.
+	 */
+#if XFS_BIG_BLKNOS     /* Limited by ULONG_MAX of page cache index */
+	if (unlikely(
+	    (nrblocks >> (PAGE_CACHE_SHIFT - sbp->sb_blocklog)) > ULONG_MAX))
+#else                  /* Limited by UINT_MAX of sectors */
+	if (unlikely(
+	    (nrblocks << (sbp->sb_blocklog - BBSHIFT)) > UINT_MAX))
+#endif
+		return XFS_ERROR(E2BIG);
+	/*
 	 * Calculate new parameters.  These are the final values to be reached.
 	 */
 	nrextents = nrblocks;

Thread overview: 5+ messages
2007-04-26  6:30 Nathan Scott [this message]
2007-04-26  7:10 ` [PATCH] make growfs check device size limits too Christoph Hellwig
2007-04-26 23:45   ` Nathan Scott
2007-04-27  2:24     ` Eric Sandeen
2007-04-27  6:16     ` David Chinner
