public inbox for linux-kernel@vger.kernel.org
* [PATCH] FS-Cache: Fix bounds check
       [not found] <20110712124622.9331.83332.sendpatchset@squad5-lp1.lab.bos.redhat.com>
@ 2011-07-13 11:47 ` David Howells
  2011-07-13 11:55   ` David Howells
  0 siblings, 1 reply; 3+ messages in thread
From: David Howells @ 2011-07-13 11:47 UTC (permalink / raw)
  To: geert; +Cc: linux-kernel, stable, David Howells

__fscache_uncache_all_inode_pages() has a loop that walks page index
numbers up to (loff_t)-1.  This is incorrect: the limit should be
(pgoff_t)-1, as on a 32-bit machine pgoff_t is smaller than loff_t.

On m68k the following warning is observed:

fs/fscache/page.c: In function '__fscache_uncache_all_inode_pages':
fs/fscache/page.c:979: warning: comparison is always false due to
limited range of data type

[Should there be a PGOFF_T_MAX constant defined?]

Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: David Howells <dhowells@redhat.com>
---

 fs/fscache/page.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/fs/fscache/page.c b/fs/fscache/page.c
index 60315b3..112359d 100644
--- a/fs/fscache/page.c
+++ b/fs/fscache/page.c
@@ -993,7 +993,7 @@ void __fscache_uncache_all_inode_pages(struct fscache_cookie *cookie,
 
 	pagevec_init(&pvec, 0);
 	next = 0;
-	while (next <= (loff_t)-1 &&
+	while (next <= (pgoff_t)-1 &&
 	       pagevec_lookup(&pvec, mapping, next, PAGEVEC_SIZE)
 	       ) {
 		for (i = 0; i < pagevec_count(&pvec); i++) {



* [PATCH] FS-Cache: Fix bounds check
  2011-07-12 19:26 FS-Cache: Add a helper to bulk uncache pages on an inode Geert Uytterhoeven
@ 2011-07-13 11:55 ` David Howells
  0 siblings, 0 replies; 3+ messages in thread
From: David Howells @ 2011-07-13 11:55 UTC (permalink / raw)
  To: geert; +Cc: linux-kernel, stable, David Howells

__fscache_uncache_all_inode_pages() has a loop that walks page index
numbers up to (loff_t)-1.  This is incorrect: the limit should be
(pgoff_t)-1, as on a 32-bit machine pgoff_t is smaller than loff_t.

On m68k the following warning is observed:

fs/fscache/page.c: In function '__fscache_uncache_all_inode_pages':
fs/fscache/page.c:979: warning: comparison is always false due to
limited range of data type

[Should there be a PGOFF_T_MAX constant defined?]

Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: David Howells <dhowells@redhat.com>
---

 fs/fscache/page.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/fs/fscache/page.c b/fs/fscache/page.c
index 60315b3..112359d 100644
--- a/fs/fscache/page.c
+++ b/fs/fscache/page.c
@@ -993,7 +993,7 @@ void __fscache_uncache_all_inode_pages(struct fscache_cookie *cookie,
 
 	pagevec_init(&pvec, 0);
 	next = 0;
-	while (next <= (loff_t)-1 &&
+	while (next <= (pgoff_t)-1 &&
 	       pagevec_lookup(&pvec, mapping, next, PAGEVEC_SIZE)
 	       ) {
 		for (i = 0; i < pagevec_count(&pvec); i++) {



* Re: [PATCH] FS-Cache: Fix bounds check
  2011-07-13 11:47 ` [PATCH] FS-Cache: Fix bounds check David Howells
@ 2011-07-13 11:55   ` David Howells
  0 siblings, 0 replies; 3+ messages in thread
From: David Howells @ 2011-07-13 11:55 UTC (permalink / raw)
  Cc: dhowells, geert, linux-kernel, stable


Sigh...  In reply to the wrong message; sorry about that.


end of thread, other threads:[~2011-07-13 11:56 UTC | newest]

Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <20110712124622.9331.83332.sendpatchset@squad5-lp1.lab.bos.redhat.com>
2011-07-13 11:47 ` [PATCH] FS-Cache: Fix bounds check David Howells
2011-07-13 11:55   ` David Howells
2011-07-12 19:26 FS-Cache: Add a helper to bulk uncache pages on an inode Geert Uytterhoeven
2011-07-13 11:55 ` [PATCH] FS-Cache: Fix bounds check David Howells
