* Re: [PATCH 2/2] xfs: fix getbmap vs mmap deadlock
2009-02-24 13:39 [PATCH 2/2] xfs: fix getbmap vs mmap deadlock Christoph Hellwig
@ 2009-03-16 6:30 ` Christoph Hellwig
2009-03-29 7:44 ` Christoph Hellwig
2009-04-06 23:56 ` Eric Sandeen
2009-04-07 0:42 ` Felix Blyakher
2 siblings, 1 reply; 7+ messages in thread
From: Christoph Hellwig @ 2009-03-16 6:30 UTC (permalink / raw)
To: xfs
ping?
On Tue, Feb 24, 2009 at 08:39:02AM -0500, Christoph Hellwig wrote:
> xfs_getbmap (or rather the formatters called by it) copy out the getbmap
> structures under the ilock, which can deadlock against mmap.  This was
> reported in bugzilla a while ago (#717) and has recently also shown up
> via lockdep.
>
> So allocate a temporary buffer to format the kernel getbmap structures
> into and then copy them out after dropping the locks.
>
> A small problem with this is that it limits the number of extents we
> can copy out to what fits in the maximum allocation size, but I see
> no real way around that.
>
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
>
> Index: xfs/fs/xfs/xfs_bmap.c
> ===================================================================
> --- xfs.orig/fs/xfs/xfs_bmap.c 2009-02-23 20:38:27.512925014 +0100
> +++ xfs/fs/xfs/xfs_bmap.c 2009-02-23 20:40:46.720926193 +0100
> @@ -5867,12 +5867,13 @@ xfs_getbmap(
> int nexleft; /* # of user extents left */
> int subnex; /* # of bmapi's can do */
> int nmap; /* number of map entries */
> - struct getbmapx out; /* output structure */
> + struct getbmapx *out; /* output structure */
> int whichfork; /* data or attr fork */
> int prealloced; /* this is a file with
> * preallocated data space */
> int iflags; /* interface flags */
> int bmapi_flags; /* flags for xfs_bmapi */
> + int cur_ext = 0;
>
> mp = ip->i_mount;
> iflags = bmv->bmv_iflags;
> @@ -5948,6 +5949,13 @@ xfs_getbmap(
> return XFS_ERROR(EINVAL);
> bmvend = bmv->bmv_offset + bmv->bmv_length;
>
> +
> + if (bmv->bmv_count > ULONG_MAX / sizeof(struct getbmapx))
> + return XFS_ERROR(ENOMEM);
> + out = kmem_zalloc(bmv->bmv_count * sizeof(struct getbmapx), KM_MAYFAIL);
> + if (!out)
> + return XFS_ERROR(ENOMEM);
> +
> xfs_ilock(ip, XFS_IOLOCK_SHARED);
> if (whichfork == XFS_DATA_FORK && !(iflags & BMV_IF_DELALLOC)) {
> if (ip->i_delayed_blks || ip->i_size > ip->i_d.di_size) {
> @@ -6001,39 +6009,39 @@ xfs_getbmap(
> ASSERT(nmap <= subnex);
>
> for (i = 0; i < nmap && nexleft && bmv->bmv_length; i++) {
> - int full = 0; /* user array is full */
> -
> - out.bmv_oflags = 0;
> + out[cur_ext].bmv_oflags = 0;
> if (map[i].br_state == XFS_EXT_UNWRITTEN)
> - out.bmv_oflags |= BMV_OF_PREALLOC;
> + out[cur_ext].bmv_oflags |= BMV_OF_PREALLOC;
> else if (map[i].br_startblock == DELAYSTARTBLOCK)
> - out.bmv_oflags |= BMV_OF_DELALLOC;
> - out.bmv_offset = XFS_FSB_TO_BB(mp, map[i].br_startoff);
> - out.bmv_length = XFS_FSB_TO_BB(mp, map[i].br_blockcount);
> - out.bmv_unused1 = out.bmv_unused2 = 0;
> + out[cur_ext].bmv_oflags |= BMV_OF_DELALLOC;
> + out[cur_ext].bmv_offset =
> + XFS_FSB_TO_BB(mp, map[i].br_startoff);
> + out[cur_ext].bmv_length =
> + XFS_FSB_TO_BB(mp, map[i].br_blockcount);
> + out[cur_ext].bmv_unused1 = 0;
> + out[cur_ext].bmv_unused2 = 0;
> ASSERT(((iflags & BMV_IF_DELALLOC) != 0) ||
> (map[i].br_startblock != DELAYSTARTBLOCK));
> if (map[i].br_startblock == HOLESTARTBLOCK &&
> whichfork == XFS_ATTR_FORK) {
> /* came to the end of attribute fork */
> - out.bmv_oflags |= BMV_OF_LAST;
> + out[cur_ext].bmv_oflags |= BMV_OF_LAST;
> goto out_free_map;
> }
>
> - if (!xfs_getbmapx_fix_eof_hole(ip, &out, prealloced,
> - bmvend, map[i].br_startblock))
> + if (!xfs_getbmapx_fix_eof_hole(ip, &out[cur_ext],
> + prealloced, bmvend,
> + map[i].br_startblock))
> goto out_free_map;
>
> - /* format results & advance arg */
> - error = formatter(&arg, &out, &full);
> - if (error || full)
> - goto out_free_map;
> nexleft--;
> bmv->bmv_offset =
> - out.bmv_offset + out.bmv_length;
> + out[cur_ext].bmv_offset +
> + out[cur_ext].bmv_length;
> bmv->bmv_length =
> max_t(__int64_t, 0, bmvend - bmv->bmv_offset);
> bmv->bmv_entries++;
> + cur_ext++;
> }
> } while (nmap && nexleft && bmv->bmv_length);
>
> @@ -6043,6 +6051,16 @@ xfs_getbmap(
> xfs_iunlock_map_shared(ip, lock);
> out_unlock_iolock:
> xfs_iunlock(ip, XFS_IOLOCK_SHARED);
> +
> + for (i = 0; i < cur_ext; i++) {
> + int full = 0; /* user array is full */
> +
> + /* format results & advance arg */
> + error = formatter(&arg, &out[i], &full);
> + if (error || full)
> + break;
> + }
> +
> return error;
> }
>
>
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs
---end quoted text---
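[Editor's note: the overflow check added before kmem_zalloc() in the patch is the standard guard-the-multiplication-then-zero-allocate pattern. A minimal userspace sketch of the same idea — calloc() and a hypothetical 64-byte record type stand in for kmem_zalloc() and struct getbmapx, so the names below are illustrative, not XFS code:]

```c
#include <assert.h>
#include <errno.h>
#include <limits.h>
#include <stdlib.h>

/* Hypothetical stand-in for struct getbmapx (the real one is 64 bytes). */
struct bmap_rec {
	long long bmv_offset;	/* file offset of segment */
	long long bmv_length;	/* length of segment */
	char	  pad[48];
};

/*
 * Mirror of the patch's allocation: refuse counts whose byte size would
 * overflow an unsigned long, then zero-allocate the temporary array that
 * gets filled under the lock and copied out after the lock is dropped.
 */
static struct bmap_rec *alloc_bmap_buffer(unsigned long count)
{
	if (count > ULONG_MAX / sizeof(struct bmap_rec)) {
		errno = ENOMEM;
		return NULL;
	}
	return calloc(count, sizeof(struct bmap_rec));
}
```

[calloc() performs an equivalent overflow check on modern libcs, but the explicit test keeps the failure mode deterministic, matching the XFS_ERROR(ENOMEM) return in the patch.]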
* Re: [PATCH 2/2] xfs: fix getbmap vs mmap deadlock
2009-03-16 6:30 ` Christoph Hellwig
@ 2009-03-29 7:44 ` Christoph Hellwig
0 siblings, 0 replies; 7+ messages in thread
From: Christoph Hellwig @ 2009-03-29 7:44 UTC (permalink / raw)
To: xfs
ping^2?
On Mon, Mar 16, 2009 at 02:30:57AM -0400, Christoph Hellwig wrote:
> ping?
>
> > [original patch quoted in full; snipped]
---end quoted text---
* Re: [PATCH 2/2] xfs: fix getbmap vs mmap deadlock
2009-02-24 13:39 [PATCH 2/2] xfs: fix getbmap vs mmap deadlock Christoph Hellwig
2009-03-16 6:30 ` Christoph Hellwig
@ 2009-04-06 23:56 ` Eric Sandeen
2009-04-07 0:42 ` Felix Blyakher
2 siblings, 0 replies; 7+ messages in thread
From: Eric Sandeen @ 2009-04-06 23:56 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: xfs
Christoph Hellwig wrote:
> xfs_getbmap (or rather the formatters called by it) copy out the getbmap
> structures under the ilock, which can deadlock against mmap. This has
> been reported via bugzilla a while ago (#717) and has recently also
> shown up via lockdep.
>
> So allocate a temporary buffer to format the kernel getbmap structures
> into and then copy them out after dropping the locks.
>
> A little problem with this is that we limit the number of extents we
> can copy out by the maximum allocation size, but I see no real way
> around that.
>
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Eric Sandeen <sandeen@sandeen.net>
> [patch snipped]
* Re: [PATCH 2/2] xfs: fix getbmap vs mmap deadlock
2009-02-24 13:39 [PATCH 2/2] xfs: fix getbmap vs mmap deadlock Christoph Hellwig
2009-03-16 6:30 ` Christoph Hellwig
2009-04-06 23:56 ` Eric Sandeen
@ 2009-04-07 0:42 ` Felix Blyakher
2009-04-09 17:14 ` Christoph Hellwig
2 siblings, 1 reply; 7+ messages in thread
From: Felix Blyakher @ 2009-04-07 0:42 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: xfs
On Feb 24, 2009, at 7:39 AM, Christoph Hellwig wrote:
> xfs_getbmap (or rather the formatters called by it) copy out the getbmap
> structures under the ilock, which can deadlock against mmap. This has
> been reported via bugzilla a while ago (#717) and has recently also
> shown up via lockdep.
>
> So allocate a temporary buffer to format the kernel getbmap structures
> into and then copy them out after dropping the locks.
>
> A little problem with this is that we limit the number of extents we
> can copy out by the maximum allocation size,
Actually with the patch we either get all requested extents, or none
if we fail to get memory for them.
Should we teach the callers to expect ENOMEM and repeat the call
to xfs_getbmap with a smaller number of extents?
Felix
> but I see no real way
> around that.
>
> [rest of the quoted patch snipped]
* Re: [PATCH 2/2] xfs: fix getbmap vs mmap deadlock
2009-04-07 0:42 ` Felix Blyakher
@ 2009-04-09 17:14 ` Christoph Hellwig
2009-04-16 16:05 ` Felix Blyakher
0 siblings, 1 reply; 7+ messages in thread
From: Christoph Hellwig @ 2009-04-09 17:14 UTC (permalink / raw)
To: Felix Blyakher; +Cc: Christoph Hellwig, xfs
On Mon, Apr 06, 2009 at 07:42:57PM -0500, Felix Blyakher wrote:
> Actually with the patch we either get all requested extents, or none
> if we fail to get memory for them.
> Should we teach the callers to expect ENOMEM and repeat the call
> to xfs_getbmap with smaller number of extents?
The problem with any of that is that we don't actually get the exact
extent list but always a racy version.
* Re: [PATCH 2/2] xfs: fix getbmap vs mmap deadlock
2009-04-09 17:14 ` Christoph Hellwig
@ 2009-04-16 16:05 ` Felix Blyakher
0 siblings, 0 replies; 7+ messages in thread
From: Felix Blyakher @ 2009-04-16 16:05 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: xfs
On Apr 9, 2009, at 12:14 PM, Christoph Hellwig wrote:
> On Mon, Apr 06, 2009 at 07:42:57PM -0500, Felix Blyakher wrote:
>> Actually with the patch we either get all requested extents, or none
>> if we fail to get memory for them.
>> Should we teach the callers to expect ENOMEM and repeat the call
>> to xfs_getbmap with smaller number of extents?
>
> The problem with any of that is that we don't actually get the exact
> extent list but always a racy version.
Agreed, but the alternative is ENOMEM, i.e. no bmap at all.
Since the callers of xfs_getbmap are only ioctls from
the user apps, we can move the logic of retrying the
call with a smaller number of extents to the app, with
the understanding that it may not be an exact snapshot of
the file's extents on a very active file.  That would mean
that your change in the kernel is good as is, but the API
for GETBMAP[X] changes by allowing ENOMEM to be returned.
We'll need to update the man page and add ENOMEM handling
to xfs_bmap/xfs_io.
Felix
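[Editor's note: the retry logic Felix describes could take roughly the following shape. This is an illustrative sketch only — the callback type, the halving policy, and the fake kernel side are assumptions for demonstration, not actual xfs_io code:]

```c
#include <assert.h>
#include <errno.h>

/*
 * Hypothetical retry policy: "fn" stands in for the XFS_IOC_GETBMAPX
 * ioctl.  On ENOMEM, halve the requested extent count and try again,
 * accepting that the result may not be an exact snapshot of a busy file.
 */
typedef int (*getbmap_fn)(long *count);

static int getbmap_with_retry(getbmap_fn fn, long *count)
{
	while (*count >= 1) {
		errno = 0;
		if (fn(count) == 0)
			return 0;	/* got the extents at this count */
		if (errno != ENOMEM)
			return -1;	/* unrelated error: give up */
		*count /= 2;		/* shrink the request and retry */
	}
	return -1;			/* could not allocate even one */
}

/* Fake kernel side for demonstration: fails with ENOMEM above 1000. */
static int fake_getbmap(long *count)
{
	if (*count > 1000) {
		errno = ENOMEM;
		return -1;
	}
	return 0;
}
```

[Halving is one possible policy; an app could also fall back to a fixed small count and loop over the file range using the bmv_offset/bmv_length cursor that GETBMAPX already maintains.]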