public inbox for linux-xfs@vger.kernel.org
 help / color / mirror / Atom feed
* weird quota issue
@ 2014-12-19 21:26 Weber, Charles (NIH/NIA/IRP) [E]
  2014-12-22 16:34 ` Charles Weber
  2014-12-22 20:48 ` Dave Chinner
  0 siblings, 2 replies; 13+ messages in thread
From: Weber, Charles (NIH/NIA/IRP) [E] @ 2014-12-19 21:26 UTC (permalink / raw)
  To: xfs@oss.sgi.com

Hi everyone, long-time xfs/quota user with a new server and a problem.
Hardware is an HP BL460 G7 blade, QLogic Fibre Channel, and 3Par 7200 storage.
Three 16TB volumes are exported from the 3Par to the server via FC. These are thin volumes, but there is plenty of available backing storage.

Server runs current patched CentOS 6.6
kernel 2.6.32-504.3.3.el6.x86_64
xfsprogs 2.1.1-16.el6
Default mkfs.xfs options for volumes

Mount options for logical volumes (home_lv 39TB, imap_lv 4.6TB):
/dev/mapper/irphome_vg-home_lv on /home type xfs (rw,delaylog,inode64,nobarrier,logbsize=256k,uquota,prjquota)
/dev/mapper/irphome_vg-imap_lv on /mail type xfs (rw,delaylog,inode64,nobarrier,logbsize=256k,uquota,prjquota)

Users come from a large AD via winbind, set not to enumerate. I have seen the bug where xfs_quota report does not list winbind-defined user names; yes, this happens to me.
I can assign a project quota on the smaller volume, but xfs_quota will not report it. I cannot assign a project quota on the larger volume; I get this error: xfs_quota: cannot set limits: Function not implemented.

xfs_quota -x -c 'report -uh' /mail
User quota on /mail (/dev/mapper/irphome_vg-imap_lv)
                        Blocks
User ID      Used   Soft   Hard Warn/Grace
---------- ---------------------------------
root         2.2G      0      0  00 [------]

xfs_quota -x -c 'report -uh' /home

Nothing is returned.

I can set user and project quotas on /mail but cannot see them; I have not tested them yet.
I cannot set user or project quotas on /home.
At one time I could definitely set user quotas on /home; I did so and verified that it worked.

Any ideas what is messed up on the /home volume?
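For anyone hitting the same symptom, a quick way to compare what was requested against what the kernel actually enabled is to read /proc/self/mounts directly. A diagnostic sketch; the helper name and the file parameter are mine, not from this thread:

```shell
# Print the quota-related mount options the kernel has active for a mount
# point. The second argument defaults to /proc/self/mounts; it is a
# parameter only so the function can be exercised against sample data.
kernel_quota_opts() {
    mp=$1
    tab=${2:-/proc/self/mounts}
    # Field 2 is the mount point, field 4 the option list.
    awk -v mp="$mp" '$2 == mp { print $4 }' "$tab" | tr ',' '\n' | grep quota
}
# Usage on a live system:  kernel_quota_opts /home
# If it prints "noquota", quotas are off regardless of what /etc/fstab
# or /etc/mtab claim.
```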



Weber, Charles (NIH/NIA/IRP)
weberc@mail.nih.gov<mailto:weberc@mail.nih.gov>
p: 410-558-8001
c: 443-473-6493
251 Bayview Blvd
Baltimore MD 21224
NCTS performance comments and survey at:
https://niairpkiosk.irp.nia.nih.gov/content/ncts-user-survey






_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: weird quota issue
  2014-12-19 21:26 weird quota issue Weber, Charles (NIH/NIA/IRP) [E]
@ 2014-12-22 16:34 ` Charles Weber
  2014-12-22 20:48 ` Dave Chinner
  1 sibling, 0 replies; 13+ messages in thread
From: Charles Weber @ 2014-12-22 16:34 UTC (permalink / raw)
  To: xfs


[-- Attachment #1.1: Type: text/plain, Size: 2661 bytes --]

An odd fix, but there you go.
1. I saw in earlier list mail a suggestion to look at /proc/self/mounts.
Indeed, my /home was mounted there with the noquota option, even though fstab clearly specified the user and project quota options.
This persisted even after I unmounted the filesystem and ran xfs_repair on the partition in question.

2. I rebooted with a forcefsck file. After the reboot/fsck, /proc/self/mounts and fstab now match: both list the partition with valid quota options.
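The reboot worked here, but XFS quota mount options only take effect at mount time, so a full unmount/mount cycle (not `mount -o remount`) is normally what is needed. A sketch, using this thread's device and mount point; the function name and the DRYRUN switch (which prints the commands instead of running them) are assumptions of mine:

```shell
# Unmount and remount an XFS filesystem so uquota/prjquota take effect.
# Set DRYRUN=echo to preview the commands without executing anything.
remount_with_quota() {
    dev=$1
    mp=$2
    ${DRYRUN:-} umount "$mp"
    ${DRYRUN:-} mount -o uquota,prjquota "$dev" "$mp"
    # Confirm against the kernel's own view, not /etc/mtab:
    ${DRYRUN:-} grep " $mp " /proc/self/mounts
}
```

If "Failed to initialize disk quotas" still appears in dmesg after this, the problem is on disk, not in the mount options.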


> On Dec 19, 2014, at 4:26 PM, Weber, Charles (NIH/NIA/IRP) [E] <WeberC@grc.nia.nih.gov> wrote:
> 
> HI everyone, long time xfs/quota user with new server and problem
> hardware is HP BL460 G7 blade, qlogic fiber channel and 3Par 7200 storage
> 3 16TB vols exported from 3Par to server via FC. These are thin volumes, but plenty of available backing storage.
> 
> Server runs current patched CentOS 6.6
> kernel 2.6.32-504.3.3.el6.x86_64
> xfsprogs 2.1.1-16.el6
> Default mkfs.xfs options for volumes
> 
> mount options for logical volumes  home_lv 39TB imap_lv 4.6TB
> /dev/mapper/irphome_vg-home_lv on /home type xfs (rw,delaylog,inode64,nobarrier,logbsize=256k,uquota,prjquota)
> /dev/mapper/irphome_vg-imap_lv on /mail type xfs (rw,delaylog,inode64,nobarrier,logbsize=256k,uquota,prjquota)
> 
> Users are from large AD via winbind set to not enumerate. I saw the bug with xfs_quota report not listing winbind defined user names. Yes this happens to me.
> I can assign project quota to smaller volume. xfs_quota will not report it. I cannot assign a project quota to larger volume. I get this error: xfs_quota: cannot set limits: Function not implemented.
> 
> xfs_quota -x -c 'report -uh' /mail
> User quota on /mail (/dev/mapper/irphome_vg-imap_lv)
>                         Blocks              
> User ID      Used   Soft   Hard Warn/Grace   
> ---------- --------------------------------- 
> root         2.2G      0      0  00 [------]
> 
> xfs_quota -x -c 'report -uh' /home
> 
> nothing is returned
> 
> I can set user and project quotas on /mail but cannot see them. I have not tested them yet.
> I cannot set user or project quotas on /home.
> At one time I could definitely set user quotas on /home. I did so and verified it worked. 
> 
> Any ideas what is messed up on the /home volume?
> 
> 
> 


[-- Attachment #1.2: Type: text/html, Size: 7000 bytes --]


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: weird quota issue
  2014-12-19 21:26 weird quota issue Weber, Charles (NIH/NIA/IRP) [E]
  2014-12-22 16:34 ` Charles Weber
@ 2014-12-22 20:48 ` Dave Chinner
  2014-12-22 22:13   ` Weber, Charles (NIH/NIA/IRP) [C]
  1 sibling, 1 reply; 13+ messages in thread
From: Dave Chinner @ 2014-12-22 20:48 UTC (permalink / raw)
  To: Weber, Charles (NIH/NIA/IRP) [E]; +Cc: xfs@oss.sgi.com

On Fri, Dec 19, 2014 at 09:26:12PM +0000, Weber, Charles (NIH/NIA/IRP) [E] wrote:
> HI everyone, long time xfs/quota user with new server and problem
> hardware is HP BL460 G7 blade, qlogic fiber channel and 3Par 7200 storage
> 3 16TB vols exported from 3Par to server via FC. These are thin volumes, but plenty of available backing storage.
> 
> Server runs current patched CentOS 6.6
> kernel 2.6.32-504.3.3.el6.x86_64
> xfsprogs 2.1.1-16.el6
> Default mkfs.xfs options for volumes
> 
> mount options for logical volumes  home_lv 39TB imap_lv 4.6TB
> /dev/mapper/irphome_vg-home_lv on /home type xfs (rw,delaylog,inode64,nobarrier,logbsize=256k,uquota,prjquota)
> /dev/mapper/irphome_vg-imap_lv on /mail type xfs (rw,delaylog,inode64,nobarrier,logbsize=256k,uquota,prjquota)
> 
> Users are from large AD via winbind set to not enumerate. I saw
> the bug with xfs_quota report not listing winbind defined user
> names. Yes this happens to me.

So just enumerate them by uid. (report -un)

> I can assign project quota to smaller volume. xfs_quota will not
> report it. I cannot assign a project quota to larger volume. I get
> this error: xfs_quota: cannot set limits: Function not
> implemented.

You need to be more specific and document all your quota setup.

http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F

> xfs_quota -x -c 'report -uh' /mail
> User quota on /mail (/dev/mapper/irphome_vg-imap_lv)
>                         Blocks
> User ID      Used   Soft   Hard Warn/Grace
> ---------- ---------------------------------
> root         2.2G      0      0  00 [------]
> 
> xfs_quota -x -c 'report -uh' /home
> 
> nothing is returned
> 
> I can set user and project quotas on /mail but cannot see them. I have not tested them yet.
> I cannot set user or project quotas on /home.
> At one time I could definitely set user quotas on /home. I did so and verified it worked.
> 
> Any ideas what is messed up on the /home volume?

Not without knowing a bunch more about your project quota setup.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: weird quota issue
  2014-12-22 20:48 ` Dave Chinner
@ 2014-12-22 22:13   ` Weber, Charles (NIH/NIA/IRP) [C]
  2014-12-22 22:35     ` Dave Chinner
  0 siblings, 1 reply; 13+ messages in thread
From: Weber, Charles (NIH/NIA/IRP) [C] @ 2014-12-22 22:13 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs@oss.sgi.com


[-- Attachment #1.1: Type: text/plain, Size: 20056 bytes --]

Thanks for replying. The project part is a red herring and I have abandoned it; the only reason project quotas even came up was the winbind/quota issue. UID is fine.
The more interesting part is that /proc/self/mounts and mtab/fstab are not consistent.

The two filesystems have identical (cut-and-paste) settings in fstab. The results below are from after setting forcefsck and rebooting.

mount <enter>
/dev/mapper/irphome_vg-home_lv on /home type xfs (rw,delaylog,inode64,nobarrier,logbsize=256k,uquota,prjquota)
/dev/mapper/irphome_vg-imap_lv on /mail type xfs (rw,delaylog,inode64,nobarrier,logbsize=256k,uquota,prjquota)

cat /proc/self/mounts
/dev/mapper/irphome_vg-home_lv /home xfs rw,relatime,attr2,delaylog,nobarrier,inode64,logbsize=256k,sunit=32,swidth=32768,noquota 0 0
/dev/mapper/irphome_vg-imap_lv /mail xfs rw,relatime,attr2,delaylog,nobarrier,inode64,logbsize=256k,sunit=32,swidth=32768,usrquota,prjquota 0 0

cat /etc/mtab
/dev/mapper/irphome_vg-home_lv /home xfs rw,delaylog,inode64,nobarrier,logbsize=256k,uquota,prjquota 0 0
/dev/mapper/irphome_vg-imap_lv /mail xfs rw,delaylog,inode64,nobarrier,logbsize=256k,uquota,prjquota 0 0

List of details per wiki
#############
The most interesting thing in dmesg output was this:
XFS (dm-7): Failed to initialize disk quotas.
(From /dev/disk/by-id, dm-7 is my problem logical volume: dm-name-irphome_vg-home_lv -> ../../dm-7)
#############

2.6.32-504.3.3.el6.x86_64

xfs_repair version 3.1.1

24 CPUs using hyperthreading, so 12 physical cores

mem
MemTotal:       49410148 kB
MemFree:          269628 kB
Buffers:          144256 kB
Cached:         47388884 kB
SwapCached:            0 kB
Active:           731016 kB
Inactive:       46871512 kB
Active(anon):       2976 kB
Inactive(anon):    71740 kB
Active(file):     728040 kB
Inactive(file): 46799772 kB
Unevictable:        5092 kB
Mlocked:            5092 kB
SwapTotal:      14331900 kB
SwapFree:       14331900 kB
Dirty:           3773708 kB
Writeback:             0 kB
AnonPages:         75696 kB
Mapped:           190092 kB
Shmem:               312 kB
Slab:            1012580 kB
SReclaimable:     875160 kB
SUnreclaim:       137420 kB
KernelStack:        5512 kB
PageTables:         9332 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    39036972 kB
Committed_AS:     293324 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      191424 kB
VmallocChunk:   34334431824 kB
HardwareCorrupted:     0 kB
AnonHugePages:      2048 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:        6384 kB
DirectMap2M:     2080768 kB
DirectMap1G:    48234496 kB

/proc/mounts
rootfs / rootfs rw 0 0
proc /proc proc rw,relatime 0 0
sysfs /sys sysfs rw,relatime 0 0
devtmpfs /dev devtmpfs rw,relatime,size=24689396k,nr_inodes=6172349,mode=755 0 0
devpts /dev/pts devpts rw,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /dev/shm tmpfs rw,relatime 0 0
/dev/mapper/VolGroup-lv_root / ext4 rw,relatime,barrier=1,data=ordered 0 0
/proc/bus/usb /proc/bus/usb usbfs rw,relatime 0 0
/dev/sda1 /boot ext4 rw,relatime,barrier=1,data=ordered 0 0
/dev/mapper/irphome_vg-home_lv /home xfs rw,relatime,attr2,delaylog,nobarrier,inode64,logbsize=256k,sunit=32,swidth=32768,noquota 0 0
/dev/mapper/irphome_vg-imap_lv /mail xfs rw,relatime,attr2,delaylog,nobarrier,inode64,logbsize=256k,sunit=32,swidth=32768,usrquota,prjquota 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
/dev/mapper/homesavelv-homesavelv /homesave xfs rw,relatime,attr2,delaylog,sunit=32,swidth=32768,noquota 0 0

 /proc/partitions
major minor  #blocks  name

   8        0  143338560 sda
   8        1     512000 sda1
   8        2  142825472 sda2
   8       32 17179869184 sdc
   8       96 17179869184 sdg
   8      128 17179869184 sdi
   8       48 17179869184 sdd
   8      112 17179869184 sdh
   8       64 17179869184 sde
 253        0   52428800 dm-0
 253        1   14331904 dm-1
   8      160 17179869184 sdk
   8      176 17179869184 sdl
   8      192 17179869184 sdm
   8      224 17179869184 sdo
   8      240 17179869184 sdp
  65        0 17179869184 sdq
 253        3 17179869184 dm-3
 253        4 17179869184 dm-4
 253        5 17179869184 dm-5
 253        6 5368709120 dm-6
 253        7 42949672960 dm-7
   8       16 2147483648 sdb
   8       80 2147483648 sdf
 253        2 2147483648 dm-2
   8      144 2147483648 sdj
   8      208 2147483648 sdn
 253        8 2147467264 dm-8
 
RAID layout
3Par SAN, RAID 6, 12 x 4TB SAS disks (more or less; 3Par does some non-classic RAID layouts)
 
 mpathd (360002ac000000000000000080000bf12) dm-4 3PARdata,VV
size=16T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 1:0:3:12 sdi 8:128 active ready running
  |- 2:0:0:12 sdm 8:192 active ready running
  |- 1:0:2:12 sde 8:64  active ready running
  `- 2:0:5:12 sdq 65:0  active ready running
mpathc (360002ac000000000000000070000bf12) dm-5 3PARdata,VV
size=16T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 1:0:2:11 sdd 8:48  active ready running
  |- 2:0:0:11 sdl 8:176 active ready running
  |- 1:0:3:11 sdh 8:112 active ready running
  `- 2:0:5:11 sdp 8:240 active ready running
mpathb (360002ac000000000000000060000bf12) dm-3 3PARdata,VV
size=16T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 1:0:2:10 sdc 8:32  active ready running
  |- 2:0:0:10 sdk 8:160 active ready running
  |- 1:0:3:10 sdg 8:96  active ready running
  `- 2:0:5:10 sdo 8:224 active ready running
mpathg (360002ac000000000000000110000bf12) dm-2 3PARdata,VV
size=2.0T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 1:0:2:1  sdb 8:16  active ready running
  |- 2:0:5:1  sdj 8:144 active ready running
  |- 1:0:3:1  sdf 8:80  active ready running
  `- 2:0:0:1  sdn 8:208 active ready running

pvscan
  PV /dev/mapper/mpathd   VG irphome_vg   lvm2 [16.00 TiB / 3.00 TiB free]
  PV /dev/mapper/mpathb   VG irphome_vg   lvm2 [16.00 TiB / 0    free]
  PV /dev/mapper/mpathc   VG irphome_vg   lvm2 [16.00 TiB / 0    free]
  PV /dev/mapper/mpathg   VG homesavelv   lvm2 [2.00 TiB / 0    free]
  PV /dev/sda2            VG VolGroup     lvm2 [136.21 GiB / 72.54 GiB free]
  Total: 5 [50.13 TiB] / in use: 5 [50.13 TiB] / in no VG: 0 [0   ]
 vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "irphome_vg" using metadata type lvm2
  Found volume group "homesavelv" using metadata type lvm2
  Found volume group "VolGroup" using metadata type lvm2
 lvscan
  ACTIVE            '/dev/irphome_vg/imap_lv' [5.00 TiB] inherit
  ACTIVE            '/dev/irphome_vg/home_lv' [40.00 TiB] inherit
  ACTIVE            '/dev/homesavelv/homesavelv' [2.00 TiB] inherit
  ACTIVE            '/dev/VolGroup/lv_root' [50.00 GiB] inherit
  ACTIVE            '/dev/VolGroup/lv_swap' [13.67 GiB] inherit
  
  lvdisplay irphome_vg/home_lv
  --- Logical volume ---
  LV Path                /dev/irphome_vg/home_lv
  LV Name                home_lv
  VG Name                irphome_vg
  LV UUID                8wLM12-e43p-UhIh-YTXn-kMBx-RffN-yNz2V5
  LV Write Access        read/write
  LV Creation host, time nuhome.irp.nia.nih.gov <http://nuhome.irp.nia.nih.gov/>, 2014-12-01 17:53:47 -0500
  LV Status              available
  # open                 1
  LV Size                40.00 TiB
  Current LE             10485760
  Segments               3
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:7

Disks, write cache, etc. are controlled by the 3Par SAN; I just define up to 16TB blocks and export them to the host over FC or iSCSI.
In this case I am using FC.

 xfs_info /dev/irphome_vg/home_lv 
meta-data=/dev/mapper/irphome_vg-home_lv isize=256    agcount=40, agsize=268435452 blks
         =                       sectsz=512   attr=2, projid32bit=1
data     =                       bsize=4096   blocks=10737418080, imaxpct=5
         =                       sunit=4      swidth=4096 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=4 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

xfs_info /dev/irphome_vg/imap_lv 
meta-data=/dev/mapper/irphome_vg-imap_lv isize=256    agcount=32, agsize=41943036 blks
         =                       sectsz=512   attr=2, projid32bit=1
data     =                       bsize=4096   blocks=1342177152, imaxpct=5
         =                       sunit=4      swidth=4096 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=4 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

dmesg output
SGI XFS with ACLs, security attributes, large block/inode numbers, no debug enabled
SGI XFS Quota Management subsystem
XFS (dm-7): delaylog is the default now, option is deprecated.
XFS (dm-7): Mounting Filesystem
XFS (dm-7): Ending clean mount
XFS (dm-7): Failed to initialize disk quotas.
XFS (dm-6): delaylog is the default now, option is deprecated.
XFS (dm-6): Mounting Filesystem
XFS (dm-6): Ending clean mount


scsi 2:0:0:0: Direct-Access     3PARdata VV               3210 PQ: 0 ANSI: 6
scsi 2:0:0:10: Direct-Access     3PARdata VV               3210 PQ: 0 ANSI: 6
sd 2:0:0:0: [sdj] 4194304 512-byte logical blocks: (2.14 GB/2.00 GiB)
scsi 2:0:0:11: Direct-Access     3PARdata VV               3210 PQ: 0 ANSI: 6
sd 2:0:0:10: [sdk] 34359738368 512-byte logical blocks: (17.5 TB/16.0 TiB)
sd 2:0:0:0: [sdj] Write Protect is off
sd 2:0:0:0: [sdj] Mode Sense: 8b 00 10 08
scsi 2:0:0:12: Direct-Access     3PARdata VV               3210 PQ: 0 ANSI: 6
sd 2:0:0:11: [sdl] 34359738368 512-byte logical blocks: (17.5 TB/16.0 TiB)
sd 2:0:0:0: [sdj] Write cache: disabled, read cache: enabled, supports DPO and FUA
scsi 2:0:0:254: Enclosure         3PARdata SES              3210 PQ: 0 ANSI: 6
sd 2:0:0:12: [sdm] 34359738368 512-byte logical blocks: (17.5 TB/16.0 TiB)
scsi 2:0:1:0: RAID              HP       HSV400           0005 PQ: 0 ANSI: 5
scsi 2:0:2:0: RAID              HP       HSV400           0005 PQ: 0 ANSI: 5
 sdj:
sd 2:0:0:10: [sdk] Write Protect is off
sd 2:0:0:10: [sdk] Mode Sense: 8b 00 10 08
sd 2:0:0:11: [sdl] Write Protect is off
sd 2:0:0:11: [sdl] Mode Sense: 8b 00 10 08
scsi 2:0:3:0: RAID              HP       HSV400           0005 PQ: 0 ANSI: 5
 unknown partition table
sd 2:0:0:10: [sdk] Write cache: disabled, read cache: enabled, supports DPO and FUA
sd 2:0:0:11: [sdl] Write cache: disabled, read cache: enabled, supports DPO and FUA
scsi 2:0:4:0: RAID              HP       HSV400           0005 PQ: 0 ANSI: 5
sd 2:0:0:12: [sdm] Write Protect is off
sd 2:0:0:12: [sdm] Mode Sense: 8b 00 10 08
sd 2:0:0:12: [sdm] Write cache: disabled, read cache: enabled, supports DPO and FUA
scsi 2:0:5:0: Direct-Access     3PARdata VV               3210 PQ: 0 ANSI: 6
sd 2:0:5:0: [sdn] 4194304 512-byte logical blocks: (2.14 GB/2.00 GiB)
scsi 2:0:5:10: Direct-Access     3PARdata VV               3210 PQ: 0 ANSI: 6
sd 2:0:0:0: [sdj] Attached SCSI disk
scsi 2:0:5:11: Direct-Access     3PARdata VV               3210 PQ: 0 ANSI: 6
sd 2:0:5:0: [sdn] Write Protect is off
sd 2:0:5:0: [sdn] Mode Sense: 8b 00 10 08
SGI XFS with ACLs, security attributes, large block/inode numbers, no debug enabled
SGI XFS Quota Management subsystem
XFS (dm-7): delaylog is the default now, option is deprecated.
XFS (dm-7): Mounting Filesystem
XFS (dm-7): Ending clean mount
XFS (dm-7): Failed to initialize disk quotas.
XFS (dm-6): delaylog is the default now, option is deprecated.
XFS (dm-6): Mounting Filesystem
XFS (dm-6): Ending clean mount
Adding 14331900k swap on /dev/mapper/VolGroup-lv_swap.  Priority:-1 extents:1 across:14331900k 
device-mapper: table: 253:9: multipath: error getting device
device-mapper: ioctl: error adding target to table
pcc-cpufreq: (v1.00.00) driver loaded with frequency limits: 1600 MHz, 2400 MHz
sd 1:0:2:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
sd 2:0:0:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
sd 1:0:3:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
sd 2:0:5:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
device-mapper: multipath: Failing path 8:80.
device-mapper: multipath: Failing path 8:208.
device-mapper: multipath: Failing path 8:144.
end_request: I/O error, dev dm-2, sector 4194176
Buffer I/O error on device dm-2, logical block 524272
end_request: I/O error, dev dm-2, sector 4194176
Buffer I/O error on device dm-2, logical block 524272
end_request: I/O error, dev dm-2, sector 4194288
Buffer I/O error on device dm-2, logical block 524286
end_request: I/O error, dev dm-2, sector 4194288
Buffer I/O error on device dm-2, logical block 524286
end_request: I/O error, dev dm-2, sector 0
Buffer I/O error on device dm-2, logical block 0
end_request: I/O error, dev dm-2, sector 0
Buffer I/O error on device dm-2, logical block 0
end_request: I/O error, dev dm-2, sector 8
Buffer I/O error on device dm-2, logical block 1
end_request: I/O error, dev dm-2, sector 4194296
Buffer I/O error on device dm-2, logical block 524287
end_request: I/O error, dev dm-2, sector 4194296
Buffer I/O error on device dm-2, logical block 524287
end_request: I/O error, dev dm-2, sector 4194296
device-mapper: table: 253:2: multipath: error getting device
device-mapper: ioctl: error adding target to table
device-mapper: table: 253:2: multipath: error getting device
device-mapper: ioctl: error adding target to table
sd 1:0:3:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
sd 1:0:2:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
sd 2:0:5:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
sd 2:0:0:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
scsi 1:0:2:1: Direct-Access     3PARdata VV               3210 PQ: 0 ANSI: 6
sd 1:0:2:1: Attached scsi generic sg4 type 0
sd 1:0:2:1: [sdb] 4294967296 512-byte logical blocks: (2.19 TB/2.00 TiB)
scsi 1:0:3:1: Direct-Access     3PARdata VV               3210 PQ: 0 ANSI: 6
sd 1:0:3:1: Attached scsi generic sg9 type 0
sd 1:0:3:1: [sdf] 4294967296 512-byte logical blocks: (2.19 TB/2.00 TiB)
sd 1:0:2:1: [sdb] Write Protect is off
sd 1:0:2:1: [sdb] Mode Sense: 8b 00 10 08
sd 1:0:2:1: [sdb] Write cache: disabled, read cache: enabled, supports DPO and FUA
sd 1:0:3:1: [sdf] Write Protect is off
sd 1:0:3:1: [sdf] Mode Sense: 8b 00 10 08
sd 1:0:3:1: [sdf] Write cache: disabled, read cache: enabled, supports DPO and FUA
 sdb: unknown partition table
 sdf: unknown partition table
sd 1:0:2:1: [sdb] Attached SCSI disk
sd 1:0:3:1: [sdf] Attached SCSI disk
scsi 2:0:5:1: Direct-Access     3PARdata VV               3210 PQ: 0 ANSI: 6
sd 2:0:5:1: Attached scsi generic sg16 type 0
sd 2:0:5:1: [sdj] 4294967296 512-byte logical blocks: (2.19 TB/2.00 TiB)
scsi 2:0:0:1: Direct-Access     3PARdata VV               3210 PQ: 0 ANSI: 6
sd 2:0:0:1: Attached scsi generic sg25 type 0
sd 2:0:0:1: [sdn] 4294967296 512-byte logical blocks: (2.19 TB/2.00 TiB)
sd 2:0:5:1: [sdj] Write Protect is off
sd 2:0:5:1: [sdj] Mode Sense: 8b 00 10 08
sd 2:0:5:1: [sdj] Write cache: disabled, read cache: enabled, supports DPO and FUA
sd 2:0:0:1: [sdn] Write Protect is off
sd 2:0:0:1: [sdn] Mode Sense: 8b 00 10 08
sd 2:0:0:1: [sdn] Write cache: disabled, read cache: enabled, supports DPO and FUA
 sdj: unknown partition table
 sdn: unknown partition table
sd 2:0:5:1: [sdj] Attached SCSI disk
sd 2:0:0:1: [sdn] Attached SCSI disk
XFS (dm-8): Mounting Filesystem
XFS (dm-8): Ending clean mount
sd 1:0:2:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
sd 1:0:3:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
sd 2:0:5:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
sd 2:0:0:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
 rport-2:0-17: blocked FC remote port time out: removing rport
 rport-2:0-2: blocked FC remote port time out: removing rport

> On Dec 22, 2014, at 3:48 PM, Dave Chinner <david@fromorbit.com> wrote:
> 
> On Fri, Dec 19, 2014 at 09:26:12PM +0000, Weber, Charles (NIH/NIA/IRP) [E] wrote:
>> HI everyone, long time xfs/quota user with new server and problem
>> hardware is HP BL460 G7 blade, qlogic fiber channel and 3Par 7200 storage
>> 3 16TB vols exported from 3Par to server via FC. These are thin volumes, but plenty of available backing storage.
>> 
>> Server runs current patched CentOS 6.6
>> kernel 2.6.32-504.3.3.el6.x86_64
>> xfsprogs 2.1.1-16.el6
>> Default mkfs.xfs options for volumes
>> 
>> mount options for logical volumes  home_lv 39TB imap_lv 4.6TB
>> /dev/mapper/irphome_vg-home_lv on /home type xfs (rw,delaylog,inode64,nobarrier,logbsize=256k,uquota,prjquota)
>> /dev/mapper/irphome_vg-imap_lv on /mail type xfs (rw,delaylog,inode64,nobarrier,logbsize=256k,uquota,prjquota)
>> 
>> Users are from large AD via winbind set to not enumerate. I saw
>> the bug with xfs_quota report not listing winbind defined user
>> names. Yes this happens to me.
> 
> So just enumerate them by uid. (report -un)
> 
>> I can assign project quota to smaller volume. xfs_quota will not
>> report it. I cannot assign a project quota to larger volume. I get
>> this error: xfs_quota: cannot set limits: Function not
>> implemented.
> 
> You need to be more specific and document all your quota setup.
> 
> http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
> 
>> xfs_quota -x -c 'report -uh' /mail
>> User quota on /mail (/dev/mapper/irphome_vg-imap_lv)
>>                        Blocks
>> User ID      Used   Soft   Hard Warn/Grace
>> ---------- ---------------------------------
>> root         2.2G      0      0  00 [------]
>> 
>> xfs_quota -x -c 'report -uh' /home
>> 
>> nothing is returned
>> 
>> I can set user and project quotas on /mail but cannot see them. I have not tested them yet.
>> I cannot set user or project quotas on /home.
>> At one time I could definitely set user quotas on /home. I did so and verified it worked.
>> 
>> Any ideas what is messed up on the /home volume?
> 
> Not without knowing a bunch more about your project quota setup.
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com


[-- Attachment #1.2: Type: text/html, Size: 37502 bytes --]


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: weird quota issue
  2014-12-22 22:13   ` Weber, Charles (NIH/NIA/IRP) [C]
@ 2014-12-22 22:35     ` Dave Chinner
  2014-12-22 22:46       ` Weber, Charles (NIH/NIA/IRP) [C]
  0 siblings, 1 reply; 13+ messages in thread
From: Dave Chinner @ 2014-12-22 22:35 UTC (permalink / raw)
  To: Weber, Charles (NIH/NIA/IRP) [C]; +Cc: xfs@oss.sgi.com

On Mon, Dec 22, 2014 at 05:13:06PM -0500, Weber, Charles (NIH/NIA/IRP) [C] wrote:
> Thanks for replying. The project part is a red herring and I have
> abandoned it. The only reason project quotas even came up was the
> winbind/quota issue. UID is fine.  The more interesting part is
> the way the /proc/self/mounts and mtab/fstab are not coherent.

If /etc/mtab is not linked to /proc/mounts, then userspace maintains
it and it does not reflect the mount options the kernel has active.
You can put any number of invalid mount options in, and mount will
just dump them in /etc/mtab even though the kernel ignores them.

We generally expect that systems are set up like this:

$ ls -l /etc/mtab
lrwxrwxrwx 1 root root 12 Jan  9  2012 /etc/mtab -> /proc/mounts
$
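On systems where /etc/mtab is still a plain file, as on this CentOS 6 box, it can be replaced with that symlink so the two views can never drift apart. A sketch; the directory parameter is an assumption of mine, there only so the function can be exercised outside /etc:

```shell
# Point mtab at the kernel's own mount table. Requires root on a real
# system; the optional argument substitutes for /etc during testing.
link_mtab() {
    etc=${1:-/etc}
    ln -sfn /proc/mounts "$etc/mtab"
}
```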

> 
> dmesg output
> SGI XFS with ACLs, security attributes, large block/inode numbers, no debug enabled
> SGI XFS Quota Management subsystem
> XFS (dm-7): delaylog is the default now, option is deprecated.
> XFS (dm-7): Mounting Filesystem
> XFS (dm-7): Ending clean mount
> XFS (dm-7): Failed to initialize disk quotas.

Which indicates that there are problems reading or allocating the
quota inodes. What is the output of 'xfs_db -c "sb 0" -c p
/dev/dm-7'?

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: weird quota issue
  2014-12-22 22:35     ` Dave Chinner
@ 2014-12-22 22:46       ` Weber, Charles (NIH/NIA/IRP) [C]
  2014-12-23  0:32         ` Dave Chinner
  0 siblings, 1 reply; 13+ messages in thread
From: Weber, Charles (NIH/NIA/IRP) [C] @ 2014-12-22 22:46 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs@oss.sgi.com

I wonder if it is a thin-provisioning issue? ~40TB is allocated by the SAN, but it is set up not to actually allocate space until it is claimed by the OS.

 xfs_db -c "sb 0" -c p /dev/dm-7

magicnum = 0x58465342
blocksize = 4096
dblocks = 10737418080
rblocks = 0
rextents = 0
uuid = f6a8f271-6e30-4de9-9b60-aa5f91ca1a52
logstart = 5368709124
rootino = 128
rbmino = 129
rsumino = 130
rextsize = 1
agblocks = 268435452
agcount = 40
rbmblocks = 0
logblocks = 521728
versionnum = 0xb5e4
sectsize = 512
inodesize = 256
inopblock = 16
fname = "\000\000\000\000\000\000\000\000\000\000\000\000"
blocklog = 12
sectlog = 9
inodelog = 8
inopblog = 4
agblklog = 28
rextslog = 0
inprogress = 0
imax_pct = 5
icount = 3354944
ifree = 272409
fdblocks = 10317701783
frextents = 0
uquotino = 131
gquotino = 0
qflags = 0
flags = 0
shared_vn = 0
inoalignmt = 2
unit = 4
width = 4096
dirblklog = 0
logsectlog = 0
logsectsize = 0
logsunit = 16384
features2 = 0x8a
bad_features2 = 0x8a
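As an aside, whether quota support is flagged on disk can be read straight out of that versionnum field: XFS defines the superblock quota feature bit as 0x40 (XFS_SB_VERSION_QUOTABIT in the kernel headers). A small sketch; the helper name is mine:

```shell
# Print 1 if the XFS superblock quota feature bit (0x40) is set in the
# versionnum value reported by xfs_db, else 0.
quota_bit_set() {
    printf '%d\n' $(( ($1 & 0x40) != 0 ))
}
# quota_bit_set 0xb5e4   -> 1, i.e. quotas are flagged on this filesystem
```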
> On Dec 22, 2014, at 5:35 PM, Dave Chinner <david@fromorbit.com> wrote:
> 
> On Mon, Dec 22, 2014 at 05:13:06PM -0500, Weber, Charles (NIH/NIA/IRP) [C] wrote:
>> Thanks for replying. The project part is a red herring and I have
>> abandoned it. The only reason project quotas even came up was the
>> winbind/quota issue. UID is fine.  The more interesting part is
>> the way the /proc/self/mounts and mtab/fstab are not coherent.
> 
> If /etc/mtab is not linked to /proc/mounts, then userspace maintains
> it and it does not reflect the mount options the kernel has active.
> You can pass any number of invalid mount options that mount will
> just dump into /etc/mtab even though the kernel ignores them.
> 
> We generally expect that systems are set up like this:
> 
> $ ls -l /etc/mtab
> lrwxrwxrwx 1 root root 12 Jan  9  2012 /etc/mtab -> /proc/mounts
> $
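The mtab check above can be sketched in a few lines (a sketch, not from the thread; the two /proc targets are the usual kernel-backed links):

```python
import os

# Sketch: /etc/mtab only reflects the kernel's active mount options
# when it is a symlink into /proc, as in the ls -l output above.
def mtab_is_kernel_backed(path="/etc/mtab"):
    return (os.path.islink(path)
            and os.readlink(path) in ("/proc/mounts", "/proc/self/mounts"))
```

If this returns False, whatever mount wrote into /etc/mtab may differ from what the kernel actually has active.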
> 
>> 
>> dmesg output
>> SGI XFS with ACLs, security attributes, large block/inode numbers, no debug enabled
>> SGI XFS Quota Management subsystem
>> XFS (dm-7): delaylog is the default now, option is deprecated.
>> XFS (dm-7): Mounting Filesystem
>> XFS (dm-7): Ending clean mount
>> XFS (dm-7): Failed to initialize disk quotas.
> 
> Which indicates that there's problems reading or allocating the
> quota inodes. What is the output of 'xfs_db -c "sb 0" -c p
> /dev/dm-7'?
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: weird quota issue
  2014-12-22 22:46       ` Weber, Charles (NIH/NIA/IRP) [C]
@ 2014-12-23  0:32         ` Dave Chinner
  2014-12-23  2:12           ` Weber, Charles (NIH/NIA/IRP) [E]
  0 siblings, 1 reply; 13+ messages in thread
From: Dave Chinner @ 2014-12-23  0:32 UTC (permalink / raw)
  To: Weber, Charles (NIH/NIA/IRP) [C]; +Cc: xfs@oss.sgi.com

On Mon, Dec 22, 2014 at 05:46:42PM -0500, Weber, Charles (NIH/NIA/IRP) [C] wrote:
> I wonder if it is a thin-provisioning issue? ~40TB is allocated by the SAN but set up not to actually allocate space until it is claimed by the OS. 
> 
>  xfs_db -c "sb 0" -c p /dev/dm-7
> 
....
> versionnum = 0xb5e4

So the quota bit (0x40) is set, hence the kernel will attempt to
enable quotas at mount time.
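As a sketch of the bit test this refers to (0x40 and the versionnum value 0xb5e4 are both from this thread; the constant name is illustrative):

```python
# Sketch: test the quota feature bit in the superblock versionnum.
QUOTA_BIT = 0x40  # the quota bit referred to above

def quota_bit_set(versionnum):
    return bool(versionnum & QUOTA_BIT)

print(quota_bit_set(0xb5e4))  # True: quotas will be initialised at mount
```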

> uquotino = 131
> gquotino = 0
> qflags = 0

So we have no quotas enabled, but a user quota inode is allocated.
The quota flags would have been written to zero by the initial
failure, so this implies that reading the user quota inode failed.
Output of 'xfs_db -c "inode 131" -c p /dev/dm-7', please?

-Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 13+ messages in thread

* RE: weird quota issue
  2014-12-23  0:32         ` Dave Chinner
@ 2014-12-23  2:12           ` Weber, Charles (NIH/NIA/IRP) [E]
  2014-12-23  2:42             ` Dave Chinner
  0 siblings, 1 reply; 13+ messages in thread
From: Weber, Charles (NIH/NIA/IRP) [E] @ 2014-12-23  2:12 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs@oss.sgi.com

here you go

# xfs_db -c "inode 131" -c p /dev/dm-7
core.magic = 0x494e
core.mode = 0100000
core.version = 2
core.format = 3 (btree)
core.nlinkv2 = 1
core.onlink = 0
core.projid_lo = 0
core.projid_hi = 0
core.uid = 0
core.gid = 0
core.flushiter = 4
core.atime.sec = Mon Dec  8 14:55:46 2014
core.atime.nsec = 555792066
core.mtime.sec = Mon Dec  8 14:55:46 2014
core.mtime.nsec = 555792066
core.ctime.sec = Mon Dec  8 14:55:46 2014
core.ctime.nsec = 555792066
core.size = 0
core.nblocks = 283
core.extsize = 0
core.nextents = 252
core.naextents = 0
core.forkoff = 0
core.aformat = 2 (extents)
core.dmevmask = 0
core.dmstate = 0
core.newrtbm = 0
core.prealloc = 0
core.realtime = 0
core.immutable = 0
core.append = 0
core.sync = 0
core.noatime = 0
core.nodump = 0
core.rtinherit = 0
core.projinherit = 0
core.nosymlinks = 0
core.extsz = 0
core.extszinherit = 0
core.nodefrag = 0
core.filestream = 0
core.gen = 0
next_unlinked = null
u.bmbt.level = 1
u.bmbt.numrecs = 1
u.bmbt.keys[1] = [startoff] 1:[0]
u.bmbt.ptrs[1] = 1:2819


Charles Weber
NIA IRP NCTS
410-558-8001

________________________________________
From: Dave Chinner [david@fromorbit.com]
Sent: Monday, December 22, 2014 7:32 PM
To: Weber, Charles (NIH/NIA/IRP) [E]
Cc: xfs@oss.sgi.com
Subject: Re: weird quota issue

On Mon, Dec 22, 2014 at 05:46:42PM -0500, Weber, Charles (NIH/NIA/IRP) [C] wrote:
> I wonder if it is a thin-provisioning issue? ~40TB is allocated by the SAN but set up not to actually allocate space until it is claimed by the OS.
>
>  xfs_db -c "sb 0" -c p /dev/dm-7
>
....
> versionnum = 0xb5e4

So the quota bit (0x40) is set, hence the kernel will attempt to
enable quotas at mount time.

> uquotino = 131
> gquotino = 0
> qflags = 0

So we have no quotas enabled, but a user quota inode is allocated.
The quota flags would have been written to zero by the initial
failure, so this implies that reading the user quota inode failed.
Output of 'xfs_db -c "inode 131" -c p /dev/dm-7', please?

-Dave.
--
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: weird quota issue
  2014-12-23  2:12           ` Weber, Charles (NIH/NIA/IRP) [E]
@ 2014-12-23  2:42             ` Dave Chinner
  2014-12-23  7:19               ` Arkadiusz Miśkiewicz
                                 ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Dave Chinner @ 2014-12-23  2:42 UTC (permalink / raw)
  To: Weber, Charles (NIH/NIA/IRP) [E]; +Cc: xfs@oss.sgi.com

On Tue, Dec 23, 2014 at 02:12:15AM +0000, Weber, Charles (NIH/NIA/IRP) [E] wrote:
> here you go
> 
> # xfs_db -c "inode 131" -c p /dev/dm-7

Nothing obviously wrong there, so there's no clear indication of why
the quota initialisation failed. It hasn't got to quotacheck,
because that throws verbose errors when it fails, so it's something
going wrong during initialisation.

Just to narrow it down, if you mount with just uquota does the
mount succeed? Please post the dmesg output whatever the outcome.
Does mounting with just pquota succeed? If neither succeed, what
happens if you mount with no quotas, then unmount and mount again
with quotas enabled?

If it still doesn't work, I'm going to need an event trace of a
failed mount. Install trace-cmd and run:

# trace-cmd record -e xfs\* mount -o uquota,pquota /dev/dm-7 /mnt/pt
<some output>
# trace-cmd report > trace.out

And then compress the trace.out file and attach it.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: weird quota issue
  2014-12-23  2:42             ` Dave Chinner
@ 2014-12-23  7:19               ` Arkadiusz Miśkiewicz
  2014-12-23 20:35                 ` Dave Chinner
  2014-12-23 16:31               ` Weber, Charles (NIH/NIA/IRP) [C]
  2014-12-23 18:48               ` Weber, Charles (NIH/NIA/IRP) [C]
  2 siblings, 1 reply; 13+ messages in thread
From: Arkadiusz Miśkiewicz @ 2014-12-23  7:19 UTC (permalink / raw)
  To: xfs

On Tuesday 23 of December 2014, Dave Chinner wrote:
> On Tue, Dec 23, 2014 at 02:12:15AM +0000, Weber, Charles (NIH/NIA/IRP) [E] 
wrote:
> > here you go
> > 
> > # xfs_db -c "inode 131" -c p /dev/dm-7
> 
> Nothing obviously wrong there, so there's no clear indication of why
> the quota initialisation failed.

gquotino should be set to null; setting it via xfs_db should fix the problem.

> uquotino = 131
> gquotino = 0
> qflags = 0


Otherwise we end up with my last problem

http://oss.sgi.com/archives/xfs/2014-07/msg00121.html

"- 3.10 kernel is not able to handle the case when uquotino == value, gquotino == 
0. For 3.10 this case is impossible / should never happen. 3.10 expects 
(uquotino == value, gquotino == null) or (uquotino == value, gquotino == 
othervalue) or (uquotino == null, gquotino == value) only."

So I guess 2.6.32 is doing the same.
 
AFAIK xfs_repair doesn't fix this issue. Not sure.
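The expected states quoted above can be sketched as follows (illustrative only; NULLFSINO as the all-ones 64-bit value is an assumption about the on-disk "null" encoding, not something stated in the thread):

```python
# Sketch of the quota-inode states quoted above.
NULLFSINO = (1 << 64) - 1  # assumed "null" inode value (all ones)

def quota_inodes_expected(uquotino, gquotino):
    # 3.10 expects (value, null), (value, othervalue) or (null, value);
    # the state seen in this thread, uquotino == 131 with gquotino == 0,
    # matches none of them, so quota initialisation fails.
    is_value = lambda ino: ino not in (0, NULLFSINO)
    return ((is_value(uquotino) and gquotino == NULLFSINO)
            or (is_value(uquotino) and is_value(gquotino))
            or (uquotino == NULLFSINO and is_value(gquotino)))

print(quota_inodes_expected(131, 0))          # False: the broken state
print(quota_inodes_expected(131, NULLFSINO))  # True
```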

> Dave.


-- 
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: weird quota issue
  2014-12-23  2:42             ` Dave Chinner
  2014-12-23  7:19               ` Arkadiusz Miśkiewicz
@ 2014-12-23 16:31               ` Weber, Charles (NIH/NIA/IRP) [C]
  2014-12-23 18:48               ` Weber, Charles (NIH/NIA/IRP) [C]
  2 siblings, 0 replies; 13+ messages in thread
From: Weber, Charles (NIH/NIA/IRP) [C] @ 2014-12-23 16:31 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs

here you go

no quota mount
very quick to mount
dmesg 
XFS (dm-7): delaylog is the default now, option is deprecated.
XFS (dm-7): Mounting Filesystem


uquota mount
hangs for long time after mount command
XFS (dm-7): Mounting Filesystem
XFS (dm-7): Ending clean mount
XFS (dm-7): Quotacheck needed: Please wait.
XFS (dm-7): Quotacheck: Done.

uquota/prjquota mount
quick mount time
XFS (dm-7): Mounting Filesystem
XFS (dm-7): Ending clean mount
XFS (dm-7): Failed to initialize disk quotas.

prjquota mnt
XFS (dm-7): Failed to initialize disk quotas.

I’ll look into the trace-cmd stuff after lunch.
Chuck
> On Dec 22, 2014, at 9:42 PM, Dave Chinner <david@fromorbit.com> wrote:
> 
> On Tue, Dec 23, 2014 at 02:12:15AM +0000, Weber, Charles (NIH/NIA/IRP) [E] wrote:
>> here you go
>> 
>> # xfs_db -c "inode 131" -c p /dev/dm-7
> 
> Nothing obviously wrong there, so there's no clear indication of why
> the quota initialisation failed. It hasn't got to quotacheck,
> because that throws verbose errors when it fails, so it's something
> going wrong during initialisation.
> 
> Just to narrow it down, if you mount with just uquota does the
> mount succeed? Please post the dmesg output whatever the outcome.
> Does mounting with just pquota succeed? If neither succeed, what
> happens if you mount with no quotas, then unmount and mount again
> with quotas enabled?
> 
> If it still doesn't work, I'm going to need an event trace of a
> failed mount (install trace-cmd and run:
> 
> # trace-cmd record -e xfs\* mount -o uquota,pquota /dev/dm-7 /mnt/pt
> <some output>
> # trace-cmd report > trace.out
> 
> And then compress the trace.out file and attach it.
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: weird quota issue
  2014-12-23  2:42             ` Dave Chinner
  2014-12-23  7:19               ` Arkadiusz Miśkiewicz
  2014-12-23 16:31               ` Weber, Charles (NIH/NIA/IRP) [C]
@ 2014-12-23 18:48               ` Weber, Charles (NIH/NIA/IRP) [C]
  2 siblings, 0 replies; 13+ messages in thread
From: Weber, Charles (NIH/NIA/IRP) [C] @ 2014-12-23 18:48 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs@oss.sgi.com


[-- Attachment #1.1: Type: text/plain, Size: 3235 bytes --]


I got these errors running trace-cmd report.
trace-cmd: No such file or directory
  function scsi_trace_parse_cdb not defined
  failed to read event print fmt for scsi_dispatch_cmd_start
  function scsi_trace_parse_cdb not defined
  failed to read event print fmt for scsi_dispatch_cmd_error
  function scsi_trace_parse_cdb not defined
  failed to read event print fmt for scsi_dispatch_cmd_done
  function scsi_trace_parse_cdb not defined
  failed to read event print fmt for scsi_dispatch_cmd_timeout
  Error: expected 'field' but read 'print'
  failed to read event format for sys_enter_inotify_init
  Error: expected 'field' but read 'print'
  failed to read event format for sys_enter_sync
  Error: expected 'field' but read 'print'
  failed to read event format for sys_enter_vhangup
  Error: expected 'field' but read 'print'
  failed to read event format for sys_enter_munlockall
  Error: expected 'field' but read 'print'
  failed to read event format for sys_enter_getpgrp
  Error: expected 'field' but read 'print'
  failed to read event format for sys_enter_setsid
  Error: expected 'field' but read 'print'
  failed to read event format for sys_enter_restart_syscall
  Error: expected 'field' but read 'print'
  failed to read event format for sys_enter_pause
  Error: expected 'field' but read 'print'
  failed to read event format for sys_enter_getpid
  Error: expected 'field' but read 'print'
  failed to read event format for sys_enter_getppid
  Error: expected 'field' but read 'print'
  failed to read event format for sys_enter_getuid
  Error: expected 'field' but read 'print'
  failed to read event format for sys_enter_geteuid
  Error: expected 'field' but read 'print'
  failed to read event format for sys_enter_getgid
  Error: expected 'field' but read 'print'
  failed to read event format for sys_enter_getegid
  Error: expected 'field' but read 'print'
  failed to read event format for sys_enter_gettid
  Error: expected 'field' but read 'print'
  failed to read event format for sys_enter_sched_yield
> On Dec 22, 2014, at 9:42 PM, Dave Chinner <david@fromorbit.com> wrote:
> 
> On Tue, Dec 23, 2014 at 02:12:15AM +0000, Weber, Charles (NIH/NIA/IRP) [E] wrote:
>> here you go
>> 
>> # xfs_db -c "inode 131" -c p /dev/dm-7
> 
> Nothing obviously wrong there, so there's no clear indication of why
> the quota initialisation failed. It hasn't got to quotacheck,
> because that throws verbose errors when it fails, so it's something
> going wrong during initialisation.
> 
> Just to narrow it down, if you mount with just uquota does the
> mount succeed? Please post the dmesg output whatever the outcome.
> Does mounting with just pquota succeed? If neither succeed, what
> happens if you mount with no quotas, then unmount and mount again
> with quotas enabled?
> 
> If it still doesn't work, I'm going to need an event trace of a
> failed mount (install trace-cmd and run:
> 
> # trace-cmd record -e xfs\* mount -o uquota,pquota /dev/dm-7 /mnt/pt
> <some output>
> # trace-cmd report > trace.out
> 
> And then compress the trace.out file and attach it.
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com


[-- Attachment #1.2.2: trace-xfs.out.gz --]
[-- Type: application/x-gzip, Size: 5400 bytes --]


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: weird quota issue
  2014-12-23  7:19               ` Arkadiusz Miśkiewicz
@ 2014-12-23 20:35                 ` Dave Chinner
  0 siblings, 0 replies; 13+ messages in thread
From: Dave Chinner @ 2014-12-23 20:35 UTC (permalink / raw)
  To: Arkadiusz Miśkiewicz; +Cc: xfs

On Tue, Dec 23, 2014 at 08:19:20AM +0100, Arkadiusz Miśkiewicz wrote:
> On Tuesday 23 of December 2014, Dave Chinner wrote:
> > On Tue, Dec 23, 2014 at 02:12:15AM +0000, Weber, Charles (NIH/NIA/IRP) [E] 
> wrote:
> > > here you go
> > > 
> > > # xfs_db -c "inode 131" -c p /dev/dm-7
> > 
> > Nothing obviously wrong there, so there's no clear indication of why
> > the quota initialisation failed.
> 
> gquotino should be set to null, setting it via xfs_db should fix the problem

# umount /dev/dm-7
# xfs_db -x -c "sb 0" -c "write gquotino -1" /dev/dm-7
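(For reference, a sketch of why -1 works here: written into an unsigned 64-bit on-disk field, -1 becomes the all-ones value used as "null". The field width is an assumption, not stated in the thread.)

```python
# Sketch: -1 stored in an unsigned 64-bit field is the all-ones value.
FIELD_BITS = 64
null_ino = (1 << FIELD_BITS) - 1  # what "write gquotino -1" ends up storing
print(hex(null_ino))  # 0xffffffffffffffff
```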

> > uquotino = 131
> > gquotino = 0
> > qflags = 0
> 
> Otherwise we end up with my last problem
> 
> http://oss.sgi.com/archives/xfs/2014-07/msg00121.html
> 
> "- 3.10 kernel is not able to handle case when uquotino == value, gquotino == 
> 0. For 3.10 this case is impossible / should never happen. 3.10 expects 
> (uquotino == value, gquotino == null) or (uquotino == value, gquotino == 
> othervalue) or (uqotinfo == null, gruotino == value) only."
> 
> So I guess 2.6.32 is doing the same.

Except that the problem you saw required running a 3.16 kernel to
trigger the unhandled state. I can't see why a system only running
a 2.6.32 kernel would ever get into this state....

> AFAIK xfs_repair doesn't fix this issue. Not sure.

Certainly not the one that comes with CentOS 6: 0 and NULL are both
valid values...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2014-12-23 20:35 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-12-19 21:26 weird quota issue Weber, Charles (NIH/NIA/IRP) [E]
2014-12-22 16:34 ` Charles Weber
2014-12-22 20:48 ` Dave Chinner
2014-12-22 22:13   ` Weber, Charles (NIH/NIA/IRP) [C]
2014-12-22 22:35     ` Dave Chinner
2014-12-22 22:46       ` Weber, Charles (NIH/NIA/IRP) [C]
2014-12-23  0:32         ` Dave Chinner
2014-12-23  2:12           ` Weber, Charles (NIH/NIA/IRP) [E]
2014-12-23  2:42             ` Dave Chinner
2014-12-23  7:19               ` Arkadiusz Miśkiewicz
2014-12-23 20:35                 ` Dave Chinner
2014-12-23 16:31               ` Weber, Charles (NIH/NIA/IRP) [C]
2014-12-23 18:48               ` Weber, Charles (NIH/NIA/IRP) [C]
