cluster-devel.redhat.com archive mirror
* [Cluster-devel] so detail uevent dmesg printed by dlm when using cluster-md
@ 2016-05-31  8:47 zhilong
From: zhilong @ 2016-05-31  8:47 UTC (permalink / raw)
  To: cluster-devel.redhat.com

Hi teigland and all,
    Excuse me, I noticed that since commit
'075f01775f53640af4a2ca3ed8cbc71de6e37582' upstream, dlm always prints
a lot of detailed uevent messages to dmesg when it is invoked by
cluster-md. Would you have a look at my example below?

For example:
     When I simply create a cluster-md array, or grow the clustered
array, dlm prints very detailed messages to dmesg. Personally, to keep
the kernel log concise, I would prefer that this information only be
printed in a debug mode. It's just my opinion, and I really look
forward to hearing yours on this scenario.
     I'm sorry about the redundant messages quoted below.

    # mdadm -CR /dev/md0 --bitmap=clustered -l1 -n2 /dev/sdb /dev/sdc --assume-clean
     mdadm: /dev/sdb appears to be part of a raid array:
             level=raid1 devices=2 ctime=Mon May 30 11:31:56 2016
     mdadm: Note: this array has metadata at the start and
          may not be suitable as a boot device.  If you plan to
          store '/boot' on this device please ensure that
          your boot-loader understands md/v1.x metadata, or use
          --metadata=0.90
     mdadm: /dev/sdc appears to be part of a raid array:
             level=raid1 devices=2 ctime=Mon May 30 11:31:56 2016
     mdadm: Defaulting to version 1.2 metadata
     mdadm: array /dev/md0 started.
     # dmesg
     [ 1743.420595] dlm: Using TCP for communications
     [ 1743.421825] dlm: cluster: joining the lockspace group...
     [ 1743.425482] dlm: cluster: group event done 0 0
     [ 1743.425486] dlm: cluster: dlm_recover 1
     [ 1743.425505] dlm: cluster: add member 1084783217
     [ 1743.425508] dlm: cluster: dlm_recover_members 1 nodes
     [ 1743.425510] dlm: cluster: generation 1 slots 1 1:1084783217
     [ 1743.425512] dlm: cluster: dlm_recover_directory
     [ 1743.425513] dlm: cluster: dlm_recover_directory 0 in 0 new
     [ 1743.425514] dlm: cluster: dlm_recover_directory 0 out 0 messages
     [ 1743.425526] dlm: cluster: dlm_recover 1 generation 1 done: 0 ms
     [ 1743.425538] dlm: cluster: join complete
     [ 1744.425866] dlm: cluster: leaving the lockspace group...
     [ 1744.426770] dlm: cluster: group event done 0 0
     [ 1744.427142] dlm: cluster: release_lockspace final free
     [ 1744.435623] dlm: Using TCP for communications
     [ 1744.435775] dlm: cluster: joining the lockspace group...
     [ 1744.440172] dlm: cluster: group event done 0 0
     [ 1744.440198] dlm: cluster: dlm_recover 1
     [ 1744.440216] dlm: cluster: add member 1084783217
     [ 1744.440220] dlm: cluster: dlm_recover_members 1 nodes
     [ 1744.440222] dlm: cluster: generation 1 slots 1 1:1084783217
     [ 1744.440224] dlm: cluster: dlm_recover_directory
     [ 1744.440225] dlm: cluster: dlm_recover_directory 0 in 0 new
     [ 1744.440226] dlm: cluster: dlm_recover_directory 0 out 0 messages
     [ 1744.440237] dlm: cluster: dlm_recover 1 generation 1 done: 0 ms
     [ 1744.440252] dlm: cluster: join complete
     [ 1745.440621] dlm: cluster: leaving the lockspace group...
     [ 1745.441521] dlm: cluster: group event done 0 0
     [ 1745.441589] dlm: cluster: release_lockspace final free
     [ 1745.452778] dlm: Using TCP for communications
     [ 1745.452853] dlm: cluster: joining the lockspace group...
     [ 1745.457448] dlm: cluster: group event done 0 0
     [ 1745.457455] dlm: cluster: dlm_recover 1
     [ 1745.457488] dlm: cluster: add member 1084783217
     [ 1745.457494] dlm: cluster: dlm_recover_members 1 nodes
     [ 1745.457498] dlm: cluster: generation 1 slots 1 1:1084783217
     [ 1745.457543] dlm: cluster: dlm_recover_directory
     [ 1745.457547] dlm: cluster: dlm_recover_directory 0 in 0 new
     [ 1745.457549] dlm: cluster: dlm_recover_directory 0 out 0 messages
     [ 1745.457568] dlm: cluster: dlm_recover 1 generation 1 done: 0 ms
     [ 1745.457654] dlm: cluster: join complete
     [ 1746.458672] dlm: cluster: leaving the lockspace group...
     [ 1746.460292] dlm: cluster: group event done 0 0
     [ 1746.460387] dlm: cluster: release_lockspace final free
     [ 1746.472279] dlm: Using TCP for communications
     [ 1746.472382] dlm: cluster: joining the lockspace group...
     [ 1746.476781] dlm: cluster: group event done 0 0
     [ 1746.477023] dlm: cluster: dlm_recover 1
     [ 1746.477049] dlm: cluster: add member 1084783217
     [ 1746.477055] dlm: cluster: dlm_recover_members 1 nodes
     [ 1746.477058] dlm: cluster: generation 1 slots 1 1:1084783217
     [ 1746.477059] dlm: cluster: dlm_recover_directory
     [ 1746.477061] dlm: cluster: dlm_recover_directory 0 in 0 new
     [ 1746.477062] dlm: cluster: dlm_recover_directory 0 out 0 messages
     [ 1746.477074] dlm: cluster: dlm_recover 1 generation 1 done: 0 ms
     [ 1746.477087] dlm: cluster: join complete
     [ 1747.478402] dlm: cluster: leaving the lockspace group...
     [ 1747.481051] dlm: cluster: group event done 0 0
     [ 1747.481132] dlm: cluster: release_lockspace final free
     [ 1747.485751] md: bind<sdb>
     [ 1747.490087] md: bind<sdc>
     [ 1747.499718] md/raid1:md0: active with 2 out of 2 mirrors
     [ 1747.503939] md-cluster: EXPERIMENTAL. Use with caution
     [ 1747.503943] Registering Cluster MD functions
     [ 1747.512981] dlm: Using TCP for communications
     [ 1747.513377] dlm: 3f8fd516-f74d-3927-f232-ff88e76ae21a: joining the lockspace group...
     [ 1747.520175] dlm: 3f8fd516-f74d-3927-f232-ff88e76ae21a: group event done 0 0
     [ 1747.520186] dlm: 3f8fd516-f74d-3927-f232-ff88e76ae21a: dlm_recover 1
     [ 1747.520205] dlm: 3f8fd516-f74d-3927-f232-ff88e76ae21a: add member 1084783217
     [ 1747.520209] dlm: 3f8fd516-f74d-3927-f232-ff88e76ae21a: dlm_recover_members 1 nodes
     [ 1747.520211] dlm: 3f8fd516-f74d-3927-f232-ff88e76ae21a: generation 1 slots 1 1:1084783217
     [ 1747.520213] dlm: 3f8fd516-f74d-3927-f232-ff88e76ae21a: dlm_recover_directory
     [ 1747.520214] dlm: 3f8fd516-f74d-3927-f232-ff88e76ae21a: dlm_recover_directory 0 in 0 new
     [ 1747.520215] dlm: 3f8fd516-f74d-3927-f232-ff88e76ae21a: dlm_recover_directory 0 out 0 messages
     [ 1747.520226] dlm: 3f8fd516-f74d-3927-f232-ff88e76ae21a: dlm_recover 1 generation 1 done: 0 ms
     [ 1747.520240] dlm: 3f8fd516-f74d-3927-f232-ff88e76ae21a: join complete
     [ 1747.521757] md-cluster: Joined cluster 3f8fd516-f74d-3927-f232-ff88e76ae21a slot 1
     [ 1747.521783] bitmap_read_sb:597 bm slot: 1 offset: 16
     [ 1747.522179] created bitmap (1 pages) for device md0
     [ 1747.522398] md0: bitmap initialized from disk: read 1 pages, set 0 of 10 bits
     [ 1747.522471] bitmap_read_sb:597 bm slot: 2 offset: 24
     [ 1747.522656] created bitmap (1 pages) for device md0
     [ 1747.522879] md0: bitmap initialized from disk: read 1 pages, set 0 of 10 bits
     [ 1747.522952] bitmap_read_sb:597 bm slot: 3 offset: 32
     [ 1747.523184] created bitmap (1 pages) for device md0
     [ 1747.523412] md0: bitmap initialized from disk: read 1 pages, set 0 of 10 bits
     [ 1747.523429] bitmap_read_sb:597 bm slot: 0 offset: 8
     [ 1747.523685] created bitmap (1 pages) for device md0
     [ 1747.523934] md0: bitmap initialized from disk: read 1 pages, set 10 of 10 bits
     [ 1747.525628] md0: detected capacity change from 0 to 628555776

# mdadm --grow /dev/md0 --raid-devices=3 --add /dev/sdb
mdadm: added /dev/sdb
raid_disks for /dev/md0 set to 3
# dmesg
[69454.078715] dlm: cluster: joining the lockspace group...
[69454.082646] dlm: cluster: group event done 0 0
[69454.082652] dlm: cluster: dlm_recover 1
[69454.082670] dlm: cluster: add member 1084783217
[69454.082673] dlm: cluster: dlm_recover_members 1 nodes
[69454.082676] dlm: cluster: generation 1 slots 1 1:1084783217
[69454.082677] dlm: cluster: dlm_recover_directory
[69454.082678] dlm: cluster: dlm_recover_directory 0 in 0 new
[69454.082680] dlm: cluster: dlm_recover_directory 0 out 0 messages
[69454.082691] dlm: cluster: dlm_recover 1 generation 1 done: 0 ms
[69454.082702] dlm: cluster: join complete
[69455.083138] dlm: cluster: leaving the lockspace group...
[69455.084491] dlm: cluster: group event done 0 0
[69455.084599] dlm: cluster: release_lockspace final free
[69455.091931] dlm: cluster: joining the lockspace group...
[69455.095412] dlm: cluster: group event done 0 0
[69455.095418] dlm: cluster: dlm_recover 1
[69455.095436] dlm: cluster: add member 1084783217
[69455.095439] dlm: cluster: dlm_recover_members 1 nodes
[69455.095447] dlm: cluster: generation 1 slots 1 1:1084783217
[69455.095449] dlm: cluster: dlm_recover_directory
[69455.095450] dlm: cluster: dlm_recover_directory 0 in 0 new
[69455.095452] dlm: cluster: dlm_recover_directory 0 out 0 messages
[69455.095463] dlm: cluster: dlm_recover 1 generation 1 done: 0 ms
[69455.095474] dlm: cluster: join complete
[69456.098041] dlm: cluster: leaving the lockspace group...
[69456.099255] dlm: cluster: group event done 0 0
[69456.099303] dlm: cluster: release_lockspace final free
[69458.198116] md: bind<sdb>
[69458.202266] RAID1 conf printout:
[69458.202271]  --- wd:2 rd:3
[69458.202273]  disk 0, wo:0, o:1, dev:sdd
[69458.202276]  disk 1, wo:0, o:1, dev:sdc
[69458.202277]  disk 2, wo:1, o:1, dev:sdb
[69458.204878] md: recovery of RAID array md0
[69458.204881] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
[69458.204882] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[69458.204884] md: using 128k window, over a total of 613824k.
[69462.750356] md: md0: recovery done.
[69465.060312] RAID1 conf printout:
[69465.060317]  --- wd:3 rd:3
[69465.060320]  disk 0, wo:0, o:1, dev:sdd
[69465.060321]  disk 1, wo:0, o:1, dev:sdc
[69465.060323]  disk 2, wo:0, o:1, dev:sdb
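In the meantime, a stopgap on the viewing side could be to filter these
lines out when reading the log (assuming util-linux dmesg and grep; the
pattern is just an illustration):

```shell
# Viewing-side stopgap only: hide the dlm recovery chatter when
# reading the log.  The messages are still written to the kernel ring
# buffer; this merely filters them from the output.
dmesg | grep -v ' dlm: '
```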


Thanks,
-Zhilong


