From: NeilBrown <neilb@suse.com>
To: Artur Paszkiewicz <artur.paszkiewicz@intel.com>,
	Shaohua Li <shli@kernel.org>
Cc: Linux Raid <linux-raid@vger.kernel.org>
Subject: Re: [PATCH] md: create new workqueue for object destruction
Date: Wed, 01 Nov 2017 14:57:14 +1100
Message-ID: <87po92veg5.fsf@notabene.neil.brown.name>
In-Reply-To: <c34d2d7f-2740-365f-85e4-7ec9c9c66f1a@intel.com>

On Mon, Oct 30 2017, Artur Paszkiewicz wrote:

> On 10/29/2017 11:18 PM, NeilBrown wrote:
>> On Fri, Oct 27 2017, Artur Paszkiewicz wrote:
>> 
>>> On 10/23/2017 01:31 AM, NeilBrown wrote:
>>>> On Fri, Oct 20 2017, Artur Paszkiewicz wrote:
>>>>
>>>>> On 10/20/2017 12:28 AM, NeilBrown wrote:
>>>>>> On Thu, Oct 19 2017, Artur Paszkiewicz wrote:
>>>>>>
>>>>>>> On 10/19/2017 12:36 AM, NeilBrown wrote:
>>>>>>>> On Wed, Oct 18 2017, Artur Paszkiewicz wrote:
>>>>>>>>
>>>>>>>>> On 10/18/2017 09:29 AM, NeilBrown wrote:
>>>>>>>>>> On Tue, Oct 17 2017, Shaohua Li wrote:
>>>>>>>>>>
>>>>>>>>>>> On Tue, Oct 17, 2017 at 04:04:52PM +1100, Neil Brown wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> lockdep currently complains about a potential deadlock:
>>>>>>>>>>>> a sysfs access takes reconfig_mutex and then waits
>>>>>>>>>>>> for a work queue to complete.
>>>>>>>>>>>>
>>>>>>>>>>>> The cause is inappropriate overloading of work-items
>>>>>>>>>>>> on work-queues.
>>>>>>>>>>>>
>>>>>>>>>>>> We currently have two work-queues: md_wq and md_misc_wq.
>>>>>>>>>>>> They service 5 different tasks:
>>>>>>>>>>>>
>>>>>>>>>>>>   mddev->flush_work                       md_wq
>>>>>>>>>>>>   mddev->event_work (for dm-raid)         md_misc_wq
>>>>>>>>>>>>   mddev->del_work (mddev_delayed_delete)  md_misc_wq
>>>>>>>>>>>>   mddev->del_work (md_start_sync)         md_misc_wq
>>>>>>>>>>>>   rdev->del_work                          md_misc_wq
>>>>>>>>>>>>
>>>>>>>>>>>> We need to call flush_workqueue() for md_start_sync and ->event_work
>>>>>>>>>>>> while holding reconfig_mutex, but mustn't hold it when
>>>>>>>>>>>> flushing mddev_delayed_delete or rdev->del_work.
>>>>>>>>>>>>
>>>>>>>>>>>> md_wq is a bit special as it has WQ_MEM_RECLAIM so it is
>>>>>>>>>>>> best to leave that alone.
>>>>>>>>>>>>
>>>>>>>>>>>> So create a new workqueue, md_del_wq, and a new work_struct,
>>>>>>>>>>>> mddev->sync_work, so we can keep two classes of work separate.
>>>>>>>>>>>>
>>>>>>>>>>>> md_del_wq and ->del_work are used only for destroying rdev
>>>>>>>>>>>> and mddev.
>>>>>>>>>>>> md_misc_wq is used for event_work and sync_work.
>>>>>>>>>>>>
>>>>>>>>>>>> Also document the purpose of each flush_workqueue() call.
>>>>>>>>>>>>
>>>>>>>>>>>> This removes the lockdep warning.
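
For clarity, the split described above boils down to roughly the
following - a sketch only, written against the generic workqueue API
using the names from the description, not the actual patch hunks:

    #include <linux/workqueue.h>

    static struct workqueue_struct *md_del_wq;

    /* init: an ordinary (non-WQ_MEM_RECLAIM) queue used only for
     * tearing down rdev and mddev objects */
    md_del_wq = alloc_workqueue("md_del", 0, 0);

    /* object destruction gets queued on md_del_wq ... */
    INIT_WORK(&mddev->del_work, mddev_delayed_delete);
    queue_work(md_del_wq, &mddev->del_work);

    /* ... and md_del_wq is only ever flushed *without* reconfig_mutex
     * held */
    flush_workqueue(md_del_wq);

    /* md_misc_wq keeps ->event_work and the new ->sync_work, so it can
     * still be flushed while holding reconfig_mutex */
    flush_workqueue(md_misc_wq);
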
>>>>>>>>>>>
>>>>>>>>>>> I had exactly the same patch queued internally,
>>>>>>>>>>
>>>>>>>>>> Cool :-)
>>>>>>>>>>
>>>>>>>>>>>                                                   but the mdadm test suite still
>>>>>>>>>>> shows a lockdep warning. I haven't time to check further.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> The only other lockdep warning I've seen since then was some ext4 thing, though I
>>>>>>>>>> haven't tried the full test suite.  I might have a look tomorrow.
>>>>>>>>>
>>>>>>>>> I'm also seeing a lockdep warning with or without this patch,
>>>>>>>>> reproducible with:
>>>>>>>>>
>>>>>>>>
>>>>>>>> Thanks!
>>>>>>>> Looks like using one workqueue for mddev->del_work and rdev->del_work
>>>>>>>> causes problems.
>>>>>>>> Can you try with this addition please?
>>>>>>>
>>>>>>> It helped for that case but now there is another warning triggered by:
>>>>>>>
>>>>>>> export IMSM_NO_PLATFORM=1 # for platforms without IMSM
>>>>>>> mdadm -C /dev/md/imsm0 -eimsm -n4 /dev/sd[a-d] -R
>>>>>>> mdadm -C /dev/md/vol0 -l5 -n4 /dev/sd[a-d] -R --assume-clean
>>>>>>> mdadm -If sda
>>>>>>> mdadm -a /dev/md127 /dev/sda
>>>>>>> mdadm -Ss
>>>>>>
>>>>>> I tried that ... and mdmon gets a SIGSEGV.
>>>>>> imsm_set_disk() calls get_imsm_disk() and gets a NULL back.
>>>>>> It then passes the NULL to mark_failure() and that dereferences it.
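
A NULL check in the monitor path would at least turn that into a
handled error rather than a crash - purely illustrative, not an actual
super-intel.c hunk, and assuming the usual dprintf() debug helper:

    struct imsm_disk *disk = get_imsm_disk(super, n);
    if (!disk) {
            /* no metadata entry for this slot, so don't hand a NULL
             * pointer to mark_failure() */
            dprintf("no metadata entry for slot %d, skipping\n", n);
            return;
    }
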
>>>>>
>>>>> Interesting... I can't reproduce this. Can you show the output from
>>>>> mdadm -E for all disks after mdmon crashes? And maybe a debug log from
>>>>> mdmon?
>>>>
>>>> The crash happens when I run "mdadm -If sda".
>>>> gdb tells me:
>>>>
>>>> Thread 2 "mdmon" received signal SIGSEGV, Segmentation fault.
>>>> [Switching to Thread 0x7f5526c24700 (LWP 4757)]
>>>> 0x000000000041601c in is_failed (disk=0x0) at super-intel.c:1324
>>>> 1324		return (disk->status & FAILED_DISK) == FAILED_DISK;
>>>> (gdb) where
>>>> #0  0x000000000041601c in is_failed (disk=0x0) at super-intel.c:1324
>>>> #1  0x00000000004255a2 in mark_failure (super=0x65fa30, dev=0x660ba0, 
>>>>     disk=0x0, idx=0) at super-intel.c:7973
>>>> #2  0x00000000004260e8 in imsm_set_disk (a=0x6635d0, n=0, state=17)
>>>>     at super-intel.c:8357
>>>> #3  0x0000000000405069 in read_and_act (a=0x6635d0, fds=0x7f5526c23e10)
>>>>     at monitor.c:551
>>>> #4  0x0000000000405c8e in wait_and_act (container=0x65f010, nowait=0)
>>>>     at monitor.c:875
>>>> #5  0x0000000000405dc7 in do_monitor (container=0x65f010) at monitor.c:906
>>>> #6  0x0000000000403037 in run_child (v=0x65f010) at mdmon.c:85
>>>> #7  0x00007f5526fcb494 in start_thread (arg=0x7f5526c24700)
>>>>     at pthread_create.c:333
>>>> #8  0x00007f5526d0daff in clone ()
>>>>     at ../sysdeps/unix/sysv/linux/x86_64/clone.S:97
>>>>
>>>> The super-disks list that get_imsm_dl_disk() looks through contains
>>>> sdc, sdd, sde, but not sda - so get_imsm_disk() returns NULL.
>>>> (the 4 devices I use are sda sdc sdd sde).
>>>>  mdadm --examine of sda and sdc after the crash are below.
>>>>  mdmon debug output is below that.
>>>
>>> Thank you for the information. The metadata output shows that there is
>>> something wrong with sda. Is there anything different about this device?
>>> The other disks are 10M QEMU SCSI drives, is sda the same? Can you
>>> check its serial e.g. with sg_inq?
>> 
>> sdc, sdd, and sde are specified to qemu with
>> 
>>        -hdb /var/tmp/mdtest10 \
>>        -hdc /var/tmp/mdtest11 \
>>        -hdd /var/tmp/mdtest12 \
>> 
>> sda comes from
>>        -drive file=/var/tmp/mdtest13,if=scsi,index=3,media=disk -s
>> 
>> /var/tmp/mdtest* are simple raw images, 10M each.
>> 
>> sg_inq reports sd[cde] as
>>   Vendor: ATA
>>   Product: QEMU HARDDISK
>>   Serial: QM0000[234]
>> 
>> sda is
>>   Vendor: QEMU
>>   Product: QEMU HARDDISK
>>   no serial number.
>> 
>> 
>> If I change my script to use
>>        -drive file=/var/tmp/mdtest13,if=scsi,index=3,serial=QM00009,media=disk -s
>> 
>> for sda, mdmon doesn't crash.  It may well be reasonable to refuse to
>> work with a device that has no serial number.  It is not very friendly
>> to crash :-(
>
> OK, this explains a lot. Can you try the same with this patch? It looks
> like there was insufficient error checking when retrieving the scsi
> serial. Mdadm should now abort when creating the container.
> IMSM_DEVNAME_AS_SERIAL can be used to create an array with disks that
> don't have a serial number.
>
> Thanks,
> Artur
>
> diff --git a/sg_io.c b/sg_io.c
> index 42c91e1e..7889a95e 100644
> --- a/sg_io.c
> +++ b/sg_io.c
> @@ -46,6 +46,9 @@ int scsi_get_serial(int fd, void *buf, size_t buf_len)
>         if (rv)
>                 return rv;
>
> +       if ((io_hdr.info & SG_INFO_OK_MASK) != SG_INFO_OK)
> +               return -1;
> +
>         rsp_len = rsp_buf[3];
>
>         if (!rsp_len || buf_len < rsp_len)

Thanks. That does seem to make a useful difference.
It doesn't crash now.  I need IMSM_DEVNAME_AS_SERIAL=1 to create the
array, but then it runs smoothly.
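
With that check in place, by the way, the caller side becomes simple to
reason about - something like this hypothetical fragment (not the
actual mdadm call site; the buffer size is only illustrative):

    char serial[64];

    if (scsi_get_serial(fd, serial, sizeof(serial)) != 0) {
            /* no usable SCSI serial: refuse the device unless
             * IMSM_DEVNAME_AS_SERIAL says to fall back to using the
             * device name instead */
            return 1;
    }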

Thanks,
NeilBrown
