Date: Wed, 24 Apr 2019 12:02:38 +0200
From: Halil Pasic
To: Cornelia Huck
Cc: Farhan Ali, Eric Farman, kvm@vger.kernel.org, linux-s390@vger.kernel.org,
 pmorel@linux.ibm.com
Subject: Re: [PATCH v3 1/1] vfio-ccw: Prevent quiesce function going into an infinite loop
In-Reply-To: <20190424090915.4a5ab1d9.cohuck@redhat.com>
References: <4d5a4b98ab1b41ac6131b5c36de18b76c5d66898.1555449329.git.alifm@linux.ibm.com>
 <20190417110348.28efc8e3.cohuck@redhat.com>
 <20190417171311.3478402b@oc2783563651>
 <20190419221251.5b4aa9c8.pasic@linux.ibm.com>
 <8bd8ec0b-8b0c-3e74-1b14-7fad7470679e@linux.ibm.com>
 <20190423194251.093304c7.pasic@linux.ibm.com>
 <20190424090915.4a5ab1d9.cohuck@redhat.com>
Message-Id: <20190424120238.4e8e1c2f.pasic@linux.ibm.com>
On Wed, 24 Apr 2019 09:09:15 +0200
Cornelia Huck wrote:

> On Tue, 23 Apr 2019 15:41:34 -0400
> Farhan Ali wrote:
>
> > On 04/23/2019 01:42 PM, Halil Pasic wrote:
> > > One thing I'm confused about is that we don't seem to prevent new
> > > I/O from being submitted. That is, we could still loop indefinitely
> > > if we get new I/O after the 'kill I/O on the subchannel' is done
> > > but before the msch() with the disable is issued.
> >
> > So the quiesce function will be called in the remove and release
> > functions, and also in the mdev reset callback via the
> > VFIO_DEVICE_RESET ioctl.
> >
> > Now the release function is invoked in cases when we hot unplug the
> > device or the guest is gone (or anything that will close the vfio
> > mdev file descriptor, I believe). In such scenarios it's really the
> > userspace which is asking to release the device. Similar for remove,
> > where the user has to explicitly write to the remove file for the
> > mdev to invoke it. Under normal conditions no sane userspace should
> > be doing release/remove while there are still ongoing I/Os :)
> >
> > Conny and I had some discussion on this in v1 of this patch:
> > https://marc.info/?l=kvm&m=155437117823248&w=2
> >
> > > The 'flush all I/O' parts in the commit message and in the code
> > > make this even more confusing.
> >
> > Maybe... if it's too confusing it could be fixed, but IMHO I don't
> > think it's a dealbreaker. If anyone else thinks otherwise, I can go
> > ahead and change it.
>
> I think it's fine -- I wasn't confused.

What I/O is flushed in the workqueue? I guess it is about the line

        flush_workqueue(vfio_ccw_work_q);

But there is no I/O that can be flushed in vfio_ccw_work_q; what lives
there is rather the bottom half of the interrupt handler, if you like.
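To make that concrete, here is a minimal sketch of the pattern I mean.
The names (my_work_q, my_private, my_top_half, my_bottom_half) are made
up for illustration; this is not the actual vfio-ccw code:

        #include <linux/errno.h>
        #include <linux/kernel.h>
        #include <linux/printk.h>
        #include <linux/workqueue.h>

        static struct workqueue_struct *my_work_q;

        struct my_private {
                struct work_struct io_work;
                /* ... interrupt status saved by the top half ... */
        };

        /* bottom half: runs later, in process context, on my_work_q */
        static void my_bottom_half(struct work_struct *work)
        {
                struct my_private *priv =
                        container_of(work, struct my_private, io_work);

                pr_debug("processing saved status for %p\n", priv);
                /* this is where the saved status would be processed */
        }

        /* called once during setup */
        static int my_setup(struct my_private *priv)
        {
                my_work_q = alloc_workqueue("my_wq", 0, 0);
                if (!my_work_q)
                        return -ENOMEM;
                INIT_WORK(&priv->io_work, my_bottom_half);
                return 0;
        }

        /* top half: called from the interrupt handler, defers the rest */
        static void my_top_half(struct my_private *priv)
        {
                /* save the hardware status first, then: */
                queue_work(my_work_q, &priv->io_work);
        }

So flush_workqueue() only waits for already queued bottom halves to
finish; it cannot do anything about I/O that is still in flight on the
subchannel.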
> > > Another thing that I don't quite understand is injecting
> > > interrupts into QEMU for stuff that is actually not guest
> > > initiated.
> >
> > As mentioned above, under normal conditions we shouldn't be doing
> > quiesce. But wouldn't those interrupts just be unsolicited
> > interrupts for the guest?
>
> Yes, you simply cannot keep an enabled subchannel from getting status
> pending with unsolicited status.

I don't think a status that results from a halt subchannel can be
called unsolicited. For example, if no halt signal was issued, the
halt remains pending. IMHO it is confusing, from a guest perspective,
to see a halt pending in an IRB without a HSCH having been executed.

> > > Furthermore I find how cio_cancel_halt_clear() is used quite
> > > confusing. We
>
> Well, that's a problem (if any) with the common I/O layer and beyond
> the scope of this patch.
>
> > > TL;DR:
> > >
> > > I welcome this batch (with an r-b) but I would like the commit
> > > message
>
> So, what does this sentence mean? Confused.

s/batch/patch/, and the sentence is missing 'is gone' at the very end.

> > > and the comment changed so that the misleading 'flush all I/O in
> > > the workqueue'.
> > >
> > > I think 'vfio-ccw: fix cio_cancel_halt_clear() usage' would
> > > reflect the content of this patch better, because reasoning about
> > > the upper limit, and what happens if this upper limit is hit, is
> > > not what this patch is about. It is about a client code bug that
> > > rendered iretry ineffective.
> >
> > I politely disagree with the change in subject line. I think the
> > current subject line describes what we are trying to prevent with
> > this patch. But again, if anyone else feels otherwise, I will go
> > ahead and change :)
>
> No, I agree that the subject line is completely fine.

This is the 'infinite' loop in question:

        iretry = 255;
        ret = cio_cancel_halt_clear(sch, &iretry);
        while (ret == -EBUSY) {
                /*
                 * Flush all I/O and wait for
                 * cancel/halt/clear completion.
                 */
                private->completion = &completion;
                spin_unlock_irq(sch->lock);

                wait_for_completion_timeout(&completion, 3*HZ);

                private->completion = NULL;
                flush_workqueue(vfio_ccw_work_q);
                spin_lock_irq(sch->lock);
                ret = cio_cancel_halt_clear(sch, &iretry);
        };

Considering the documentation of cio_cancel_halt_clear(), and without
fully understanding its implementation, the loop looks IMHO limited to
255 retries. But it is not, because cio_cancel_halt_clear() is used
incorrectly.

Regarding the changed code: sorry, I missed the break on -EIO. With
that, the new loop should indeed be limited to ~255 iterations (see
the sketch appended after my sign-off).

Acked-by: Halil Pasic

Regards,
Halil
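P.S. To spell out why the break on -EIO bounds the loop, here is a
self-contained sketch. try_cancel_halt_clear() and device_still_busy()
are made-up stand-ins, not the real common I/O code; the point is only
that the callee decrements *iretry across calls, so the caller must
initialize the counter exactly once and bail out on -EIO:

        #include <errno.h>
        #include <stdio.h>

        /* stand-in for the hardware state, so the sketch runs anywhere */
        static int device_still_busy(void)
        {
                static int pending = 3;
                return pending-- > 0;
        }

        /*
         * Made-up stand-in for cio_cancel_halt_clear(): it decrements
         * *iretry across calls and reports -EIO once the retries are
         * exhausted.  Re-initializing iretry before every call would
         * defeat this limit -- that was the bug.
         */
        static int try_cancel_halt_clear(int *iretry)
        {
                if (*iretry <= 0)
                        return -EIO;
                (*iretry)--;
                return device_still_busy() ? -EBUSY : 0;
        }

        int main(void)
        {
                int iretry = 255;       /* initialized once, outside the loop */
                int ret;

                do {
                        ret = try_cancel_halt_clear(&iretry);
                        if (ret == -EIO)
                                break;  /* bounds the loop to ~255 iterations */
                        /* the real code waits for and flushes bottom halves here */
                } while (ret == -EBUSY);

                printf("ret=%d, retries left=%d\n", ret, iretry);
                return 0;
        }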