From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <57987AF6.1050105@oracle.com>
Date: Wed, 27 Jul 2016 17:12:22 +0800
From: Bob Liu
To: Roger Pau Monné
Cc: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
 konrad.wilk@oracle.com
Subject: Re: [PATCH v3] xen-blkfront: dynamic configuration of per-vbd resources
References: <1469589685-31630-1-git-send-email-bob.liu@oracle.com>
 <20160727080729.7sgmqak42rw6vaok@mac>
In-Reply-To: <20160727080729.7sgmqak42rw6vaok@mac>

On 07/27/2016 04:07 PM, Roger Pau Monné wrote:
..[snip]..
>> @@ -2443,6 +2674,22 @@ static void blkfront_connect(struct blkfront_info *info)
>>  		return;
>>  	}
>>  
>> +	err = device_create_file(&info->xbdev->dev, &dev_attr_max_ring_page_order);
>> +	if (err)
>> +		goto fail;
>> +
>> +	err = device_create_file(&info->xbdev->dev, &dev_attr_max_indirect_segs);
>> +	if (err) {
>> +		device_remove_file(&info->xbdev->dev, &dev_attr_max_ring_page_order);
>> +		goto fail;
>> +	}
>> +
>> +	err = device_create_file(&info->xbdev->dev, &dev_attr_max_queues);
>> +	if (err) {
>> +		device_remove_file(&info->xbdev->dev, &dev_attr_max_ring_page_order);
>> +		device_remove_file(&info->xbdev->dev, &dev_attr_max_indirect_segs);
>> +		goto fail;
>> +	}
>>  	xenbus_switch_state(info->xbdev, XenbusStateConnected);
>>  
>>  	/* Kick pending requests. */
>> @@ -2453,6 +2700,12 @@ static void blkfront_connect(struct blkfront_info *info)
>>  	add_disk(info->gd);
>>  
>>  	info->is_ready = 1;
>> +	return;
>> +
>> +fail:
>> +	blkif_free(info, 0);
>> +	xlvbd_release_gendisk(info);
>> +	return;
> 
> Hm, I'm not sure whether this chunk should be in a separate patch; it seems
> like blkfront_connect doesn't properly clean up on error (if
> xlvbd_alloc_gendisk fails, blkif_free will not be called). Do you think you
> could send the addition of the 'fail' label as a separate patch and fix the
> error path of xlvbd_alloc_gendisk?
> 

Sure, will fix all of your comments above.

>>  }
>>  
>>  /**
>> @@ -2500,8 +2753,16 @@ static void blkback_changed(struct xenbus_device *dev,
>>  		break;
>>  
>>  	case XenbusStateClosed:
>> -		if (dev->state == XenbusStateClosed)
>> +		if (dev->state == XenbusStateClosed) {
>> +			if (info->reconfiguring)
>> +				if (blkfront_resume(info->xbdev)) {
> 
> Could you please join those two conditions:
> 
> if (info->reconfiguring && blkfront_resume(info->xbdev)) { ...
> 
> Also, I'm not sure this is correct: if blkfront sees the "Closing" state on
> blkback, it will try to close the frontend and destroy the block device (see
> blkfront_closing), and this should be avoided.
> You should call
> blkfront_resume as soon as you see the backend move to the Closed or Closing
> states, without calling blkfront_closing.
> 

I didn't get how this can happen: the backend state won't be changed to
'Closing' before blkfront_closing() is called, so I think the current logic
is fine.

Btw: could you please ack [PATCH v2 2/3] xen-blkfront: introduce
blkif_set_queue_limits()? Thank you!

Bob