From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 28 Jan 2021 09:48:35 -0500
From: Vivek Goyal <vgoyal@redhat.com>
To: Greg Kurz
Subject: Re: [PATCH 2/6] libvhost-user: Use slave_mutex in all slave messages
Message-ID: <20210128144835.GA3342@redhat.com>
References: <20210125180115.22936-1-vgoyal@redhat.com>
 <20210125180115.22936-3-vgoyal@redhat.com>
 <20210128153123.4aba231c@bahia.lan>
In-Reply-To: <20210128153123.4aba231c@bahia.lan>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Cc: mst@redhat.com, qemu-devel@nongnu.org, dgilbert@redhat.com,
 virtio-fs@redhat.com, stefanha@redhat.com, marcandre.lureau@redhat.com

On Thu, Jan 28, 2021 at 03:31:23PM +0100, Greg Kurz wrote:
> On Mon, 25 Jan 2021 13:01:11 -0500
> Vivek Goyal wrote:
> 
> > dev->slave_mutex needs to be taken when sending messages on slave_fd.
> > Currently _vu_queue_notify() does not do that.
> > 
> > Introduce a helper, vu_message_slave_send_receive(), which sends a
> > message and receives the response. Use this helper in all the paths
> > that send messages on the slave_fd channel.
> > 
> 
> Does this fix any known bug ?

I am not aware of any bug; this fix is based on code inspection. I also
wanted a central place to send messages on the slave channel, so that I
can check the state of the slave channel (open/closed) and act
accordingly. Otherwise I would have to do that check at every place
that sends or receives a message on the slave channel.
Vivek

> > Signed-off-by: Vivek Goyal
> > ---
> 
> LGTM
> 
> Reviewed-by: Greg Kurz
> 
> >  subprojects/libvhost-user/libvhost-user.c | 50 ++++++++++++-----------
> >  1 file changed, 27 insertions(+), 23 deletions(-)
> > 
> > diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
> > index 4cf4aef63d..7a56c56dc8 100644
> > --- a/subprojects/libvhost-user/libvhost-user.c
> > +++ b/subprojects/libvhost-user/libvhost-user.c
> > @@ -403,7 +403,7 @@ vu_send_reply(VuDev *dev, int conn_fd, VhostUserMsg *vmsg)
> >   * Processes a reply on the slave channel.
> >   * Entered with slave_mutex held and releases it before exit.
> >   * Returns true on success.
> > - * *payload is written on success
> > + * *payload is written on success, if payload is not NULL.
> >   */
> >  static bool
> >  vu_process_message_reply(VuDev *dev, const VhostUserMsg *vmsg,
> > @@ -427,7 +427,9 @@ vu_process_message_reply(VuDev *dev, const VhostUserMsg *vmsg,
> >          goto out;
> >      }
> >  
> > -    *payload = msg_reply.payload.u64;
> > +    if (payload) {
> > +        *payload = msg_reply.payload.u64;
> > +    }
> >      result = true;
> >  
> >  out:
> > @@ -435,6 +437,25 @@ out:
> >      return result;
> >  }
> >  
> > +/* Returns true on success, false otherwise */
> > +static bool
> > +vu_message_slave_send_receive(VuDev *dev, VhostUserMsg *vmsg, uint64_t *payload)
> > +{
> > +    pthread_mutex_lock(&dev->slave_mutex);
> > +    if (!vu_message_write(dev, dev->slave_fd, vmsg)) {
> > +        pthread_mutex_unlock(&dev->slave_mutex);
> > +        return false;
> > +    }
> > +
> > +    if ((vmsg->flags & VHOST_USER_NEED_REPLY_MASK) == 0) {
> > +        pthread_mutex_unlock(&dev->slave_mutex);
> > +        return true;
> > +    }
> > +
> > +    /* Also unlocks the slave_mutex */
> > +    return vu_process_message_reply(dev, vmsg, payload);
> > +}
> > +
> >  /* Kick the log_call_fd if required.
> >   */
> >  static void
> >  vu_log_kick(VuDev *dev)
> > @@ -1340,16 +1361,8 @@ bool vu_set_queue_host_notifier(VuDev *dev, VuVirtq *vq, int fd,
> >          return false;
> >      }
> >  
> > -    pthread_mutex_lock(&dev->slave_mutex);
> > -    if (!vu_message_write(dev, dev->slave_fd, &vmsg)) {
> > -        pthread_mutex_unlock(&dev->slave_mutex);
> > -        return false;
> > -    }
> > -
> > -    /* Also unlocks the slave_mutex */
> > -    res = vu_process_message_reply(dev, &vmsg, &payload);
> > +    res = vu_message_slave_send_receive(dev, &vmsg, &payload);
> >      res = res && (payload == 0);
> > -
> >      return res;
> >  }
> >  
> > @@ -2395,10 +2408,7 @@ static void _vu_queue_notify(VuDev *dev, VuVirtq *vq, bool sync)
> >          vmsg.flags |= VHOST_USER_NEED_REPLY_MASK;
> >      }
> >  
> > -    vu_message_write(dev, dev->slave_fd, &vmsg);
> > -    if (ack) {
> > -        vu_message_read_default(dev, dev->slave_fd, &vmsg);
> > -    }
> > +    vu_message_slave_send_receive(dev, &vmsg, NULL);
> >      return;
> >  }
> >  
> > @@ -2942,17 +2952,11 @@ int64_t vu_fs_cache_request(VuDev *dev, VhostUserSlaveRequest req, int fd,
> >          return -EINVAL;
> >      }
> >  
> > -    pthread_mutex_lock(&dev->slave_mutex);
> > -    if (!vu_message_write(dev, dev->slave_fd, &vmsg)) {
> > -        pthread_mutex_unlock(&dev->slave_mutex);
> > -        return -EIO;
> > -    }
> > -
> > -    /* Also unlocks the slave_mutex */
> > -    res = vu_process_message_reply(dev, &vmsg, &payload);
> > +    res = vu_message_slave_send_receive(dev, &vmsg, &payload);
> >      if (!res) {
> >          return -EIO;
> >      }
> > +
> >      /*
> >       * Payload is delivered as uint64_t but is actually signed for
> >       * errors.