Message-ID: <53F405BA.8090409@huawei.com>
Date: Wed, 20 Aug 2014 10:19:38 +0800
From: zhanghailiang
To: Stefan Hajnoczi
Cc: peter.maydell@linaro.org, mst@redhat.com, Jason Wang,
 luonengjun@huawei.com, peter.huangpeng@huawei.com, qemu-devel@nongnu.org
References: <1408337205-10260-1-git-send-email-zhang.zhanghailiang@huawei.com>
 <53F1A363.8070009@redhat.com> <53F1BA2A.8020008@huawei.com>
 <20140819122953.GG25538@stefanha-thinkpad.redhat.com>
In-Reply-To: <20140819122953.GG25538@stefanha-thinkpad.redhat.com>
Subject: Re: [Qemu-devel] [PATCH] net: Forbid dealing with packets when VM is not running

On 2014/8/19 20:29, Stefan Hajnoczi wrote:
> On Mon, Aug 18, 2014 at 04:32:42PM +0800, zhanghailiang wrote:
>> On 2014/8/18 14:55, Jason Wang wrote:
>>> On 08/18/2014 12:46 PM, zhanghailiang wrote:
>>>> diff --git a/net/net.c b/net/net.c
>>>> index 6d930ea..21f0d48 100644
>>>> --- a/net/net.c
>>>> +++ b/net/net.c
>>>> @@ -242,6 +242,29 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
>>>>      return nc;
>>>>  }
>>>>
>>>> +static void nic_vmstate_change_handler(void *opaque,
>>>> +                                       int running,
>>>> +                                       RunState state)
>>>> +{
>>>> +    NICState *nic = opaque;
>>>> +    NetClientState *nc;
>>>> +    int i, queues;
>>>> +
>>>> +    if (!running) {
>>>> +        return;
>>>> +    }
>>>> +
>>>> +    queues = MAX(1, nic->conf->peers.queues);
>>>> +    for (i = 0; i < queues; i++) {
>>>> +        nc = &nic->ncs[i];
>>>> +        if (nc->receive_disabled
>>>> +            || (nc->info->can_receive && !nc->info->can_receive(nc))) {
>>>> +            continue;
>>>> +        }
>>>> +        qemu_flush_queued_packets(nc);
>>>
>>> How about simply purging the receive queue during stop? If that is OK,
>>> there's no need to introduce an extra vmstate change handler.
>>>
>>
>> I don't know whether it is OK to purge the received packets; it was
>> suggested by Stefan Hajnoczi, and I am waiting for his opinion. :)
>>
>> I think we still need the extra vmstate change handler. Without the
>> change handler, we don't know when the VM is about to stop and when to
>> call qemu_purge_queued_packets().
>
> qemu_flush_queued_packets() sets nc->receive_disabled = 0. This may be
> needed to get packets flowing again if ->receive() previously returned 0.
>
> Purging the queue does not clear nc->receive_disabled, so it is not
> enough.

So this is the reason why we need to flush after the VM runstate comes
back to running again. :)

In the above patch, we don't need to check nc->receive_disabled before
calling qemu_flush_queued_packets(nc), but we still need to check
nc->info->can_receive(nc), is that right?

Before sending another patch, I will check whether this patch has side
effects when we use vhost_net.

Thanks,
zhanghailiang
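
P.S. For reference, a rough sketch (untested, and only my reading of the
discussion above, not a final patch) of what the reworked handler could
look like with the nc->receive_disabled check dropped and only the
can_receive() check kept:

static void nic_vmstate_change_handler(void *opaque,
                                       int running,
                                       RunState state)
{
    NICState *nic = opaque;
    NetClientState *nc;
    int i, queues;

    /* Only act when the VM transitions back to the running state. */
    if (!running) {
        return;
    }

    queues = MAX(1, nic->conf->peers.queues);
    for (i = 0; i < queues; i++) {
        nc = &nic->ncs[i];
        /* No need to test nc->receive_disabled here:
         * qemu_flush_queued_packets() clears it itself.
         * Still skip queues whose device cannot receive right now. */
        if (nc->info->can_receive && !nc->info->can_receive(nc)) {
            continue;
        }
        qemu_flush_queued_packets(nc);
    }
}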