* vhost-net: is there a race for sock in handle_tx/rx?
@ 2012-05-03  8:33 Liu ping fan
  2012-05-03  8:41 ` Michael S. Tsirkin
  0 siblings, 1 reply; 3+ messages in thread
From: Liu ping fan @ 2012-05-03  8:33 UTC (permalink / raw)
  To: netdev; +Cc: Michael S. Tsirkin, kvm, linux-kernel

Hi,

While reading the vhost-net code, I came across the following:

static void handle_tx(struct vhost_net *net)
{
	struct vhost_virtqueue *vq = &net->dev.vqs[VHOST_NET_VQ_TX];
	unsigned out, in, s;
	int head;
	struct msghdr msg = {
		.msg_name = NULL,
		.msg_namelen = 0,
		.msg_control = NULL,
		.msg_controllen = 0,
		.msg_iov = vq->iov,
		.msg_flags = MSG_DONTWAIT,
	};
	size_t len, total_len = 0;
	int err, wmem;
	size_t hdr_size;
	struct socket *sock;
	struct vhost_ubuf_ref *uninitialized_var(ubufs);
	bool zcopy;

	/* TODO: check that we are running from vhost_worker? */
	sock = rcu_dereference_check(vq->private_data, 1);
	if (!sock)
		return;

           --------------------------------> At this point, QEMU can call
vhost_net_set_backend() to install a new backend fd and close
@oldsock->file, so sock->file's refcount can drop to 0.

                                              Can vhost_worker protect
itself against this situation? And how?

	wmem = atomic_read(&sock->sk->sk_wmem_alloc);
       .........................................................................
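
For reference, here is roughly what the backend-swap path on the ioctl
side looks like. This is a simplified sketch from my reading of
drivers/vhost/net.c, not a verbatim quote; the locking around the
device mutex and the error handling are omitted, and the exact call
sequence is paraphrased:

/* Simplified sketch of VHOST_NET_SET_BACKEND handling (paraphrased,
 * not verbatim kernel source). The question is whether the ordering
 * below is enough to keep @oldsock alive while handle_tx() runs. */
static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
{
	struct vhost_virtqueue *vq = n->vqs + index;
	struct socket *sock, *oldsock;

	mutex_lock(&vq->mutex);
	sock = get_socket(fd);			/* look up the new backend */
	oldsock = rcu_dereference_protected(vq->private_data,
					    lockdep_is_held(&vq->mutex));
	if (sock != oldsock) {
		vhost_net_disable_vq(n, vq);
		rcu_assign_pointer(vq->private_data, sock);
		vhost_net_enable_vq(n, vq);
	}
	mutex_unlock(&vq->mutex);

	if (oldsock) {
		/* Wait for any vhost work (handle_tx/handle_rx) that
		 * may still be using @oldsock to complete ... */
		vhost_net_flush_vq(n, index);
		/* ... and only then drop the file reference. */
		fput(oldsock->file);
	}
	return 0;
}

If the flush really waits for a running handle_tx() before the fput(),
the worker would never see refcnt==0 while it still holds @sock. But I
do not see what guarantees that for the rcu_dereference_check() at the
top of handle_tx(), which runs before vq->mutex is taken.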

Is it a race?

Thanks and regards,
pingfan

