* Re: [PATCH 1/2] vsock: each transport cycles only on its own sockets
From: Michael S. Tsirkin @ 2022-03-10 12:53 UTC
To: Jiyong Park
Cc: adelva, kvm, netdev, linux-kernel, virtualization, Stefan Hajnoczi, Jakub Kicinski, David S. Miller

This message had

	In-Reply-To: <20220310124936.4179591-1-jiyong@google.com>

in its header, but 20220310124936.4179591-2-jiyong@google.com was not
sent to the list. Please don't do that. Instead, please write and send
a proper cover letter. Thanks!

On Thu, Mar 10, 2022 at 09:49:35PM +0900, Jiyong Park wrote:
> When iterating over sockets using vsock_for_each_connected_socket, make
> sure that a transport filters out sockets that don't belong to the
> transport.
>
> There actually was an issue caused by this; in a nested VM
> configuration, destroying the nested VM (which often involves closing
> /dev/vhost-vsock if there were h2g connections to the nested VM) kills
> not only the h2g connections, but also all existing g2h connections to
> the (outermost) host, which are totally unrelated.
>
> Tested: executed the following steps on Cuttlefish (Android running on
> a VM) [1]: (1) enter an `adb shell` session - to have a g2h connection
> inside the VM, (2) open and then close /dev/vhost-vsock by
> `exec 3< /dev/vhost-vsock && exec 3<&-`, (3) observe that the adb
> session is not reset.
>
> [1] https://android.googlesource.com/device/google/cuttlefish/
>
> Fixes: c0cfa2d8a788 ("vsock: add multi-transports support")
> Signed-off-by: Jiyong Park <jiyong@google.com>
> ---
>  drivers/vhost/vsock.c            | 4 ++++
>  net/vmw_vsock/virtio_transport.c | 7 +++++++
>  net/vmw_vsock/vmci_transport.c   | 5 +++++
>  3 files changed, 16 insertions(+)
>
> diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
> index 37f0b4274113..853ddac00d5b 100644
> --- a/drivers/vhost/vsock.c
> +++ b/drivers/vhost/vsock.c
> @@ -722,6 +722,10 @@ static void vhost_vsock_reset_orphans(struct sock *sk)
>  	 * executing.
>  	 */
>
> +	/* Only handle our own sockets */
> +	if (vsk->transport != &vhost_transport.transport)
> +		return;
> +
>  	/* If the peer is still valid, no need to reset connection */
>  	if (vhost_vsock_get(vsk->remote_addr.svm_cid))
>  		return;
> diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
> index fb3302fff627..61b24eb31d4b 100644
> --- a/net/vmw_vsock/virtio_transport.c
> +++ b/net/vmw_vsock/virtio_transport.c
> @@ -24,6 +24,7 @@
>  static struct workqueue_struct *virtio_vsock_workqueue;
>  static struct virtio_vsock __rcu *the_virtio_vsock;
>  static DEFINE_MUTEX(the_virtio_vsock_mutex); /* protects the_virtio_vsock */
> +static struct virtio_transport virtio_transport; /* forward declaration */
>
>  struct virtio_vsock {
>  	struct virtio_device *vdev;
> @@ -357,11 +358,17 @@ static void virtio_vsock_event_fill(struct virtio_vsock *vsock)
>
>  static void virtio_vsock_reset_sock(struct sock *sk)
>  {
> +	struct vsock_sock *vsk = vsock_sk(sk);
> +
>  	/* vmci_transport.c doesn't take sk_lock here either.  At least we're
>  	 * under vsock_table_lock so the sock cannot disappear while we're
>  	 * executing.
>  	 */
>
> +	/* Only handle our own sockets */
> +	if (vsk->transport != &virtio_transport.transport)
> +		return;
> +
>  	sk->sk_state = TCP_CLOSE;
>  	sk->sk_err = ECONNRESET;
>  	sk_error_report(sk);
> diff --git a/net/vmw_vsock/vmci_transport.c b/net/vmw_vsock/vmci_transport.c
> index 7aef34e32bdf..cd2f01513fae 100644
> --- a/net/vmw_vsock/vmci_transport.c
> +++ b/net/vmw_vsock/vmci_transport.c
> @@ -803,6 +803,11 @@ static void vmci_transport_handle_detach(struct sock *sk)
>  	struct vsock_sock *vsk;
>
>  	vsk = vsock_sk(sk);
> +
> +	/* Only handle our own sockets */
> +	if (vsk->transport != &vmci_transport)
> +		return;
> +
>  	if (!vmci_handle_is_invalid(vmci_trans(vsk)->qp_handle)) {
>  		sock_set_flag(sk, SOCK_DONE);
>
> --
> 2.35.1.723.g4982287a31-goog
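For context, the iterator under discussion walks every connected vsock
socket regardless of which transport owns it, which is why each callback
in the patch has to filter for itself. A rough sketch of
vsock_for_each_connected_socket() as it stood before this fix, lightly
simplified from net/vmw_vsock/af_vsock.c (details may differ slightly
between trees):

	void vsock_for_each_connected_socket(void (*fn)(struct sock *sk))
	{
		int i;

		spin_lock_bh(&vsock_table_lock);

		/* All connected sockets share one hash table with no
		 * per-transport separation, so fn() fires for sockets
		 * belonging to every registered transport.
		 */
		for (i = 0; i < ARRAY_SIZE(vsock_connected_table); i++) {
			struct vsock_sock *vsk;

			list_for_each_entry(vsk, &vsock_connected_table[i],
					    connected_table)
				fn(sk_vsock(vsk));
		}

		spin_unlock_bh(&vsock_table_lock);
	}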
* Re: [PATCH 1/2] vsock: each transport cycles only on its own sockets
From: Michael S. Tsirkin @ 2022-03-10 12:54 UTC
To: Jiyong Park
Cc: adelva, kvm, netdev, linux-kernel, virtualization, Stefan Hajnoczi, Jakub Kicinski, David S. Miller

On Thu, Mar 10, 2022 at 07:53:25AM -0500, Michael S. Tsirkin wrote:
> This message had
> In-Reply-To: <20220310124936.4179591-1-jiyong@google.com>
> in its header, but 20220310124936.4179591-2-jiyong@google.com was
> not sent to the list.
> Please don't do that. Instead, please write and send a proper
> cover letter. Thanks!

Also, please put the version in the subject, e.g. PATCH v2, and include
a full changelog in the cover letter. Thanks!

> On Thu, Mar 10, 2022 at 09:49:35PM +0900, Jiyong Park wrote:
> > [...]
* Re: [PATCH 1/2] vsock: each transport cycles only on its own sockets
From: Michael S. Tsirkin @ 2022-03-10 13:01 UTC
To: Jiyong Park
Cc: adelva, kvm, netdev, linux-kernel, virtualization, stefanha, kuba, davem

On Thu, Mar 10, 2022 at 09:54:24PM +0900, Jiyong Park wrote:
> When iterating over sockets using vsock_for_each_connected_socket, make
> sure that a transport filters out sockets that don't belong to the
> transport.
>
> [...]
>
> diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
> index 37f0b4274113..853ddac00d5b 100644
> --- a/drivers/vhost/vsock.c
> +++ b/drivers/vhost/vsock.c
> @@ -722,6 +722,10 @@ static void vhost_vsock_reset_orphans(struct sock *sk)
>  	 * executing.
>  	 */
>
> +	/* Only handle our own sockets */
> +	if (vsk->transport != &vhost_transport.transport)
> +		return;
> +
>  	/* If the peer is still valid, no need to reset connection */
>  	if (vhost_vsock_get(vsk->remote_addr.svm_cid))
>  		return;

We know this is incomplete though, so I think it's the wrong thing to
do when you backport, too. If all you worry about is breaking a binary
module interface, how about simply exporting a new function when you
backport? Thus you will have downstream both:

	void vsock_for_each_connected_socket(void (*fn)(struct sock *sk));

	void vsock_for_each_connected_socket_new(struct vsock_transport *transport,
						 void (*fn)(struct sock *sk));

and then upstream we can squash these two patches.

Hmm?

> [...]
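For reference, a minimal sketch of the squashed variant Michael is
describing: the transport pointer moves into the iterator itself, so the
per-callback checks added by this patch become unnecessary. The exact
signature is an assumption here; it is whatever the squashed patch
settles on:

	/* net/vmw_vsock/af_vsock.c - sketch only */
	void vsock_for_each_connected_socket(struct vsock_transport *transport,
					     void (*fn)(struct sock *sk))
	{
		int i;

		spin_lock_bh(&vsock_table_lock);

		for (i = 0; i < ARRAY_SIZE(vsock_connected_table); i++) {
			struct vsock_sock *vsk;

			list_for_each_entry(vsk, &vsock_connected_table[i],
					    connected_table) {
				/* Skip sockets owned by other transports */
				if (vsk->transport != transport)
					continue;

				fn(sk_vsock(vsk));
			}
		}

		spin_unlock_bh(&vsock_table_lock);
	}

Each caller would then pass its own transport; e.g. the vhost transport
would call:

	vsock_for_each_connected_socket(&vhost_transport.transport,
					vhost_vsock_reset_orphans);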
* Re: [PATCH 1/2] vsock: each transport cycles only on its own sockets
From: Michael S. Tsirkin @ 2022-03-10 13:16 UTC
To: Jiyong Park
Cc: adelva, kvm, netdev, linux-kernel, virtualization, stefanha, kuba, davem

On Thu, Mar 10, 2022 at 10:11:32PM +0900, Jiyong Park wrote:
> Hi Michael,
>
> Thanks for looking into this.
>
> Would you mind if I ask what you mean by incomplete? Is it because
> non-updated modules will still have the issue? Please elaborate.

What Stefano wrote:

	I think there is the same problem if the g2h driver will be
	unloaded (or a reset event is received after a VM migration),
	it will close all sockets of the nested h2g.

looks like this will keep happening even with your patch, though I
didn't try.

I also don't like how patch 1 adds code that patch 2 removes. Untidy.

Let's just squash and have downstreams worry about stable ABI.

> On Thu, Mar 10, 2022 at 10:02 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> > [...]
* Re: [PATCH 1/2] vsock: each transport cycles only on its own sockets
From: Stefano Garzarella @ 2022-03-10 13:18 UTC
To: Michael S. Tsirkin
Cc: adelva, Jiyong Park, kvm, netdev, linux-kernel, virtualization, stefanha, kuba, davem

On Thu, Mar 10, 2022 at 08:01:53AM -0500, Michael S. Tsirkin wrote:
>On Thu, Mar 10, 2022 at 09:54:24PM +0900, Jiyong Park wrote:
>> [...]
>
>We know this is incomplete though, so I think it's the wrong thing to
>do when you backport, too. If all you worry about is breaking a binary
>module interface, how about simply exporting a new function when you
>backport? Thus you will have downstream both:
>
>	void vsock_for_each_connected_socket(void (*fn)(struct sock *sk));
>
>	void vsock_for_each_connected_socket_new(struct vsock_transport *transport,
>						 void (*fn)(struct sock *sk));
>
>and then upstream we can squash these two patches.
>
>Hmm?

Yep, reading more of the kernel documentation [1], it seems that
upstream we don't worry about this.

I agree with Michael: it's better to just have the final patch
upstream, and downstream will be handled accordingly.

This should also make it easier to backport into stable branches future
patches that depend on this change.

Thanks,
Stefano

[1] https://www.kernel.org/doc/Documentation/process/stable-api-nonsense.rst
* Re: [PATCH 1/2] vsock: each transport cycles only on its own sockets
From: kernel test robot @ 2022-03-11 2:55 UTC
To: Jiyong Park, sgarzare, stefanha, mst, jasowang, davem, kuba
Cc: adelva, Jiyong Park, kbuild-all, kvm, netdev, linux-kernel, virtualization

Hi Jiyong,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on 3bf7edc84a9eb4007dd9a0cb8878a7e1d5ec6a3b]

url:      https://github.com/0day-ci/linux/commits/Jiyong-Park/vsock-cycle-only-on-its-own-socket/20220310-205638
base:     3bf7edc84a9eb4007dd9a0cb8878a7e1d5ec6a3b
config:   x86_64-rhel-8.3 (https://download.01.org/0day-ci/archive/20220311/202203111023.SPYFGn7W-lkp@intel.com/config)
compiler: gcc-9 (Ubuntu 9.4.0-1ubuntu1~20.04) 9.4.0

reproduce (this is a W=1 build):

	# https://github.com/0day-ci/linux/commit/6219060e1d706d7055fb0829b3bf23c5ae84790e
	git remote add linux-review https://github.com/0day-ci/linux
	git fetch --no-tags linux-review Jiyong-Park/vsock-cycle-only-on-its-own-socket/20220310-205638
	git checkout 6219060e1d706d7055fb0829b3bf23c5ae84790e
	# save the config file to linux build tree
	mkdir build_dir
	make W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash net/vmw_vsock/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   net/vmw_vsock/vmci_transport.c: In function 'vmci_transport_handle_detach':
>> net/vmw_vsock/vmci_transport.c:808:25: error: 'vmci_transport' undeclared (first use in this function)
     808 |  if (vsk->transport != &vmci_transport)
         |                        ^~~~~~~~~~~~~~
   net/vmw_vsock/vmci_transport.c:808:25: note: each undeclared identifier is reported only once for each function it appears in

vim +/vmci_transport +808 net/vmw_vsock/vmci_transport.c

   800
   801	static void vmci_transport_handle_detach(struct sock *sk)
   802	{
   803		struct vsock_sock *vsk;
   804
   805		vsk = vsock_sk(sk);
   806
   807		/* Only handle our own sockets */
 > 808		if (vsk->transport != &vmci_transport)
   809			return;
   810
   811		if (!vmci_handle_is_invalid(vmci_trans(vsk)->qp_handle)) {
   812			sock_set_flag(sk, SOCK_DONE);
   813
   814			/* On a detach the peer will not be sending or receiving
   815			 * anymore.
   816			 */
   817			vsk->peer_shutdown = SHUTDOWN_MASK;
   818
   819			/* We should not be sending anymore since the peer won't be
   820			 * there to receive, but we can still receive if there is data
   821			 * left in our consume queue. If the local endpoint is a host,
   822			 * we can't call vsock_stream_has_data, since that may block,
   823			 * but a host endpoint can't read data once the VM has
   824			 * detached, so there is no available data in that case.
   825			 */
   826			if (vsk->local_addr.svm_cid == VMADDR_CID_HOST ||
   827			    vsock_stream_has_data(vsk) <= 0) {
   828				if (sk->sk_state == TCP_SYN_SENT) {
   829					/* The peer may detach from a queue pair while
   830					 * we are still in the connecting state, i.e.,
   831					 * if the peer VM is killed after attaching to
   832					 * a queue pair, but before we complete the
   833					 * handshake. In that case, we treat the detach
   834					 * event like a reset.
   835					 */
   836
   837					sk->sk_state = TCP_CLOSE;
   838					sk->sk_err = ECONNRESET;
   839					sk_error_report(sk);
   840					return;
   841				}
   842				sk->sk_state = TCP_CLOSE;
   843			}
   844			sk->sk_state_change(sk);
   845		}
   846	}

---
0-DAY CI Kernel Test Service
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
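The build failure is an ordering problem rather than a logic problem:
vmci_transport_handle_detach() at line 808 now references
vmci_transport, but that structure is only defined further down in
vmci_transport.c, so the identifier is unknown at the point of use. One
plausible fix - mirroring the forward declaration the patch already adds
in virtio_transport.c - is a tentative declaration near the top of the
file (a sketch; the next revision may resolve it differently):

	/* Forward declaration; the full initializer appears later in
	 * the file.
	 */
	static struct vsock_transport vmci_transport;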