virtualization.lists.linux-foundation.org archive mirror
* Re: [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support
       [not found] ` <59b720a8-154f-ad29-e7a9-b86b69408078@kaspersky.com>
@ 2021-06-11 12:25   ` Stefano Garzarella
       [not found]     ` <10a64ff5-86df-85f3-5cf2-2fa7e8ddc294@kaspersky.com>
  0 siblings, 1 reply; 9+ messages in thread
From: Stefano Garzarella @ 2021-06-11 12:25 UTC (permalink / raw)
  To: Arseny Krasnov
  Cc: Andra Paraschiv, kvm@vger.kernel.org, Michael S. Tsirkin,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, oxffffaa@gmail.com,
	Norbert Slusarek, Stefan Hajnoczi, Colin Ian King, Jakub Kicinski,
	David S. Miller

Hi Arseny,

On Fri, Jun 11, 2021 at 02:17:00PM +0300, Arseny Krasnov wrote:
>
>On 11.06.2021 14:07, Arseny Krasnov wrote:
>> 	This patchset implements SOCK_SEQPACKET support for the virtio
>> transport.
>> 	Since SOCK_SEQPACKET guarantees that record boundaries are
>> preserved, a new bit was added to the 'flags' field: SEQ_EOR. This
>> bit is set to 1 in the last RW packet of a message.
>> 	Since packets of one socket are reordered neither at the vsock
>> nor at the vhost transport layer, this bit allows the receiver to
>> restore the original message. If the user's buffer is smaller than
>> the message length, all data beyond the buffer size is dropped.
>> 	The maximum length of a datagram is limited by the
>> 'peer_buf_alloc' value.
>> 	The implementation also supports the 'MSG_TRUNC' flag.
>> 	Tests are also implemented.
>>
>> 	Thanks to stsp2@yandex.ru for the encouragement and initial
>> design recommendations.
>>
>>  Arseny Krasnov (18):
>>   af_vsock: update functions for connectible socket
>>   af_vsock: separate wait data loop
>>   af_vsock: separate receive data loop
>>   af_vsock: implement SEQPACKET receive loop
>>   af_vsock: implement send logic for SEQPACKET
>>   af_vsock: rest of SEQPACKET support
>>   af_vsock: update comments for stream sockets
>>   virtio/vsock: set packet's type in virtio_transport_send_pkt_info()
>>   virtio/vsock: simplify credit update function API
>>   virtio/vsock: defines and constants for SEQPACKET
>>   virtio/vsock: dequeue callback for SOCK_SEQPACKET
>>   virtio/vsock: add SEQPACKET receive logic
>>   virtio/vsock: rest of SOCK_SEQPACKET support
>>   virtio/vsock: enable SEQPACKET for transport
>>   vhost/vsock: enable SEQPACKET for transport
>>   vsock/loopback: enable SEQPACKET for transport
>>   vsock_test: add SOCK_SEQPACKET tests
>>   virtio/vsock: update trace event for SEQPACKET
>>
>>  drivers/vhost/vsock.c                              |  56 ++-
>>  include/linux/virtio_vsock.h                       |  10 +
>>  include/net/af_vsock.h                             |   8 +
>>  .../trace/events/vsock_virtio_transport_common.h   |   5 +-
>>  include/uapi/linux/virtio_vsock.h                  |   9 +
>>  net/vmw_vsock/af_vsock.c                           | 464 ++++++++++++------
>>  net/vmw_vsock/virtio_transport.c                   |  26 ++
>>  net/vmw_vsock/virtio_transport_common.c            | 179 +++++++-
>>  net/vmw_vsock/vsock_loopback.c                     |  12 +
>>  tools/testing/vsock/util.c                         |  32 +-
>>  tools/testing/vsock/util.h                         |   3 +
>>  tools/testing/vsock/vsock_test.c                   | 116 ++++++
>>  12 files changed, 730 insertions(+), 190 deletions(-)
>>
>>  v10 -> v11:
>>  General changelog:
>>   - data is now copied to the user's buffer only when
>>     the whole message has been received.
>>   - the reader is woken up when an EOR packet is received.
>>   - if the read syscall is interrupted by a signal or a
>>     timeout, an error is returned (not 0).
>>
>>  Per patch changelog:
>>   see every patch after '---' line.
>So here is the new version for review, with the updates discussed earlier :)

Thanks, I'll review next week, but I suggest again that you split this 
into two series, since patchwork (and the netdev maintainers) are not 
happy with an 18-patch series.

If you still prefer to keep them together during development, then 
please use the RFC tag.

Also, did you take a look at the netdev FAQ that I linked last time?
I don't see the net-next tag...

Thanks,
Stefano

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


* Re: [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support
       [not found]     ` <10a64ff5-86df-85f3-5cf2-2fa7e8ddc294@kaspersky.com>
@ 2021-06-11 14:57       ` Stefano Garzarella
  0 siblings, 0 replies; 9+ messages in thread
From: Stefano Garzarella @ 2021-06-11 14:57 UTC (permalink / raw)
  To: Arseny Krasnov
  Cc: Andra Paraschiv, kvm@vger.kernel.org, Michael S. Tsirkin,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, oxffffaa@gmail.com,
	Norbert Slusarek, Stefan Hajnoczi, Colin Ian King, Jakub Kicinski,
	David S. Miller

On Fri, Jun 11, 2021 at 05:39:01PM +0300, Arseny Krasnov wrote:
>
>On 11.06.2021 15:25, Stefano Garzarella wrote:
>> Hi Arseny,
>>
>> On Fri, Jun 11, 2021 at 02:17:00PM +0300, Arseny Krasnov wrote:
>>> On 11.06.2021 14:07, Arseny Krasnov wrote:
>>>> [...]
>>> So here is the new version for review, with the updates discussed earlier :)
>> Thanks, I'll review next week, but I suggest again that you split this
>> into two series, since patchwork (and the netdev maintainers) are not
>> happy with an 18-patch series.
>>
>> If you still prefer to keep them together during development, then
>> please use the RFC tag.
>>
>> Also did you take a look at the FAQ for netdev that I linked last 
>> time?
>> I don't see the net-next tag...
>
>I didn't use the net-next tag because two of the first seven patches
>(which were intended to be sent to netdev), 0004 and 0006, were
>changed in this patchset (because of the recent ideas about queueing
>the whole message). So I removed the R-b lines, and now there is no
>point in using the net-next tag for the first patches. Once they are
>R-b'd again, I'll send them

Okay, in that case it's better to use the RFC tag.

>to netdev with that tag, and we can continue discussing the second
>part of the patches (the virtio-specific ones).

Don't worry about it for now. You can do it for the next round, but I 
think all the patches will go through netdev, and it would be better to 
split them into 2 series, both with the net-next tag.

Thanks,
Stefano



* Re: [PATCH v11 11/18] virtio/vsock: dequeue callback for SOCK_SEQPACKET
       [not found] ` <20210611111241.3652274-1-arseny.krasnov@kaspersky.com>
@ 2021-06-18 13:44   ` Stefano Garzarella
  2021-06-18 13:51     ` Michael S. Tsirkin
       [not found]     ` <bb323125-f802-1d16-7530-6e4f4abb00a6@kaspersky.com>
  0 siblings, 2 replies; 9+ messages in thread
From: Stefano Garzarella @ 2021-06-18 13:44 UTC (permalink / raw)
  To: Arseny Krasnov
  Cc: Andra Paraschiv, kvm, Michael S. Tsirkin, netdev, linux-kernel,
	virtualization, oxffffaa, Norbert Slusarek, Stefan Hajnoczi,
	Colin Ian King, Jakub Kicinski, David S. Miller

Hi Arseny,
the series looks great, I have just a question below about 
seqpacket_dequeue.

I also sent a couple of simple fixes; it would be great if you could 
review them: 
https://lore.kernel.org/netdev/20210618133526.300347-1-sgarzare@redhat.com/


On Fri, Jun 11, 2021 at 02:12:38PM +0300, Arseny Krasnov wrote:
>The callback fetches RW packets from the socket's rx queue until the
>whole record is copied (if the user's buffer is full, the user is not
>woken up). This is done to avoid stalling the sender: if we woke the
>user up and it left the syscall, nobody would send a credit update for
>the rest of the record, and the sender would wait until the receiver
>entered the read syscall again. So if the user's buffer is full, we
>just send a credit update and drop the data.
>
>Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
>---
> v10 -> v11:
> 1) 'msg_count' field added to count current number of EORs.
> 2) 'msg_ready' argument removed from callback.
> 3) If 'memcpy_to_msg()' fails during the copy loop, there will be
>    no further attempts to copy data; the rest of the record is freed.
>
> include/linux/virtio_vsock.h            |  5 ++
> net/vmw_vsock/virtio_transport_common.c | 84 +++++++++++++++++++++++++
> 2 files changed, 89 insertions(+)
>
>diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
>index dc636b727179..1d9a302cb91d 100644
>--- a/include/linux/virtio_vsock.h
>+++ b/include/linux/virtio_vsock.h
>@@ -36,6 +36,7 @@ struct virtio_vsock_sock {
> 	u32 rx_bytes;
> 	u32 buf_alloc;
> 	struct list_head rx_queue;
>+	u32 msg_count;
> };
>
> struct virtio_vsock_pkt {
>@@ -80,6 +81,10 @@ virtio_transport_dgram_dequeue(struct vsock_sock *vsk,
> 			       struct msghdr *msg,
> 			       size_t len, int flags);
>
>+ssize_t
>+virtio_transport_seqpacket_dequeue(struct vsock_sock *vsk,
>+				   struct msghdr *msg,
>+				   int flags);
> s64 virtio_transport_stream_has_data(struct vsock_sock *vsk);
> s64 virtio_transport_stream_has_space(struct vsock_sock *vsk);
>
>diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
>index ad0d34d41444..1e1df19ec164 100644
>--- a/net/vmw_vsock/virtio_transport_common.c
>+++ b/net/vmw_vsock/virtio_transport_common.c
>@@ -393,6 +393,78 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
> 	return err;
> }
>
>+static int virtio_transport_seqpacket_do_dequeue(struct vsock_sock *vsk,
>+						 struct msghdr *msg,
>+						 int flags)
>+{
>+	struct virtio_vsock_sock *vvs = vsk->trans;
>+	struct virtio_vsock_pkt *pkt;
>+	int dequeued_len = 0;
>+	size_t user_buf_len = msg_data_left(msg);
>+	bool copy_failed = false;
>+	bool msg_ready = false;
>+
>+	spin_lock_bh(&vvs->rx_lock);
>+
>+	if (vvs->msg_count == 0) {
>+		spin_unlock_bh(&vvs->rx_lock);
>+		return 0;
>+	}
>+
>+	while (!msg_ready) {
>+		pkt = list_first_entry(&vvs->rx_queue, struct virtio_vsock_pkt, list);
>+
>+		if (!copy_failed) {
>+			size_t pkt_len;
>+			size_t bytes_to_copy;
>+
>+			pkt_len = (size_t)le32_to_cpu(pkt->hdr.len);
>+			bytes_to_copy = min(user_buf_len, pkt_len);
>+
>+			if (bytes_to_copy) {
>+				int err;
>+
>+				/* sk_lock is held by caller so no one else can dequeue.
>+				 * Unlock rx_lock since memcpy_to_msg() may sleep.
>+				 */
>+				spin_unlock_bh(&vvs->rx_lock);
>+
>+				err = memcpy_to_msg(msg, pkt->buf, bytes_to_copy);
>+				if (err) {
>+					/* Copy of message failed, set flag to skip
>+					 * copy path for rest of fragments. Rest of
>+					 * fragments will be freed without copy.
>+					 */
>+					copy_failed = true;
>+					dequeued_len = err;

If we fail to copy the message, we discard the entire packet.
Is that acceptable from the user's point of view, or should we leave 
the packet in the queue so the user can retry, maybe with a different 
buffer?

Then we could remove the packets only once we have successfully copied 
all the fragments.

I'm not sure this makes sense; maybe it's better to also check other 
implementations :-)

Thanks,
Stefano

>+				} else {
>+					user_buf_len -= bytes_to_copy;
>+				}
>+
>+				spin_lock_bh(&vvs->rx_lock);
>+			}
>+
>+			if (dequeued_len >= 0)
>+				dequeued_len += pkt_len;
>+		}
>+
>+		if (le32_to_cpu(pkt->hdr.flags) & VIRTIO_VSOCK_SEQ_EOR) {
>+			msg_ready = true;
>+			vvs->msg_count--;
>+		}
>+
>+		virtio_transport_dec_rx_pkt(vvs, pkt);
>+		list_del(&pkt->list);
>+		virtio_transport_free_pkt(pkt);
>+	}
>+
>+	spin_unlock_bh(&vvs->rx_lock);
>+
>+	virtio_transport_send_credit_update(vsk);
>+
>+	return dequeued_len;
>+}
>+
> ssize_t
> virtio_transport_stream_dequeue(struct vsock_sock *vsk,
> 				struct msghdr *msg,
>@@ -405,6 +477,18 @@ virtio_transport_stream_dequeue(struct vsock_sock *vsk,
> }
> EXPORT_SYMBOL_GPL(virtio_transport_stream_dequeue);
>
>+ssize_t
>+virtio_transport_seqpacket_dequeue(struct vsock_sock *vsk,
>+				   struct msghdr *msg,
>+				   int flags)
>+{
>+	if (flags & MSG_PEEK)
>+		return -EOPNOTSUPP;
>+
>+	return virtio_transport_seqpacket_do_dequeue(vsk, msg, flags);
>+}
>+EXPORT_SYMBOL_GPL(virtio_transport_seqpacket_dequeue);
>+
> int
> virtio_transport_dgram_dequeue(struct vsock_sock *vsk,
> 			       struct msghdr *msg,
>-- 
>2.25.1
>



* Re: [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support
       [not found] ` <162344521373.30951.11000282953901961373.git-patchwork-notify@kernel.org>
@ 2021-06-18 13:49   ` Michael S. Tsirkin
  0 siblings, 0 replies; 9+ messages in thread
From: Michael S. Tsirkin @ 2021-06-18 13:49 UTC (permalink / raw)
  To: patchwork-bot+netdevbpf
  Cc: andraprs, kvm, netdev, linux-kernel, virtualization, oxffffaa,
	nslusarek, stefanha, colin.king, kuba, Arseny Krasnov, davem

On Fri, Jun 11, 2021 at 09:00:13PM +0000, patchwork-bot+netdevbpf@kernel.org wrote:
> Hello:
> 
> This series was applied to netdev/net-next.git (refs/heads/master):
> 
> On Fri, 11 Jun 2021 14:07:40 +0300 you wrote:
> > This patchset implements SOCK_SEQPACKET support for the virtio
> > transport.
> > 	Since SOCK_SEQPACKET guarantees that record boundaries are
> > preserved, a new bit was added to the 'flags' field: SEQ_EOR. This
> > bit is set to 1 in the last RW packet of a message.
> > 	Since packets of one socket are reordered neither at the vsock
> > nor at the vhost transport layer, this bit allows the receiver to
> > restore the original message. If the user's buffer is smaller than
> > the message length, all data beyond the buffer size is dropped.
> > 	The maximum length of a datagram is limited by the
> > 'peer_buf_alloc' value.
> > 	The implementation also supports the 'MSG_TRUNC' flag.
> > 	Tests are also implemented.
> > 
> > [...]
> 
> Here is the summary with links:
>   - [v11,01/18] af_vsock: update functions for connectible socket
>     https://git.kernel.org/netdev/net-next/c/a9e29e5511b9
>   - [v11,02/18] af_vsock: separate wait data loop
>     https://git.kernel.org/netdev/net-next/c/b3f7fd54881b
>   - [v11,03/18] af_vsock: separate receive data loop
>     https://git.kernel.org/netdev/net-next/c/19c1b90e1979
>   - [v11,04/18] af_vsock: implement SEQPACKET receive loop
>     https://git.kernel.org/netdev/net-next/c/9942c192b256
>   - [v11,05/18] af_vsock: implement send logic for SEQPACKET
>     https://git.kernel.org/netdev/net-next/c/fbe70c480796
>   - [v11,06/18] af_vsock: rest of SEQPACKET support
>     https://git.kernel.org/netdev/net-next/c/0798e78b102b
>   - [v11,07/18] af_vsock: update comments for stream sockets
>     https://git.kernel.org/netdev/net-next/c/8cb48554ad82
>   - [v11,08/18] virtio/vsock: set packet's type in virtio_transport_send_pkt_info()
>     https://git.kernel.org/netdev/net-next/c/b93f8877c1f2
>   - [v11,09/18] virtio/vsock: simplify credit update function API
>     https://git.kernel.org/netdev/net-next/c/c10844c59799
>   - [v11,10/18] virtio/vsock: defines and constants for SEQPACKET
>     https://git.kernel.org/netdev/net-next/c/f07b2a5b04d4
>   - [v11,11/18] virtio/vsock: dequeue callback for SOCK_SEQPACKET
>     https://git.kernel.org/netdev/net-next/c/44931195a541
>   - [v11,12/18] virtio/vsock: add SEQPACKET receive logic
>     https://git.kernel.org/netdev/net-next/c/e4b1ef152f53
>   - [v11,13/18] virtio/vsock: rest of SOCK_SEQPACKET support
>     https://git.kernel.org/netdev/net-next/c/9ac841f5e9f2
>   - [v11,14/18] virtio/vsock: enable SEQPACKET for transport
>     https://git.kernel.org/netdev/net-next/c/53efbba12cc7
>   - [v11,15/18] vhost/vsock: support SEQPACKET for transport
>     https://git.kernel.org/netdev/net-next/c/ced7b713711f
>   - [v11,16/18] vsock/loopback: enable SEQPACKET for transport
>     https://git.kernel.org/netdev/net-next/c/6e90a57795aa
>   - [v11,17/18] vsock_test: add SOCK_SEQPACKET tests
>     https://git.kernel.org/netdev/net-next/c/41b792d7a86d
>   - [v11,18/18] virtio/vsock: update trace event for SEQPACKET
>     https://git.kernel.org/netdev/net-next/c/184039eefeae

Hmm, so the virtio part was merged before the spec was ready.
What's the plan now?


> You are awesome, thank you!
> --
> Deet-doot-dot, I am a bot.
> https://korg.docs.kernel.org/patchwork/pwbot.html
> 



* Re: [PATCH v11 11/18] virtio/vsock: dequeue callback for SOCK_SEQPACKET
  2021-06-18 13:44   ` [PATCH v11 11/18] virtio/vsock: dequeue callback for SOCK_SEQPACKET Stefano Garzarella
@ 2021-06-18 13:51     ` Michael S. Tsirkin
  2021-06-18 14:44       ` Stefano Garzarella
       [not found]     ` <bb323125-f802-1d16-7530-6e4f4abb00a6@kaspersky.com>
  1 sibling, 1 reply; 9+ messages in thread
From: Michael S. Tsirkin @ 2021-06-18 13:51 UTC (permalink / raw)
  To: Stefano Garzarella
  Cc: Andra Paraschiv, kvm, netdev, linux-kernel, virtualization,
	oxffffaa, Norbert Slusarek, Stefan Hajnoczi, Colin Ian King,
	Jakub Kicinski, Arseny Krasnov, David S. Miller

On Fri, Jun 18, 2021 at 03:44:23PM +0200, Stefano Garzarella wrote:
> Hi Arseny,
> the series looks great, I have just a question below about
> seqpacket_dequeue.
> 
> I also sent a couple of simple fixes; it would be great if you could
> review them:
> https://lore.kernel.org/netdev/20210618133526.300347-1-sgarzare@redhat.com/

So given this was picked into net-next, what's the plan? Just make the
spec follow the code? We can wait and see; if there are issues with the
spec, just remember to mask the feature before release.

-- 
MST



* Re: [PATCH v11 11/18] virtio/vsock: dequeue callback for SOCK_SEQPACKET
  2021-06-18 13:51     ` Michael S. Tsirkin
@ 2021-06-18 14:44       ` Stefano Garzarella
  0 siblings, 0 replies; 9+ messages in thread
From: Stefano Garzarella @ 2021-06-18 14:44 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Andra Paraschiv, kvm, netdev, linux-kernel, virtualization,
	oxffffaa, Norbert Slusarek, Stefan Hajnoczi, Colin Ian King,
	Jakub Kicinski, Arseny Krasnov, David S. Miller

On Fri, Jun 18, 2021 at 09:51:44AM -0400, Michael S. Tsirkin wrote:
>On Fri, Jun 18, 2021 at 03:44:23PM +0200, Stefano Garzarella wrote:
>> Hi Arseny,
>> the series looks great, I have just a question below about
>> seqpacket_dequeue.
>>
>> I also sent a couple of simple fixes; it would be great if you could
>> review them:
>> https://lore.kernel.org/netdev/20210618133526.300347-1-sgarzare@redhat.com/
>
>So given this was picked into net-next, what's the plan? Just make the
>spec follow the code? We can wait and see; if there are issues with the
>spec, just remember to mask the feature before release.

Yep, the spec patches were already posted, but not merged yet: 
https://lists.oasis-open.org/archives/virtio-comment/202105/msg00017.html

The changes are quite small, and they are aligned with the current 
implementation.

Anyway, I fully agree with you about waiting and masking it before the 
v5.14 release if there are any issues.

Thanks,
Stefano



* Re: [PATCH v11 11/18] virtio/vsock: dequeue callback for SOCK_SEQPACKET
       [not found]     ` <bb323125-f802-1d16-7530-6e4f4abb00a6@kaspersky.com>
@ 2021-06-18 15:55       ` Stefano Garzarella
       [not found]         ` <650673dc-8b29-657e-5bbd-2cc974628ec9@kaspersky.com>
  0 siblings, 1 reply; 9+ messages in thread
From: Stefano Garzarella @ 2021-06-18 15:55 UTC (permalink / raw)
  To: Arseny Krasnov
  Cc: Andra Paraschiv, kvm@vger.kernel.org, Michael S. Tsirkin,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, oxffffaa@gmail.com,
	Norbert Slusarek, Stefan Hajnoczi, Colin Ian King, Jakub Kicinski,
	David S. Miller

On Fri, Jun 18, 2021 at 06:04:37PM +0300, Arseny Krasnov wrote:
>
>On 18.06.2021 16:44, Stefano Garzarella wrote:
>> Hi Arseny,
>> the series looks great, I have just a question below about
>> seqpacket_dequeue.
>>
>> I also sent a couple of simple fixes; it would be great if you could
>> review them:
>> https://lore.kernel.org/netdev/20210618133526.300347-1-sgarzare@redhat.com/
>>
>>
>> On Fri, Jun 11, 2021 at 02:12:38PM +0300, Arseny Krasnov wrote:
>>> [...]
>> If we fail to copy the message, we discard the entire packet.
>> Is that acceptable from the user's point of view, or should we leave
>> the packet in the queue so the user can retry, maybe with a different
>> buffer?
>>
>> Then we could remove the packets only once we have successfully
>> copied all the fragments.
>>
>> I'm not sure this makes sense; maybe it's better to also check other
>> implementations :-)
>>
>> Thanks,
>> Stefano
>
>Understood, I'll check it at the weekend; anyway, I think it is
>not critical for the implementation.

Yep, I agree.

>
>
>I have another question: maybe it is worth exploring an approach
>where packets are not queued until the whole message is received,
>but are copied to the user's buffer as they arrive, thus freeing
>memory (like the previous implementation, of course with a solution
>to the problem where part of a message is still in the queue after
>the reader was woken by a timeout or signal).
>
>I think that would be better because, in the current version, the
>sender may set 'peer_buf_alloc' to, for example, 1MB, so at the
>receiver we get 1MB of 'kmalloc()' memory allocated, even though we
>have the user's buffer to copy the data into or drop it (if the
>user's buffer is full). This approach wouldn't change the spec
>(e.g. no message id or SEQ_BEGIN would be added).
>
>What do you think?

Yep, I see your point and it would be great, but I think the main issue 
to fix is how to handle a signal arriving while we are waiting for the 
other fragments, since the peer can take an unspecified amount of time 
to send them.

Note that 'peer_buf_alloc' in the sender is the value advertised by the 
receiver, so if the receiver doesn't want to allocate 1MB, it can 
advertise a smaller buffer size.

Thanks,
Stefano



* Re: [PATCH v11 11/18] virtio/vsock: dequeue callback for SOCK_SEQPACKET
       [not found]         ` <650673dc-8b29-657e-5bbd-2cc974628ec9@kaspersky.com>
@ 2021-06-18 16:25           ` Stefano Garzarella
       [not found]             ` <31f58b17-02e6-4246-5ad8-7e8d7892ecb7@kaspersky.com>
  0 siblings, 1 reply; 9+ messages in thread
From: Stefano Garzarella @ 2021-06-18 16:25 UTC (permalink / raw)
  To: Arseny Krasnov
  Cc: Andra Paraschiv, kvm@vger.kernel.org, Michael S. Tsirkin,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, oxffffaa@gmail.com,
	Norbert Slusarek, Stefan Hajnoczi, Colin Ian King, Jakub Kicinski,
	David S. Miller

On Fri, Jun 18, 2021 at 07:08:30PM +0300, Arseny Krasnov wrote:
>
>On 18.06.2021 18:55, Stefano Garzarella wrote:
>> On Fri, Jun 18, 2021 at 06:04:37PM +0300, Arseny Krasnov wrote:
>>> On 18.06.2021 16:44, Stefano Garzarella wrote:
>>>> Hi Arseny,
>>>> the series looks great, I have just a question below about
>>>> seqpacket_dequeue.
>>>>
>>>> I also sent a couple of simple fixes; it would be great if you could
>>>> review them:
>>>> https://lore.kernel.org/netdev/20210618133526.300347-1-sgarzare@redhat.com/
>>>>
>>>>
>>>> On Fri, Jun 11, 2021 at 02:12:38PM +0300, Arseny Krasnov wrote:
>>>>> [...]
>>>>> +				if (err) {
>>>>> +					/* Copy of message failed, set flag to skip
>>>>> +					 * copy path for rest of fragments. Rest of
>>>>> +					 * fragments will be freed without copy.
>>>>> +					 */
>>>>> +					copy_failed = true;
>>>>> +					dequeued_len = err;
>>>> If we fail to copy the message, we will discard the entire packet.
>>>> Is that acceptable from the user's point of view, or should we leave
>>>> the packet in the queue so the user can retry, maybe with a different
>>>> buffer?
>>>>
>>>> Then we could remove the packets only once we have successfully
>>>> copied all the fragments.
>>>>
>>>> I'm not sure it makes sense; maybe it's better to also check other
>>>> implementations :-)
>>>>
>>>> Thanks,
>>>> Stefano
>>> Understood, I'll check it over the weekend; anyway, I think it is
>>> not critical for the implementation.
>> Yep, I agree.
>>
>>>
>>> I have another question: maybe it is useful to explore an approach
>>> where packets are not queued until the whole message is received,
>>> but are copied to the user's buffer as they arrive, thus freeing
>>> memory (like the previous implementation, of course with a solution
>>> to the problem where part of a message is still in the queue after
>>> the reader was woken by a timeout or signal).
>>>
>>> I think it is better because, in the current version, the sender may
>>> set 'peer_buf_alloc' to, for example, 1MB, so at the receiver we get
>>> 1MB of kmalloc() memory allocated, even though we have the user's
>>> buffer to copy the data into or to drop it (if the user's buffer is
>>> full). This way won't change the spec (e.g. no message id or
>>> SEQ_BEGIN will be added).
>>>
>>> What do you think?
>> Yep, I see your point and it would be great, but I think the main
>> issue to fix is how to handle a signal while we are waiting for other
>> fragments, since the other peer can take an unspecified time to send
>> them.
>
>What about a transport callback, something like 'seqpacket_drain()'
>or 'seqpacket_drop_curr()'? When we get a signal or timeout, we
>notify the transport to drop the current message. In the virtio case
>this will set a special flag in the transport, so on the next dequeue
>this flag is checked and, if it is set, we drop all packets until an
>EOR is found. Then we can copy the untouched new record.
>

But in this way, we will lose the entire message.

Is it acceptable for seqpacket?

Stefano

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [MASSMAIL KLMS] Re: [PATCH v11 11/18] virtio/vsock: dequeue callback for SOCK_SEQPACKET
       [not found]               ` <b27d3fd1-fa8a-97ff-9035-cf3f525d5866@kaspersky.com>
@ 2021-06-21 10:23                 ` Stefano Garzarella
  0 siblings, 0 replies; 9+ messages in thread
From: Stefano Garzarella @ 2021-06-21 10:23 UTC (permalink / raw)
  To: Arseny Krasnov
  Cc: Andra Paraschiv, kvm@vger.kernel.org, Michael S. Tsirkin,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, oxffffaa@gmail.com,
	Norbert Slusarek, Stefan Hajnoczi, Colin Ian King, Jakub Kicinski,
	David S. Miller

On Mon, Jun 21, 2021 at 09:55:13AM +0300, Arseny Krasnov wrote:
>
>On 18.06.2021 19:26, Arseny Krasnov wrote:
>> On 18.06.2021 19:25, Stefano Garzarella wrote:
>>> On Fri, Jun 18, 2021 at 07:08:30PM +0300, Arseny Krasnov wrote:
>>>> On 18.06.2021 18:55, Stefano Garzarella wrote:
>>>>> On Fri, Jun 18, 2021 at 06:04:37PM +0300, Arseny Krasnov wrote:
>>>>>> On 18.06.2021 16:44, Stefano Garzarella wrote:
>>>>>>> Hi Arseny,
>>>>>>> the series looks great, I have just a question below about
>>>>>>> seqpacket_dequeue.
>>>>>>>
>>>>>>> I also sent a couple of simple fixes; it would be great if you
>>>>>>> could review them:
>>>>>>> https://lore.kernel.org/netdev/20210618133526.300347-1-sgarzare@redhat.com/
>>>>>>>
>>>>>>>
>>>>>>> On Fri, Jun 11, 2021 at 02:12:38PM +0300, Arseny Krasnov wrote:
>>>>>>>> The callback fetches RW packets from the socket's rx queue until the
>>>>>>>> whole record is copied (if the user's buffer is full, the user is not
>>>>>>>> woken up). This is done so as not to stall the sender: if we wake up
>>>>>>>> the user and it leaves the syscall, nobody will send a credit update
>>>>>>>> for the rest of the record, and the sender will wait for the next
>>>>>>>> entry into the read syscall on the receiver's side. So if the user's
>>>>>>>> buffer is full, we just send a credit update and drop the data.
>>>>>>>>
>>>>>>>> Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
>>>>>>>> ---
>>>>>>>> v10 -> v11:
>>>>>>>> 1) 'msg_count' field added to count current number of EORs.
>>>>>>>> 2) 'msg_ready' argument removed from callback.
>>>>>>>> 3) If 'memcpy_to_msg()' failed during copy loop, there will be
>>>>>>>>    no next attempts to copy data, rest of record will be freed.
>>>>>>>>
>>>>>>>> include/linux/virtio_vsock.h            |  5 ++
>>>>>>>> net/vmw_vsock/virtio_transport_common.c | 84 +++++++++++++++++++++++++
>>>>>>>> 2 files changed, 89 insertions(+)
>>>>>>>>
>>>>>>>> diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
>>>>>>>> index dc636b727179..1d9a302cb91d 100644
>>>>>>>> --- a/include/linux/virtio_vsock.h
>>>>>>>> +++ b/include/linux/virtio_vsock.h
>>>>>>>> @@ -36,6 +36,7 @@ struct virtio_vsock_sock {
>>>>>>>> 	u32 rx_bytes;
>>>>>>>> 	u32 buf_alloc;
>>>>>>>> 	struct list_head rx_queue;
>>>>>>>> +	u32 msg_count;
>>>>>>>> };
>>>>>>>>
>>>>>>>> struct virtio_vsock_pkt {
>>>>>>>> @@ -80,6 +81,10 @@ virtio_transport_dgram_dequeue(struct vsock_sock *vsk,
>>>>>>>> 			       struct msghdr *msg,
>>>>>>>> 			       size_t len, int flags);
>>>>>>>>
>>>>>>>> +ssize_t
>>>>>>>> +virtio_transport_seqpacket_dequeue(struct vsock_sock *vsk,
>>>>>>>> +				   struct msghdr *msg,
>>>>>>>> +				   int flags);
>>>>>>>> s64 virtio_transport_stream_has_data(struct vsock_sock *vsk);
>>>>>>>> s64 virtio_transport_stream_has_space(struct vsock_sock *vsk);
>>>>>>>>
>>>>>>>> diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
>>>>>>>> index ad0d34d41444..1e1df19ec164 100644
>>>>>>>> --- a/net/vmw_vsock/virtio_transport_common.c
>>>>>>>> +++ b/net/vmw_vsock/virtio_transport_common.c
>>>>>>>> @@ -393,6 +393,78 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
>>>>>>>> 	return err;
>>>>>>>> }
>>>>>>>>
>>>>>>>> +static int virtio_transport_seqpacket_do_dequeue(struct vsock_sock *vsk,
>>>>>>>> +						 struct msghdr *msg,
>>>>>>>> +						 int flags)
>>>>>>>> +{
>>>>>>>> +	struct virtio_vsock_sock *vvs = vsk->trans;
>>>>>>>> +	struct virtio_vsock_pkt *pkt;
>>>>>>>> +	int dequeued_len = 0;
>>>>>>>> +	size_t user_buf_len = msg_data_left(msg);
>>>>>>>> +	bool copy_failed = false;
>>>>>>>> +	bool msg_ready = false;
>>>>>>>> +
>>>>>>>> +	spin_lock_bh(&vvs->rx_lock);
>>>>>>>> +
>>>>>>>> +	if (vvs->msg_count == 0) {
>>>>>>>> +		spin_unlock_bh(&vvs->rx_lock);
>>>>>>>> +		return 0;
>>>>>>>> +	}
>>>>>>>> +
>>>>>>>> +	while (!msg_ready) {
>>>>>>>> +		pkt = list_first_entry(&vvs->rx_queue, struct virtio_vsock_pkt, list);
>>>>>>>> +
>>>>>>>> +		if (!copy_failed) {
>>>>>>>> +			size_t pkt_len;
>>>>>>>> +			size_t bytes_to_copy;
>>>>>>>> +
>>>>>>>> +			pkt_len = (size_t)le32_to_cpu(pkt->hdr.len);
>>>>>>>> +			bytes_to_copy = min(user_buf_len, pkt_len);
>>>>>>>> +
>>>>>>>> +			if (bytes_to_copy) {
>>>>>>>> +				int err;
>>>>>>>> +
>>>>>>>> +				/* sk_lock is held by caller so no one else can dequeue.
>>>>>>>> +				 * Unlock rx_lock since memcpy_to_msg() may sleep.
>>>>>>>> +				 */
>>>>>>>> +				spin_unlock_bh(&vvs->rx_lock);
>>>>>>>> +
>>>>>>>> +				err = memcpy_to_msg(msg, pkt->buf, bytes_to_copy);
>>>>>>>> +				if (err) {
>>>>>>>> +					/* Copy of message failed, set flag to skip
>>>>>>>> +					 * copy path for rest of fragments. Rest of
>>>>>>>> +					 * fragments will be freed without copy.
>>>>>>>> +					 */
>>>>>>>> +					copy_failed = true;
>>>>>>>> +					dequeued_len = err;
>>>>>>> If we fail to copy the message, we will discard the entire packet.
>>>>>>> Is that acceptable from the user's point of view, or should we leave
>>>>>>> the packet in the queue so the user can retry, maybe with a different
>>>>>>> buffer?
>>>>>>>
>>>>>>> Then we could remove the packets only once we have successfully
>>>>>>> copied all the fragments.
>>>>>>>
>>>>>>> I'm not sure it makes sense; maybe it's better to also check other
>>>>>>> implementations :-)
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Stefano
>>>>>> Understood, I'll check it over the weekend; anyway, I think it is
>>>>>> not critical for the implementation.
>>>>> Yep, I agree.
>>>>>
>>>>>> I have another question: maybe it is useful to explore an approach
>>>>>> where packets are not queued until the whole message is received,
>>>>>> but are copied to the user's buffer as they arrive, thus freeing
>>>>>> memory (like the previous implementation, of course with a solution
>>>>>> to the problem where part of a message is still in the queue after
>>>>>> the reader was woken by a timeout or signal).
>>>>>>
>>>>>> I think it is better because, in the current version, the sender may
>>>>>> set 'peer_buf_alloc' to, for example, 1MB, so at the receiver we get
>>>>>> 1MB of kmalloc() memory allocated, even though we have the user's
>>>>>> buffer to copy the data into or to drop it (if the user's buffer is
>>>>>> full). This way won't change the spec (e.g. no message id or
>>>>>> SEQ_BEGIN will be added).
>>>>>>
>>>>>> What do you think?
>>>>> Yep, I see your point and it would be great, but I think the main
>>>>> issue to fix is how to handle a signal while we are waiting for other
>>>>> fragments, since the other peer can take an unspecified time to send
>>>>> them.
>>>> What about a transport callback, something like 'seqpacket_drain()'
>>>> or 'seqpacket_drop_curr()'? When we get a signal or timeout, we
>>>> notify the transport to drop the current message. In the virtio case
>>>> this will set a special flag in the transport, so on the next dequeue
>>>> this flag is checked and, if it is set, we drop all packets until an
>>>> EOR is found. Then we can copy the untouched new record.
>>>>
>>> But in this way, we will lose the entire message.
>>>
>>> Is it acceptable for seqpacket?
>>>
>>> Stefano
>> Hm, I'll check it. At least UNIX domain sockets support SEQPACKET.
>
>Hello, I've checked the AF_UNIX and AF_AX25 SEQPACKET implementations,

Great! Thanks for checking!

>
>in both cases:
>
>1) The datagram is dequeued first, then copied to the user's buffer.
>
>2) The datagram is also freed when copying to the user's buffer fails
>(it is not reinserted back).
>
>
>But in the case of virtio vsock, I have the following concern with
>this approach: for AF_UNIX and AF_AX25 there is a maximum datagram
>size, strictly limited by the spec, so no 'setsockopt()' call allows
>exceeding it. Also, these limits are significantly smaller than
>current amounts of RAM. But in our case there is no such limit: a
>peer could say 'I want to use a 100MB datagram', and the receiver
>just answers 'ok',

The receiver sets the limit of its receive buffer and tells the
transmitter that it should not exceed it. The default should be 256 KB,
so IIUC this scenario can happen only if the receiver does a
'setsockopt()' increasing the limit to 100MB. Right?

Maybe we should limit it.

>
>as there is just a variable assignment to set up the new limit. Now
>consider that there are 10 peers, 100MB each (no one limits such a
>request, because the sockets don't know about each other). I think we
>get a denial of service in this case - all the kmalloc() memory will
>be wasted on pending records.
>
>
>I still think that the approach where we copy data from the packet to
>the user's buffer without waiting for EOR is better.

Okay, in this way we can remove the receive buffer limit, and if we
receive a signal, we can maybe set MSG_TRUNC, return the partially
received packet to the user, but then we must free any remaining
fragments.

So, as you proposed, we need a `seqpacket_drop()` to tell the
transport that, if we were copying an incomplete message, it should
delete the queued fragments and any others until the next EOR.

>
>
>Also i'll rebase QEMU patch today or tomorrow.

Great, please CC me; this is high priority for testing SOCK_SEQPACKET
with a guest.

>
>
>What do You Think?

I'm fine with both, but I slightly prefer the approach we implemented 
because it's easier to handle.

Thanks,
Stefano


end of thread, other threads:[~2021-06-21 10:23 UTC | newest]

Thread overview: 9+ messages
     [not found] <20210611110744.3650456-1-arseny.krasnov@kaspersky.com>
     [not found] ` <59b720a8-154f-ad29-e7a9-b86b69408078@kaspersky.com>
2021-06-11 12:25   ` [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support Stefano Garzarella
     [not found]     ` <10a64ff5-86df-85f3-5cf2-2fa7e8ddc294@kaspersky.com>
2021-06-11 14:57       ` Stefano Garzarella
     [not found] ` <20210611111241.3652274-1-arseny.krasnov@kaspersky.com>
2021-06-18 13:44   ` [PATCH v11 11/18] virtio/vsock: dequeue callback for SOCK_SEQPACKET Stefano Garzarella
2021-06-18 13:51     ` Michael S. Tsirkin
2021-06-18 14:44       ` Stefano Garzarella
     [not found]     ` <bb323125-f802-1d16-7530-6e4f4abb00a6@kaspersky.com>
2021-06-18 15:55       ` Stefano Garzarella
     [not found]         ` <650673dc-8b29-657e-5bbd-2cc974628ec9@kaspersky.com>
2021-06-18 16:25           ` [MASSMAIL KLMS] " Stefano Garzarella
     [not found]             ` <31f58b17-02e6-4246-5ad8-7e8d7892ecb7@kaspersky.com>
     [not found]               ` <b27d3fd1-fa8a-97ff-9035-cf3f525d5866@kaspersky.com>
2021-06-21 10:23                 ` Stefano Garzarella
     [not found] ` <162344521373.30951.11000282953901961373.git-patchwork-notify@kernel.org>
2021-06-18 13:49   ` [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support Michael S. Tsirkin
