* [PATCH v3 0/2] io: Increase unix stream socket buffer size
@ 2025-04-27 16:50 Nir Soffer
2025-04-27 16:50 ` [PATCH v3 1/2] io: Increase unix socket buffers size on macOS Nir Soffer
2025-04-27 16:50 ` [PATCH v3 2/2] io: Increase unix socket buffers on Linux Nir Soffer
0 siblings, 2 replies; 8+ messages in thread
From: Nir Soffer @ 2025-04-27 16:50 UTC (permalink / raw)
To: qemu-devel
Cc: Daniel P. Berrangé, Richard Jones,
Philippe Mathieu-Daudé, Eric Blake, Nir Soffer
On both macOS and Linux, the default send buffer size is too small, causing poor
performance when reading from and writing to qemu-nbd. A simple way to see this
is to compare TCP and unix sockets, which shows that the TCP socket is much
faster. Programs like nbdcopy partly mitigate this by using multiple NBD
connections.
On macOS the default send buffer size is 8192 bytes. Increasing the send buffer
size to 2 MiB shows up to *12.6 times higher throughput* and lower CPU usage.
On Linux the default and maximum buffer size is 212992 bytes. Increasing the
send buffer size to 2 MiB shows up to *2.7 times higher throughput* and lower
CPU usage. On an older machine we see very little improvement, up to 1.03 times
higher throughput.
We likely have the same issue on other platforms. It should be easy to enable
this change for more platforms by defining UNIX_STREAM_SOCKET_SEND_BUFFER_SIZE.
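For example, another platform could opt in with something like this (FreeBSD is
used purely as an illustration here, not a tested or proposed change):

    #if defined(__APPLE__) || defined(__linux__) || defined(__FreeBSD__)
    #define UNIX_STREAM_SOCKET_SEND_BUFFER_SIZE (2 * MiB)
    #endif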
Changes since v2:
- Test with different receive buffer sizes (Daniel)
- Test with larger send buffer sizes (2m, 4m)
- Set only the send buffer size - setting the receive buffer size has no effect
- Increase the send buffer size to 2m (based on new tests)
- Enable the change also for Linux
- Change only unix stream sockets - datagram sockets need a different configuration
- Modify the code to make it easy to support unix datagram sockets
v2 was here:
https://lists.gnu.org/archive/html/qemu-devel/2025-04/msg03167.html
Nir Soffer (2):
io: Increase unix socket buffers size on macOS
io: Increase unix socket buffers on Linux
io/channel-socket.c | 33 +++++++++++++++++++++++++++++++++
1 file changed, 33 insertions(+)
--
2.39.5 (Apple Git-154)
* [PATCH v3 1/2] io: Increase unix socket buffers size on macOS
2025-04-27 16:50 [PATCH v3 0/2] io: Increase unix stream socket buffer size Nir Soffer
@ 2025-04-27 16:50 ` Nir Soffer
2025-05-07 16:37 ` Daniel P. Berrangé
2025-04-27 16:50 ` [PATCH v3 2/2] io: Increase unix socket buffers on Linux Nir Soffer
1 sibling, 1 reply; 8+ messages in thread
From: Nir Soffer @ 2025-04-27 16:50 UTC (permalink / raw)
To: qemu-devel
Cc: Daniel P. Berrangé, Richard Jones,
Philippe Mathieu-Daudé, Eric Blake, Nir Soffer
On macOS we need to increase the unix stream socket buffer size on the
client and server to get good performance. We set the socket buffers on
macOS after connecting or accepting a client connection. For unix
datagram sockets we need a different configuration, which can be done later.
Testing shows that setting the socket receive buffer size (SO_RCVBUF) has no
effect on performance, so we set only the send buffer size (SO_SNDBUF).
It seems to work like Linux, but this is not documented.
Testing shows that the optimal buffer size is 512k to 4 MiB, depending on
the test case. The difference is very small, so I chose 2 MiB.
I tested reading from qemu-nbd and writing to qemu-nbd with qemu-img and
computing a blkhash with nbdcopy and blksum.
To focus on NBD communication and get less noisy results, I tested
reading from and writing to the null-co driver. I added a read-pattern option
to the null-co driver to return data full of 0xff:
NULL="json:{'driver': 'raw', 'file': {'driver': 'null-co', 'size': '10g', 'read-pattern': -1}}"
For testing different buffer sizes I added an environment variable that sets
the socket buffer size.
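A sketch of that testing hook (the variable name QEMU_SOCKET_BUFFER_SIZE and the
helper are made up for illustration and are not part of this series; the macro
is the one added below):

    #include <stdlib.h>

    /* Hypothetical test override: use the environment variable if set,
     * otherwise fall back to the compile-time default. */
    static int socket_send_buffer_size(void)
    {
        const char *value = getenv("QEMU_SOCKET_BUFFER_SIZE");

        if (value && *value) {
            return atoi(value);
        }
        return UNIX_STREAM_SOCKET_SEND_BUFFER_SIZE;
    }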
Read from qemu-nbd via qemu-img convert. In this test a buffer size of 2m
is optimal (12.6 times faster).
qemu-nbd -r -t -e 0 -f raw -k /tmp/nbd.sock "$NULL" &
qemu-img convert -f raw -O raw -W -n "nbd+unix:///?socket=/tmp/nbd.sock" "$NULL"
| buffer size | time | user | system |
|-------------|---------|---------|---------|
| default | 13.361 | 2.653 | 5.702 |
| 65536 | 2.283 | 0.204 | 1.318 |
| 131072 | 1.673 | 0.062 | 1.008 |
| 262144 | 1.592 | 0.053 | 0.952 |
| 524288 | 1.496 | 0.049 | 0.887 |
| 1048576 | 1.234 | 0.047 | 0.738 |
| 2097152 | 1.060 | 0.080 | 0.602 |
| 4194304 | 1.061 | 0.076 | 0.604 |
Write to qemu-nbd with qemu-img convert. In this test a buffer size of 2m
is optimal (9.2 times faster).
qemu-nbd -t -e 0 -f raw -k /tmp/nbd.sock "$NULL" &
qemu-img convert -f raw -O raw -W -n "$NULL" "nbd+unix:///?socket=/tmp/nbd.sock"
| buffer size | time | user | system |
|-------------|---------|---------|---------|
| default | 8.063 | 2.522 | 4.184 |
| 65536 | 1.472 | 0.430 | 0.867 |
| 131072 | 1.071 | 0.297 | 0.654 |
| 262144 | 1.012 | 0.239 | 0.587 |
| 524288 | 0.970 | 0.201 | 0.514 |
| 1048576 | 0.895 | 0.184 | 0.454 |
| 2097152 | 0.877 | 0.174 | 0.440 |
| 4194304 | 0.944 | 0.231 | 0.535 |
Compute a blkhash with nbdcopy, using 4 NBD connections and a 256k request
size. In this test a buffer size of 4m is optimal (5.1 times faster).
qemu-nbd -r -t -e 0 -f raw -k /tmp/nbd.sock "$NULL" &
nbdcopy --blkhash "nbd+unix:///?socket=/tmp/nbd.sock" null:
| buffer size | time | user | system |
|-------------|---------|---------|---------|
| default | 8.624 | 5.727 | 6.507 |
| 65536 | 2.563 | 4.760 | 2.498 |
| 131072 | 1.903 | 4.559 | 2.093 |
| 262144 | 1.759 | 4.513 | 1.935 |
| 524288 | 1.729 | 4.489 | 1.924 |
| 1048576 | 1.696 | 4.479 | 1.884 |
| 2097152 | 1.710 | 4.480 | 1.763 |
| 4194304 | 1.687 | 4.479 | 1.712 |
Compute a blkhash with blksum, using 1 NBD connection and a 256k read
size. In this test a buffer size of 512k is optimal (10.3 times faster).
qemu-nbd -r -t -e 0 -f raw -k /tmp/nbd.sock "$NULL" &
blksum "nbd+unix:///?socket=/tmp/nbd.sock"
| buffer size | time | user | system |
|-------------|---------|---------|---------|
| default | 13.085 | 5.664 | 6.461 |
| 65536 | 3.299 | 5.106 | 2.515 |
| 131072 | 2.396 | 4.989 | 2.069 |
| 262144 | 1.607 | 4.724 | 1.555 |
| 524288 | 1.271 | 4.528 | 1.224 |
| 1048576 | 1.294 | 4.565 | 1.333 |
| 2097152 | 1.299 | 4.569 | 1.344 |
| 4194304 | 1.291 | 4.559 | 1.327 |
Signed-off-by: Nir Soffer <nirsof@gmail.com>
---
io/channel-socket.c | 32 ++++++++++++++++++++++++++++++++
1 file changed, 32 insertions(+)
diff --git a/io/channel-socket.c b/io/channel-socket.c
index 608bcf066e..06901ab694 100644
--- a/io/channel-socket.c
+++ b/io/channel-socket.c
@@ -21,6 +21,7 @@
#include "qapi/error.h"
#include "qapi/qapi-visit-sockets.h"
#include "qemu/module.h"
+#include "qemu/units.h"
#include "io/channel-socket.h"
#include "io/channel-util.h"
#include "io/channel-watch.h"
@@ -37,6 +38,33 @@
#define SOCKET_MAX_FDS 16
+/*
+ * Testing shows that 2m send buffer gives best throughput and lowest cpu usage.
+ * Changing the receive buffer size has no effect on performance.
+ */
+#ifdef __APPLE__
+#define UNIX_STREAM_SOCKET_SEND_BUFFER_SIZE (2 * MiB)
+#endif /* __APPLE__ */
+
+static void qio_channel_socket_set_buffers(QIOChannelSocket *ioc)
+{
+ if (ioc->localAddr.ss_family == AF_UNIX) {
+ int type;
+ socklen_t type_len = sizeof(type);
+
+ if (getsockopt(ioc->fd, SOL_SOCKET, SO_TYPE, &type, &type_len) == -1) {
+ return;
+ }
+
+#ifdef UNIX_STREAM_SOCKET_SEND_BUFFER_SIZE
+ if (type == SOCK_STREAM) {
+ const int value = UNIX_STREAM_SOCKET_SEND_BUFFER_SIZE;
+ setsockopt(ioc->fd, SOL_SOCKET, SO_SNDBUF, &value, sizeof(value));
+ }
+#endif /* UNIX_STREAM_SOCKET_SEND_BUFFER_SIZE */
+ }
+}
+
SocketAddress *
qio_channel_socket_get_local_address(QIOChannelSocket *ioc,
Error **errp)
@@ -174,6 +202,8 @@ int qio_channel_socket_connect_sync(QIOChannelSocket *ioc,
}
#endif
+ qio_channel_socket_set_buffers(ioc);
+
qio_channel_set_feature(QIO_CHANNEL(ioc),
QIO_CHANNEL_FEATURE_READ_MSG_PEEK);
@@ -410,6 +440,8 @@ qio_channel_socket_accept(QIOChannelSocket *ioc,
}
#endif /* WIN32 */
+ qio_channel_socket_set_buffers(cioc);
+
qio_channel_set_feature(QIO_CHANNEL(cioc),
QIO_CHANNEL_FEATURE_READ_MSG_PEEK);
--
2.39.5 (Apple Git-154)
* [PATCH v3 2/2] io: Increase unix socket buffers on Linux
2025-04-27 16:50 [PATCH v3 0/2] io: Increase unix stream socket buffer size Nir Soffer
2025-04-27 16:50 ` [PATCH v3 1/2] io: Increase unix socket buffers size on macOS Nir Soffer
@ 2025-04-27 16:50 ` Nir Soffer
2025-04-28 21:37 ` Eric Blake
1 sibling, 1 reply; 8+ messages in thread
From: Nir Soffer @ 2025-04-27 16:50 UTC (permalink / raw)
To: qemu-devel
Cc: Daniel P. Berrangé, Richard Jones,
Philippe Mathieu-Daudé, Eric Blake, Nir Soffer
Like macOS, we have a similar issue on Linux. For a TCP socket the send
buffer size is 2626560 bytes (~2.5 MiB) and we get good performance.
However, for a unix socket the default and maximum buffer size is 212992
bytes (208 KiB) and we see poor performance when using one NBD
connection, up to 4 times slower than macOS on the same machine.
Tracing shows that for every 2 MiB payload (qemu uses a 2 MiB io size), we
do 1 recvmsg call with a TCP socket, and 10 recvmsg calls with a unix
socket.
Fixing this issue requires increasing the maximum send buffer size (the
receive buffer size is ignored). This can be done using:
$ cat /etc/sysctl.d/net-mem-max.conf
net.core.wmem_max = 2097152
$ sudo sysctl -p /etc/sysctl.d/net-mem-max.conf
With this we can set the socket buffer size to 2 MiB. With the defaults,
the value requested by qemu is clipped to the maximum size and has no
effect.
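One way to see the effective value (a debugging sketch, not part of the patch)
is to read SO_SNDBUF back after setting it; on Linux the kernel clips the
request to wmem_max and reports back twice the granted size, so with the
default limit this prints 425984 even when 2 MiB was requested:

    #include <stdio.h>
    #include <sys/socket.h>

    /* Print the send buffer size the kernel actually granted. */
    static void report_sndbuf(int fd)
    {
        int actual = 0;
        socklen_t len = sizeof(actual);

        if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &actual, &len) == 0) {
            printf("effective SO_SNDBUF: %d bytes\n", actual);
        }
    }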
I tested on 2 machines:
- Fedora 42 VM on MacBook Pro M2 Max
- Dell PowerEdge R640 (Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz)
On the older Dell machine we see very little improvement, up to 1.03 times
higher throughput. On the M2 machine we see up to 2.67 times higher
throughput. The following results are from the M2 machine.
Reading from qemu-nbd with qemu-img convert. In this test a buffer size of
4m is optimal (2.28 times faster).
| buffer size | time | user | system |
|-------------|---------|---------|---------|
| default | 4.292 | 0.243 | 1.604 |
| 524288 | 2.167 | 0.058 | 1.288 |
| 1048576 | 2.041 | 0.060 | 1.238 |
| 2097152 | 1.884 | 0.060 | 1.191 |
| 4194304 | 1.881 | 0.054 | 1.196 |
Writing to qemu-nbd with qemu-img convert. In this test a buffer size of
1m is optimal (2.67 times faster).
| buffer size | time | user | system |
|-------------|---------|---------|---------|
| default | 3.113 | 0.334 | 1.094 |
| 524288 | 1.173 | 0.179 | 0.654 |
| 1048576 | 1.164 | 0.164 | 0.670 |
| 2097152 | 1.227 | 0.197 | 0.663 |
| 4194304 | 1.227 | 0.198 | 0.666 |
Computing a blkhash with nbdcopy. In this test a buffer size of 512k is
optimal (1.19 times faster).
| buffer size | time | user | system |
|-------------|---------|---------|---------|
| default | 2.140 | 4.483 | 2.681 |
| 524288 | 1.794 | 4.467 | 2.572 |
| 1048576 | 1.807 | 4.447 | 2.644 |
| 2097152 | 1.822 | 4.461 | 2.698 |
| 4194304 | 1.827 | 4.465 | 2.700 |
Computing a blkhash with blksum. In this test a buffer size of 4m is
optimal (2.65 times faster).
| buffer size | time | user | system |
|-------------|---------|---------|---------|
| default | 3.582 | 4.595 | 2.392 |
| 524288 | 1.499 | 4.384 | 1.482 |
| 1048576 | 1.377 | 4.381 | 1.345 |
| 2097152 | 1.388 | 4.389 | 1.354 |
| 4194304 | 1.352 | 4.395 | 1.302 |
Signed-off-by: Nir Soffer <nirsof@gmail.com>
---
io/channel-socket.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/io/channel-socket.c b/io/channel-socket.c
index 06901ab694..f2974fab74 100644
--- a/io/channel-socket.c
+++ b/io/channel-socket.c
@@ -39,12 +39,13 @@
#define SOCKET_MAX_FDS 16
/*
- * Testing shows that 2m send buffer gives best throughput and lowest cpu usage.
- * Changing the receive buffer size has no effect on performance.
+ * Testing shows that 2m send buffer is optimal. Changing the receive buffer
+ * size has no effect on performance.
+ * On Linux we need to increase net.core.wmem_max to make this effective.
*/
-#ifdef __APPLE__
+#if defined(__APPLE__) || defined(__linux__)
#define UNIX_STREAM_SOCKET_SEND_BUFFER_SIZE (2 * MiB)
-#endif /* __APPLE__ */
+#endif
static void qio_channel_socket_set_buffers(QIOChannelSocket *ioc)
{
--
2.39.5 (Apple Git-154)
* Re: [PATCH v3 2/2] io: Increase unix socket buffers on Linux
2025-04-27 16:50 ` [PATCH v3 2/2] io: Increase unix socket buffers on Linux Nir Soffer
@ 2025-04-28 21:37 ` Eric Blake
2025-04-30 21:02 ` Nir Soffer
0 siblings, 1 reply; 8+ messages in thread
From: Eric Blake @ 2025-04-28 21:37 UTC (permalink / raw)
To: Nir Soffer
Cc: qemu-devel, Daniel P. Berrangé, Richard Jones,
Philippe Mathieu-Daudé
On Sun, Apr 27, 2025 at 07:50:29PM +0300, Nir Soffer wrote:
> Like macOS we have similar issue on Linux. For TCP socket the send
> buffer size is 2626560 bytes (~2.5 MiB) and we get good performance.
> However for unix socket the default and maximum buffer size is 212992
> bytes (208 KiB) and we see poor performance when using one NBD
> connection, up to 4 times slower than macOS on the same machine.
>
> +++ b/io/channel-socket.c
> @@ -39,12 +39,13 @@
> #define SOCKET_MAX_FDS 16
>
> /*
> - * Testing shows that 2m send buffer gives best throuput and lowest cpu usage.
> - * Changing the receive buffer size has no effect on performance.
> + * Testing shows that 2m send buffer is optimal. Changing the receive buffer
> + * size has no effect on performance.
> + * On Linux we need to increase net.core.wmem_max to make this effective.
How can we reliably inform the user of the need to tweak this setting?
Is it worth a bug report to the Linux kernel folks asking them to
reconsider the default cap on this setting, now that modern systems
tend to have more memory than when the cap was first introduced, and
given that we have demonstrable numbers showing why it is beneficial,
especially for parity with TCP?
--
Eric Blake, Principal Software Engineer
Red Hat, Inc.
Virtualization: qemu.org | libguestfs.org
* Re: [PATCH v3 2/2] io: Increase unix socket buffers on Linux
2025-04-28 21:37 ` Eric Blake
@ 2025-04-30 21:02 ` Nir Soffer
0 siblings, 0 replies; 8+ messages in thread
From: Nir Soffer @ 2025-04-30 21:02 UTC (permalink / raw)
To: Eric Blake
Cc: QEMU Developers, Daniel Berrange, Richard Jones,
Philippe Mathieu-Daudé
> On 29 Apr 2025, at 0:37, Eric Blake <eblake@redhat.com> wrote:
>
> On Sun, Apr 27, 2025 at 07:50:29PM +0300, Nir Soffer wrote:
>> Like macOS we have similar issue on Linux. For TCP socket the send
>> buffer size is 2626560 bytes (~2.5 MiB) and we get good performance.
>> However for unix socket the default and maximum buffer size is 212992
>> bytes (208 KiB) and we see poor performance when using one NBD
>> connection, up to 4 times slower than macOS on the same machine.
>>
>
>> +++ b/io/channel-socket.c
>> @@ -39,12 +39,13 @@
>> #define SOCKET_MAX_FDS 16
>>
>> /*
>> - * Testing shows that 2m send buffer gives best throuput and lowest cpu usage.
>> - * Changing the receive buffer size has no effect on performance.
>> + * Testing shows that 2m send buffer is optimal. Changing the receive buffer
>> + * size has no effect on performance.
>> + * On Linux we need to increase net.core.wmem_max to make this effective.
>
> How can we reliably inform the user of the need to tweak this setting?
Maybe log a warning (or debug message) if net.core.wmem_max is too small?
For example libkrun does this:
https://github.com/containers/libkrun/blob/main/src/devices/src/virtio/net/gvproxy.rs#L70
If we document this, users who read the docs can tune their systems better.
What is the best place to document this?
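A minimal sketch of the check suggested above (the function name and the
warning text are made up here; reading /proc/sys/net/core/wmem_max is just one
possible approach):

    #include <stdio.h>

    /* Warn if the system limit is lower than the send buffer size we want. */
    static void check_wmem_max(long wanted)
    {
        FILE *f = fopen("/proc/sys/net/core/wmem_max", "r");
        long wmem_max;

        if (f == NULL) {
            return;
        }
        if (fscanf(f, "%ld", &wmem_max) == 1 && wmem_max < wanted) {
            fprintf(stderr,
                    "warning: net.core.wmem_max (%ld) is smaller than %ld, "
                    "unix socket send buffer will be clipped\n",
                    wmem_max, wanted);
        }
        fclose(f);
    }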
> Is it worth a bug report to the Linux kernel folks asking them to
> reconsider the default cap on this setting, now that modern systems
> tend to have more memory than when the cap was first introduced, and
> given that we have demonstrable numbers showing why it is beneficial,
> especially for parity with TCP?
Makes sense.
What is the best place to discuss this or file a bug?
* Re: [PATCH v3 1/2] io: Increase unix socket buffers size on macOS
2025-04-27 16:50 ` [PATCH v3 1/2] io: Increase unix socket buffers size on macOS Nir Soffer
@ 2025-05-07 16:37 ` Daniel P. Berrangé
2025-05-07 17:17 ` Nir Soffer
0 siblings, 1 reply; 8+ messages in thread
From: Daniel P. Berrangé @ 2025-05-07 16:37 UTC (permalink / raw)
To: Nir Soffer
Cc: qemu-devel, Richard Jones, Philippe Mathieu-Daudé,
Eric Blake
On Sun, Apr 27, 2025 at 07:50:28PM +0300, Nir Soffer wrote:
> On macOS we need to increase unix stream socket buffers size on the
> client and server to get good performance. We set socket buffers on
> macOS after connecting or accepting a client connection. For unix
> datagram socket we need different configuration that can be done later.
>
> Testing shows that setting socket receive buffer size (SO_RCVBUF) has no
> effect on performance, so we set only the send buffer size (SO_SNDBUF).
> It seems to work like Linux but not documented.
>
> Testing shows that optimal buffer size is 512k to 4 MiB, depending on
> the test case. The difference is very small, so I chose 2 MiB.
>
> I tested reading from qemu-nbd and writing to qemu-nbd with qemu-img and
> computing a blkhash with nbdcopy and blksum.
>
> To focus on NBD communication and get less noisy results, I tested
> reading and writing to null-co driver. I added a read-pattern option to
> the null-co driver to return data full of 0xff:
>
> NULL="json:{'driver': 'raw', 'file': {'driver': 'null-co', 'size': '10g', 'read-pattern': -1}}"
>
> For testing buffer size I added an environment variable for setting the
> socket buffer size.
>
> Read from qemu-nbd via qemu-img convert. In this test buffer size of 2m
> is optimal (12.6 times faster).
>
> qemu-nbd -r -t -e 0 -f raw -k /tmp/nbd.sock "$NULL" &
> qemu-img convert -f raw -O raw -W -n "nbd+unix:///?socket=/tmp/nbd.sock" "$NULL"
>
> | buffer size | time | user | system |
> |-------------|---------|---------|---------|
> | default | 13.361 | 2.653 | 5.702 |
> | 65536 | 2.283 | 0.204 | 1.318 |
> | 131072 | 1.673 | 0.062 | 1.008 |
> | 262144 | 1.592 | 0.053 | 0.952 |
> | 524288 | 1.496 | 0.049 | 0.887 |
> | 1048576 | 1.234 | 0.047 | 0.738 |
> | 2097152 | 1.060 | 0.080 | 0.602 |
> | 4194304 | 1.061 | 0.076 | 0.604 |
>
> Write to qemu-nbd with qemu-img convert. In this test buffer size of 2m
> is optimal (9.2 times faster).
>
> qemu-nbd -t -e 0 -f raw -k /tmp/nbd.sock "$NULL" &
> qemu-img convert -f raw -O raw -W -n "$NULL" "nbd+unix:///?socket=/tmp/nbd.sock"
>
> | buffer size | time | user | system |
> |-------------|---------|---------|---------|
> | default | 8.063 | 2.522 | 4.184 |
> | 65536 | 1.472 | 0.430 | 0.867 |
> | 131072 | 1.071 | 0.297 | 0.654 |
> | 262144 | 1.012 | 0.239 | 0.587 |
> | 524288 | 0.970 | 0.201 | 0.514 |
> | 1048576 | 0.895 | 0.184 | 0.454 |
> | 2097152 | 0.877 | 0.174 | 0.440 |
> | 4194304 | 0.944 | 0.231 | 0.535 |
>
> Compute a blkhash with nbdcopy, using 4 NBD connections and 256k request
> size. In this test buffer size of 4m is optimal (5.1 times faster).
>
> qemu-nbd -r -t -e 0 -f raw -k /tmp/nbd.sock "$NULL" &
> nbdcopy --blkhash "nbd+unix:///?socket=/tmp/nbd.sock" null:
>
> | buffer size | time | user | system |
> |-------------|---------|---------|---------|
> | default | 8.624 | 5.727 | 6.507 |
> | 65536 | 2.563 | 4.760 | 2.498 |
> | 131072 | 1.903 | 4.559 | 2.093 |
> | 262144 | 1.759 | 4.513 | 1.935 |
> | 524288 | 1.729 | 4.489 | 1.924 |
> | 1048576 | 1.696 | 4.479 | 1.884 |
> | 2097152 | 1.710 | 4.480 | 1.763 |
> | 4194304 | 1.687 | 4.479 | 1.712 |
>
> Compute a blkhash with blksum, using 1 NBD connection and 256k read
> size. In this test buffer size of 512k is optimal (10.3 times faster).
>
> qemu-nbd -r -t -e 0 -f raw -k /tmp/nbd.sock "$NULL" &
> blksum "nbd+unix:///?socket=/tmp/nbd.sock"
>
> | buffer size | time | user | system |
> |-------------|---------|---------|---------|
> | default | 13.085 | 5.664 | 6.461 |
> | 65536 | 3.299 | 5.106 | 2.515 |
> | 131072 | 2.396 | 4.989 | 2.069 |
> | 262144 | 1.607 | 4.724 | 1.555 |
> | 524288 | 1.271 | 4.528 | 1.224 |
> | 1048576 | 1.294 | 4.565 | 1.333 |
> | 2097152 | 1.299 | 4.569 | 1.344 |
> | 4194304 | 1.291 | 4.559 | 1.327 |
>
> Signed-off-by: Nir Soffer <nirsof@gmail.com>
> ---
> io/channel-socket.c | 32 ++++++++++++++++++++++++++++++++
> 1 file changed, 32 insertions(+)
>
> diff --git a/io/channel-socket.c b/io/channel-socket.c
> index 608bcf066e..06901ab694 100644
> --- a/io/channel-socket.c
> +++ b/io/channel-socket.c
> @@ -21,6 +21,7 @@
> #include "qapi/error.h"
> #include "qapi/qapi-visit-sockets.h"
> #include "qemu/module.h"
> +#include "qemu/units.h"
> #include "io/channel-socket.h"
> #include "io/channel-util.h"
> #include "io/channel-watch.h"
> @@ -37,6 +38,33 @@
>
> #define SOCKET_MAX_FDS 16
>
> +/*
> + * Testing shows that 2m send buffer gives best throuput and lowest cpu usage.
> + * Changing the receive buffer size has no effect on performance.
> + */
> +#ifdef __APPLE__
> +#define UNIX_STREAM_SOCKET_SEND_BUFFER_SIZE (2 * MiB)
> +#endif /* __APPLE__ */
> +
> +static void qio_channel_socket_set_buffers(QIOChannelSocket *ioc)
> +{
> + if (ioc->localAddr.ss_family == AF_UNIX) {
> + int type;
> + socklen_t type_len = sizeof(type);
> +
> + if (getsockopt(ioc->fd, SOL_SOCKET, SO_TYPE, &type, &type_len) == -1) {
> + return;
> + }
> +
> +#ifdef UNIX_STREAM_SOCKET_SEND_BUFFER_SIZE
> + if (type == SOCK_STREAM) {
> + const int value = UNIX_STREAM_SOCKET_SEND_BUFFER_SIZE;
> + setsockopt(ioc->fd, SOL_SOCKET, SO_SNDBUF, &value, sizeof(value));
> + }
> +#endif /* UNIX_STREAM_SOCKET_SEND_BUFFER_SIZE */
> + }
> +}
While I'm not doubting your benchmark results, I'm a little uneasy about
setting this unconditionally for *all* UNIX sockets QEMU creates. The
benchmarks show NBD benefits from this, but I'm not convinced that all
the other scenarios QEMU creates UNIX sockets for justify it.
On Linux, whatever value you set with SO_SNDBUF appears to get doubled
internally by the kernel.
IOW, this is adding 4 MB fixed overhead for every UNIX socket that
QEMU creates. It doesn't take many UNIX sockets in QEMU for that to
become a significant amount of extra memory overhead on a host.
I'm thinking we might be better off with a helper
qio_channel_socket_set_send_buffer(QIOChannelSocket *ioc, size_t size)
that we call from the NBD code, to limit the impact. Also I think this
helper ought not to filter on AF_UNIX - the caller can see the socket
type via qio_channel_socket_get_local_address if it does not already
have a record of the address, and selectively set the buffer size.
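A rough sketch of the suggested helper (the body is only an illustration of
the idea, not code from the tree; a real version would probably also take an
Error **errp and report failures):

    /* Set the socket send buffer size; returns 0 on success, -1 on error. */
    int qio_channel_socket_set_send_buffer(QIOChannelSocket *ioc, size_t size)
    {
        int value = size;

        if (setsockopt(ioc->fd, SOL_SOCKET, SO_SNDBUF,
                       &value, sizeof(value)) < 0) {
            return -1;
        }
        return 0;
    }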
> +
> SocketAddress *
> qio_channel_socket_get_local_address(QIOChannelSocket *ioc,
> Error **errp)
> @@ -174,6 +202,8 @@ int qio_channel_socket_connect_sync(QIOChannelSocket *ioc,
> }
> #endif
>
> + qio_channel_socket_set_buffers(ioc);
> +
> qio_channel_set_feature(QIO_CHANNEL(ioc),
> QIO_CHANNEL_FEATURE_READ_MSG_PEEK);
>
> @@ -410,6 +440,8 @@ qio_channel_socket_accept(QIOChannelSocket *ioc,
> }
> #endif /* WIN32 */
>
> + qio_channel_socket_set_buffers(cioc);
> +
> qio_channel_set_feature(QIO_CHANNEL(cioc),
> QIO_CHANNEL_FEATURE_READ_MSG_PEEK);
>
> --
> 2.39.5 (Apple Git-154)
>
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
* Re: [PATCH v3 1/2] io: Increase unix socket buffers size on macOS
2025-05-07 16:37 ` Daniel P. Berrangé
@ 2025-05-07 17:17 ` Nir Soffer
2025-05-07 18:23 ` Daniel P. Berrangé
0 siblings, 1 reply; 8+ messages in thread
From: Nir Soffer @ 2025-05-07 17:17 UTC (permalink / raw)
To: "Daniel P. Berrangé"
Cc: qemu-devel, Richard Jones, Philippe Mathieu-Daudé,
Eric Blake
> On 7 May 2025, at 19:37, Daniel P. Berrangé <berrange@redhat.com> wrote:
>
> On Sun, Apr 27, 2025 at 07:50:28PM +0300, Nir Soffer wrote:
>> On macOS we need to increase unix stream socket buffers size on the
>> client and server to get good performance. We set socket buffers on
>> macOS after connecting or accepting a client connection. For unix
>> datagram socket we need different configuration that can be done later.
>>
>> Testing shows that setting socket receive buffer size (SO_RCVBUF) has no
>> effect on performance, so we set only the send buffer size (SO_SNDBUF).
>> It seems to work like Linux but not documented.
>>
>> Testing shows that optimal buffer size is 512k to 4 MiB, depending on
>> the test case. The difference is very small, so I chose 2 MiB.
>>
>> I tested reading from qemu-nbd and writing to qemu-nbd with qemu-img and
>> computing a blkhash with nbdcopy and blksum.
>>
>> To focus on NBD communication and get less noisy results, I tested
>> reading and writing to null-co driver. I added a read-pattern option to
>> the null-co driver to return data full of 0xff:
>>
>> NULL="json:{'driver': 'raw', 'file': {'driver': 'null-co', 'size': '10g', 'read-pattern': -1}}"
>>
>> For testing buffer size I added an environment variable for setting the
>> socket buffer size.
>>
>> Read from qemu-nbd via qemu-img convert. In this test buffer size of 2m
>> is optimal (12.6 times faster).
>>
>> qemu-nbd -r -t -e 0 -f raw -k /tmp/nbd.sock "$NULL" &
>> qemu-img convert -f raw -O raw -W -n "nbd+unix:///?socket=/tmp/nbd.sock" "$NULL"
>>
>> | buffer size | time | user | system |
>> |-------------|---------|---------|---------|
>> | default | 13.361 | 2.653 | 5.702 |
>> | 65536 | 2.283 | 0.204 | 1.318 |
>> | 131072 | 1.673 | 0.062 | 1.008 |
>> | 262144 | 1.592 | 0.053 | 0.952 |
>> | 524288 | 1.496 | 0.049 | 0.887 |
>> | 1048576 | 1.234 | 0.047 | 0.738 |
>> | 2097152 | 1.060 | 0.080 | 0.602 |
>> | 4194304 | 1.061 | 0.076 | 0.604 |
>>
>> Write to qemu-nbd with qemu-img convert. In this test buffer size of 2m
>> is optimal (9.2 times faster).
>>
>> qemu-nbd -t -e 0 -f raw -k /tmp/nbd.sock "$NULL" &
>> qemu-img convert -f raw -O raw -W -n "$NULL" "nbd+unix:///?socket=/tmp/nbd.sock"
>>
>> | buffer size | time | user | system |
>> |-------------|---------|---------|---------|
>> | default | 8.063 | 2.522 | 4.184 |
>> | 65536 | 1.472 | 0.430 | 0.867 |
>> | 131072 | 1.071 | 0.297 | 0.654 |
>> | 262144 | 1.012 | 0.239 | 0.587 |
>> | 524288 | 0.970 | 0.201 | 0.514 |
>> | 1048576 | 0.895 | 0.184 | 0.454 |
>> | 2097152 | 0.877 | 0.174 | 0.440 |
>> | 4194304 | 0.944 | 0.231 | 0.535 |
>>
>> Compute a blkhash with nbdcopy, using 4 NBD connections and 256k request
>> size. In this test buffer size of 4m is optimal (5.1 times faster).
>>
>> qemu-nbd -r -t -e 0 -f raw -k /tmp/nbd.sock "$NULL" &
>> nbdcopy --blkhash "nbd+unix:///?socket=/tmp/nbd.sock" null:
>>
>> | buffer size | time | user | system |
>> |-------------|---------|---------|---------|
>> | default | 8.624 | 5.727 | 6.507 |
>> | 65536 | 2.563 | 4.760 | 2.498 |
>> | 131072 | 1.903 | 4.559 | 2.093 |
>> | 262144 | 1.759 | 4.513 | 1.935 |
>> | 524288 | 1.729 | 4.489 | 1.924 |
>> | 1048576 | 1.696 | 4.479 | 1.884 |
>> | 2097152 | 1.710 | 4.480 | 1.763 |
>> | 4194304 | 1.687 | 4.479 | 1.712 |
>>
>> Compute a blkhash with blksum, using 1 NBD connection and 256k read
>> size. In this test buffer size of 512k is optimal (10.3 times faster).
>>
>> qemu-nbd -r -t -e 0 -f raw -k /tmp/nbd.sock "$NULL" &
>> blksum "nbd+unix:///?socket=/tmp/nbd.sock"
>>
>> | buffer size | time | user | system |
>> |-------------|---------|---------|---------|
>> | default | 13.085 | 5.664 | 6.461 |
>> | 65536 | 3.299 | 5.106 | 2.515 |
>> | 131072 | 2.396 | 4.989 | 2.069 |
>> | 262144 | 1.607 | 4.724 | 1.555 |
>> | 524288 | 1.271 | 4.528 | 1.224 |
>> | 1048576 | 1.294 | 4.565 | 1.333 |
>> | 2097152 | 1.299 | 4.569 | 1.344 |
>> | 4194304 | 1.291 | 4.559 | 1.327 |
>>
>> Signed-off-by: Nir Soffer <nirsof@gmail.com>
>> ---
>> io/channel-socket.c | 32 ++++++++++++++++++++++++++++++++
>> 1 file changed, 32 insertions(+)
>>
>> diff --git a/io/channel-socket.c b/io/channel-socket.c
>> index 608bcf066e..06901ab694 100644
>> --- a/io/channel-socket.c
>> +++ b/io/channel-socket.c
>> @@ -21,6 +21,7 @@
>> #include "qapi/error.h"
>> #include "qapi/qapi-visit-sockets.h"
>> #include "qemu/module.h"
>> +#include "qemu/units.h"
>> #include "io/channel-socket.h"
>> #include "io/channel-util.h"
>> #include "io/channel-watch.h"
>> @@ -37,6 +38,33 @@
>>
>> #define SOCKET_MAX_FDS 16
>>
>> +/*
>> + * Testing shows that 2m send buffer gives best throuput and lowest cpu usage.
>> + * Changing the receive buffer size has no effect on performance.
>> + */
>> +#ifdef __APPLE__
>> +#define UNIX_STREAM_SOCKET_SEND_BUFFER_SIZE (2 * MiB)
>> +#endif /* __APPLE__ */
>> +
>> +static void qio_channel_socket_set_buffers(QIOChannelSocket *ioc)
>> +{
>> + if (ioc->localAddr.ss_family == AF_UNIX) {
>> + int type;
>> + socklen_t type_len = sizeof(type);
>> +
>> + if (getsockopt(ioc->fd, SOL_SOCKET, SO_TYPE, &type, &type_len) == -1) {
>> + return;
>> + }
>> +
>> +#ifdef UNIX_STREAM_SOCKET_SEND_BUFFER_SIZE
>> + if (type == SOCK_STREAM) {
>> + const int value = UNIX_STREAM_SOCKET_SEND_BUFFER_SIZE;
>> + setsockopt(ioc->fd, SOL_SOCKET, SO_SNDBUF, &value, sizeof(value));
>> + }
>> +#endif /* UNIX_STREAM_SOCKET_SEND_BUFFER_SIZE */
>> + }
>> +}
>
> While I'm not doubting your benchmark results, I'm a little uneasy about
> setting this unconditionally for *all* UNIX sockets QEMU creates. The
> benchmarks show NBD benefits from this, but I'm not convinced that all
> the other scenarios QEMU creates UNIX sockets for justify it.
>
> On Linux, whatever value you set with SO_SNDBUF appears to get doubled
> internally by the kernel.
>
> IOW, this is adding 4 MB fixed overhead for every UNIX socket that
> QEMU creates. It doesn't take many UNIX sockets in QEMU for that to
> become a significant amount of extra memory overhead on a host.
>
> I'm thinking we might be better with a helper
>
> qio_channel_socket_set_send_buffer(QIOChannelSocket *ioc, size_t size)
>
> that we call from the NBD code, to limit the impact. Also I think this
> helper ought not to filter on AF_UNIX - the caller can see the socket
> type via qio_channel_socket_get_local_address if it does not already
> have a record of the address, and selectively set the buffer size.
So you suggest moving UNIX_STREAM_SOCKET_SEND_BUFFER_SIZE to NBD as well?
If we use this only for NBD this is fine, but once we add another caller we
will have to duplicate the code selecting the right size for the OS. But I
guess we can reconsider this when we have that problem.
>
>
>> +
>> SocketAddress *
>> qio_channel_socket_get_local_address(QIOChannelSocket *ioc,
>> Error **errp)
>> @@ -174,6 +202,8 @@ int qio_channel_socket_connect_sync(QIOChannelSocket *ioc,
>> }
>> #endif
>>
>> + qio_channel_socket_set_buffers(ioc);
>> +
>> qio_channel_set_feature(QIO_CHANNEL(ioc),
>> QIO_CHANNEL_FEATURE_READ_MSG_PEEK);
>>
>> @@ -410,6 +440,8 @@ qio_channel_socket_accept(QIOChannelSocket *ioc,
>> }
>> #endif /* WIN32 */
>>
>> + qio_channel_socket_set_buffers(cioc);
>> +
>> qio_channel_set_feature(QIO_CHANNEL(cioc),
>> QIO_CHANNEL_FEATURE_READ_MSG_PEEK);
>>
>> --
>> 2.39.5 (Apple Git-154)
>>
>
> With regards,
> Daniel
> --
> |: https://berrange.com -o- https://www.flickr.com/photos/dberrange:|
> |: https://libvirt.org -o- https://fstop138.berrange.com:|
> |: https://entangle-photo.org -o- https://www.instagram.com/dberrange:|
* Re: [PATCH v3 1/2] io: Increase unix socket buffers size on macOS
2025-05-07 17:17 ` Nir Soffer
@ 2025-05-07 18:23 ` Daniel P. Berrangé
0 siblings, 0 replies; 8+ messages in thread
From: Daniel P. Berrangé @ 2025-05-07 18:23 UTC (permalink / raw)
To: Nir Soffer
Cc: qemu-devel, Richard Jones, Philippe Mathieu-Daudé,
Eric Blake
On Wed, May 07, 2025 at 08:17:19PM +0300, Nir Soffer wrote:
>
>
> > On 7 May 2025, at 19:37, Daniel P. Berrangé <berrange@redhat.com> wrote:
> >
> > On Sun, Apr 27, 2025 at 07:50:28PM +0300, Nir Soffer wrote:
> >> On macOS we need to increase unix stream socket buffers size on the
> >> client and server to get good performance. We set socket buffers on
> >> macOS after connecting or accepting a client connection. For unix
> >> datagram socket we need different configuration that can be done later.
> >>
> >> Testing shows that setting socket receive buffer size (SO_RCVBUF) has no
> >> effect on performance, so we set only the send buffer size (SO_SNDBUF).
> >> It seems to work like Linux but not documented.
> >>
> >> Testing shows that optimal buffer size is 512k to 4 MiB, depending on
> >> the test case. The difference is very small, so I chose 2 MiB.
> >>
> >> I tested reading from qemu-nbd and writing to qemu-nbd with qemu-img and
> >> computing a blkhash with nbdcopy and blksum.
> >>
> >> To focus on NBD communication and get less noisy results, I tested
> >> reading and writing to null-co driver. I added a read-pattern option to
> >> the null-co driver to return data full of 0xff:
> >>
> >> NULL="json:{'driver': 'raw', 'file': {'driver': 'null-co', 'size': '10g', 'read-pattern': -1}}"
> >>
> >> For testing buffer size I added an environment variable for setting the
> >> socket buffer size.
> >>
> >> Read from qemu-nbd via qemu-img convert. In this test buffer size of 2m
> >> is optimal (12.6 times faster).
> >>
> >> qemu-nbd -r -t -e 0 -f raw -k /tmp/nbd.sock "$NULL" &
> >> qemu-img convert -f raw -O raw -W -n "nbd+unix:///?socket=/tmp/nbd.sock" "$NULL"
> >>
> >> | buffer size | time | user | system |
> >> |-------------|---------|---------|---------|
> >> | default | 13.361 | 2.653 | 5.702 |
> >> | 65536 | 2.283 | 0.204 | 1.318 |
> >> | 131072 | 1.673 | 0.062 | 1.008 |
> >> | 262144 | 1.592 | 0.053 | 0.952 |
> >> | 524288 | 1.496 | 0.049 | 0.887 |
> >> | 1048576 | 1.234 | 0.047 | 0.738 |
> >> | 2097152 | 1.060 | 0.080 | 0.602 |
> >> | 4194304 | 1.061 | 0.076 | 0.604 |
> >>
> >> Write to qemu-nbd with qemu-img convert. In this test buffer size of 2m
> >> is optimal (9.2 times faster).
> >>
> >> qemu-nbd -t -e 0 -f raw -k /tmp/nbd.sock "$NULL" &
> >> qemu-img convert -f raw -O raw -W -n "$NULL" "nbd+unix:///?socket=/tmp/nbd.sock"
> >>
> >> | buffer size | time | user | system |
> >> |-------------|---------|---------|---------|
> >> | default | 8.063 | 2.522 | 4.184 |
> >> | 65536 | 1.472 | 0.430 | 0.867 |
> >> | 131072 | 1.071 | 0.297 | 0.654 |
> >> | 262144 | 1.012 | 0.239 | 0.587 |
> >> | 524288 | 0.970 | 0.201 | 0.514 |
> >> | 1048576 | 0.895 | 0.184 | 0.454 |
> >> | 2097152 | 0.877 | 0.174 | 0.440 |
> >> | 4194304 | 0.944 | 0.231 | 0.535 |
> >>
> >> Compute a blkhash with nbdcopy, using 4 NBD connections and 256k request
> >> size. In this test buffer size of 4m is optimal (5.1 times faster).
> >>
> >> qemu-nbd -r -t -e 0 -f raw -k /tmp/nbd.sock "$NULL" &
> >> nbdcopy --blkhash "nbd+unix:///?socket=/tmp/nbd.sock" null:
> >>
> >> | buffer size | time | user | system |
> >> |-------------|---------|---------|---------|
> >> | default | 8.624 | 5.727 | 6.507 |
> >> | 65536 | 2.563 | 4.760 | 2.498 |
> >> | 131072 | 1.903 | 4.559 | 2.093 |
> >> | 262144 | 1.759 | 4.513 | 1.935 |
> >> | 524288 | 1.729 | 4.489 | 1.924 |
> >> | 1048576 | 1.696 | 4.479 | 1.884 |
> >> | 2097152 | 1.710 | 4.480 | 1.763 |
> >> | 4194304 | 1.687 | 4.479 | 1.712 |
> >>
> >> Compute a blkhash with blksum, using 1 NBD connection and 256k read
> >> size. In this test buffer size of 512k is optimal (10.3 times faster).
> >>
> >> qemu-nbd -r -t -e 0 -f raw -k /tmp/nbd.sock "$NULL" &
> >> blksum "nbd+unix:///?socket=/tmp/nbd.sock"
> >>
> >> | buffer size | time | user | system |
> >> |-------------|---------|---------|---------|
> >> | default | 13.085 | 5.664 | 6.461 |
> >> | 65536 | 3.299 | 5.106 | 2.515 |
> >> | 131072 | 2.396 | 4.989 | 2.069 |
> >> | 262144 | 1.607 | 4.724 | 1.555 |
> >> | 524288 | 1.271 | 4.528 | 1.224 |
> >> | 1048576 | 1.294 | 4.565 | 1.333 |
> >> | 2097152 | 1.299 | 4.569 | 1.344 |
> >> | 4194304 | 1.291 | 4.559 | 1.327 |
> >>
> >> Signed-off-by: Nir Soffer <nirsof@gmail.com>
> >> ---
> >> io/channel-socket.c | 32 ++++++++++++++++++++++++++++++++
> >> 1 file changed, 32 insertions(+)
> >>
> >> diff --git a/io/channel-socket.c b/io/channel-socket.c
> >> index 608bcf066e..06901ab694 100644
> >> --- a/io/channel-socket.c
> >> +++ b/io/channel-socket.c
> >> @@ -21,6 +21,7 @@
> >> #include "qapi/error.h"
> >> #include "qapi/qapi-visit-sockets.h"
> >> #include "qemu/module.h"
> >> +#include "qemu/units.h"
> >> #include "io/channel-socket.h"
> >> #include "io/channel-util.h"
> >> #include "io/channel-watch.h"
> >> @@ -37,6 +38,33 @@
> >>
> >> #define SOCKET_MAX_FDS 16
> >>
> >> +/*
> >> + * Testing shows that 2m send buffer gives best throuput and lowest cpu usage.
> >> + * Changing the receive buffer size has no effect on performance.
> >> + */
> >> +#ifdef __APPLE__
> >> +#define UNIX_STREAM_SOCKET_SEND_BUFFER_SIZE (2 * MiB)
> >> +#endif /* __APPLE__ */
> >> +
> >> +static void qio_channel_socket_set_buffers(QIOChannelSocket *ioc)
> >> +{
> >> + if (ioc->localAddr.ss_family == AF_UNIX) {
> >> + int type;
> >> + socklen_t type_len = sizeof(type);
> >> +
> >> + if (getsockopt(ioc->fd, SOL_SOCKET, SO_TYPE, &type, &type_len) == -1) {
> >> + return;
> >> + }
> >> +
> >> +#ifdef UNIX_STREAM_SOCKET_SEND_BUFFER_SIZE
> >> + if (type == SOCK_STREAM) {
> >> + const int value = UNIX_STREAM_SOCKET_SEND_BUFFER_SIZE;
> >> + setsockopt(ioc->fd, SOL_SOCKET, SO_SNDBUF, &value, sizeof(value));
> >> + }
> >> +#endif /* UNIX_STREAM_SOCKET_SEND_BUFFER_SIZE */
> >> + }
> >> +}
> >
> > While I'm not doubting your benchmark results, I'm a little uneasy about
> > setting this unconditionally for *all* UNIX sockets QEMU creates. The
> > benchmarks show NBD benefits from this, but I'm not convinced that all
> > the other scenarios QEMU creates UNIX sockets for justify it.
> >
> > On Linux, whatever value you set with SO_SNDBUF appears to get doubled
> > internally by the kernel.
> >
> > IOW, this is adding 4 MB fixed overhead for every UNIX socket that
> > QEMU creates. It doesn't take many UNIX sockets in QEMU for that to
> > become a significant amount of extra memory overhead on a host.
> >
> > I'm thinking we might be better with a helper
> >
> > qio_channel_socket_set_send_buffer(QIOChannelSocket *ioc, size_t size)
> >
> > that we call from the NBD code, to limit the impact. Also I think this
> > helper ought not to filter on AF_UNIX - the caller can see the socket
> > type via qio_channel_socket_get_local_address if it does not already
> > have a record of the address, and selectively set the buffer size.
>
> So you suggest to move also UNIX_STREAM_SOCKET_SEND_BUFFER_SIZE to nbd?
>
> If we use this only for nbd this is fine, but once we add another caller we will
> to duplicate the code selecting the right size for the OS. But I guess we can
> reconsider this when have this problem.
Yeah, let's worry about that aspect another day and focus on NBD.
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|