From mboxrd@z Thu Jan 1 00:00:00 1970
From: David Woodhouse <dwmw2@infradead.org>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini, Paul Durrant, Joao Martins, Ankur Arora,
	Philippe Mathieu-Daudé, Thomas Huth, Alex Bennée, Juan Quintela,
	"Dr. David Alan Gilbert", Claudio Fontana, Julien Grall
Subject: [PATCH v6 51/51] hw/xen: Add basic ring handling to xenstore
Date: Tue, 10 Jan 2023 12:20:42 +0000
Message-Id: <20230110122042.1562155-52-dwmw2@infradead.org>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230110122042.1562155-1-dwmw2@infradead.org>
References: <20230110122042.1562155-1-dwmw2@infradead.org>

From: David Woodhouse <dwmw2@infradead.org>

Extract requests and return ENOSYS to all of them. This is enough to
allow older Linux guests to boot, as they need *something* back but it
doesn't matter much what.

In the first instance we're likely to wire this up over a UNIX socket
to an actual xenstored implementation, but in the fullness of time it
would be nice to have a fully single-tenant "virtual" xenstore within
qemu.

Signed-off-by: David Woodhouse <dwmw2@infradead.org>
---
 hw/i386/kvm/xen_xenstore.c | 223 ++++++++++++++++++++++++++++++++++++-
 1 file changed, 220 insertions(+), 3 deletions(-)

diff --git a/hw/i386/kvm/xen_xenstore.c b/hw/i386/kvm/xen_xenstore.c
index bb3346f4e3..6369e29f59 100644
--- a/hw/i386/kvm/xen_xenstore.c
+++ b/hw/i386/kvm/xen_xenstore.c
@@ -188,18 +188,235 @@ uint16_t xen_xenstore_get_port(void)
     return s->guest_port;
 }
 
+static bool req_pending(XenXenstoreState *s)
+{
+    struct xsd_sockmsg *req = (struct xsd_sockmsg *)s->req_data;
+
+    return s->req_offset == XENSTORE_HEADER_SIZE + req->len;
+}
+
+static void reset_req(XenXenstoreState *s)
+{
+    memset(s->req_data, 0, sizeof(s->req_data));
+    s->req_offset = 0;
+}
+
+static void reset_rsp(XenXenstoreState *s)
+{
+    s->rsp_pending = false;
+
+    memset(s->rsp_data, 0, sizeof(s->rsp_data));
+    s->rsp_offset = 0;
+}
+
+static void process_req(XenXenstoreState *s)
+{
+    struct xsd_sockmsg *req = (struct xsd_sockmsg *)s->req_data;
+    struct xsd_sockmsg *rsp = (struct xsd_sockmsg *)s->rsp_data;
+    const char enosys[] = "ENOSYS";
+
+    assert(req_pending(s));
+    assert(!s->rsp_pending);
+
+    rsp->type = XS_ERROR;
+    rsp->req_id = req->req_id;
+    rsp->tx_id = req->tx_id;
+    rsp->len = sizeof(enosys);
+    memcpy((void *)&rsp[1], enosys, sizeof(enosys));
+
+    s->rsp_pending = true;
+    reset_req(s);
+}
+
+static unsigned int copy_from_ring(XenXenstoreState *s, uint8_t *ptr,
+                                   unsigned int len)
+{
+    if (!len)
+        return 0;
+
+    XENSTORE_RING_IDX prod = qatomic_read(&s->xs->req_prod);
+    XENSTORE_RING_IDX cons = qatomic_read(&s->xs->req_cons);
+    unsigned int copied = 0;
+
+    smp_mb();
+
+    while (len) {
+        unsigned int avail = prod - cons;
+        unsigned int offset = MASK_XENSTORE_IDX(cons);
+        unsigned int copylen = avail;
+
+        if (avail > XENSTORE_RING_SIZE) {
+            error_report("XenStore ring handling error");
+            s->fatal_error = true;
+            break;
+        } else if (avail == 0)
+            break;
+
+        if (copylen > len) {
+            copylen = len;
+        }
+        if (copylen > XENSTORE_RING_SIZE - offset) {
+            copylen = XENSTORE_RING_SIZE - offset;
+        }
+
+        memcpy(ptr, &s->xs->req[offset], copylen);
+        copied += copylen;
+
+        ptr += copylen;
+        len -= copylen;
+
+        cons += copylen;
+    }
+
+    smp_mb();
+
+    qatomic_set(&s->xs->req_cons, cons);
+
+    return copied;
+}
+
+static unsigned int copy_to_ring(XenXenstoreState *s, uint8_t *ptr,
+                                 unsigned int len)
+{
+    if (!len)
+        return 0;
+
+    XENSTORE_RING_IDX cons = qatomic_read(&s->xs->rsp_cons);
+    XENSTORE_RING_IDX prod = qatomic_read(&s->xs->rsp_prod);
+    unsigned int copied = 0;
+
+    smp_mb();
+
+    while (len) {
+        unsigned int avail = cons + XENSTORE_RING_SIZE - prod;
+        unsigned int offset = MASK_XENSTORE_IDX(prod);
+        unsigned int copylen = len;
+
+        if (avail > XENSTORE_RING_SIZE) {
+            error_report("XenStore ring handling error");
+            s->fatal_error = true;
+            break;
+        } else if (avail == 0)
+            break;
+
+        if (copylen > avail) {
+            copylen = avail;
+        }
+        if (copylen > XENSTORE_RING_SIZE - offset) {
+            copylen = XENSTORE_RING_SIZE - offset;
+        }
+
+        memcpy(&s->xs->rsp[offset], ptr, copylen);
+        copied += copylen;
+
+        ptr += copylen;
+        len -= copylen;
+
+        prod += copylen;
+    }
+
+    smp_mb();
+
+    qatomic_set(&s->xs->rsp_prod, prod);
+
+    return copied;
+}
+
+static unsigned int get_req(XenXenstoreState *s)
+{
+    unsigned int copied = 0;
+
+    if (s->fatal_error)
+        return 0;
+
+    assert(!req_pending(s));
+
+    if (s->req_offset < XENSTORE_HEADER_SIZE) {
+        void *ptr = s->req_data + s->req_offset;
+        unsigned int len = XENSTORE_HEADER_SIZE - s->req_offset;
+        unsigned int copylen = copy_from_ring(s, ptr, len);
+
+        copied += copylen;
+        s->req_offset += copylen;
+    }
+
+    if (s->req_offset >= XENSTORE_HEADER_SIZE) {
+        struct xsd_sockmsg *req = (struct xsd_sockmsg *)s->req_data;
+
+        if (req->len > (uint32_t)XENSTORE_PAYLOAD_MAX) {
+            error_report("Illegal XenStore request");
+            s->fatal_error = true;
+            return 0;
+        }
+
+        void *ptr = s->req_data + s->req_offset;
+        unsigned int len = XENSTORE_HEADER_SIZE + req->len - s->req_offset;
+        unsigned int copylen = copy_from_ring(s, ptr, len);
+
+        copied += copylen;
+        s->req_offset += copylen;
+    }
+
+    return copied;
+}
+
+static unsigned int put_rsp(XenXenstoreState *s)
+{
+    if (s->fatal_error)
+        return 0;
+
+    assert(s->rsp_pending);
+
+    struct xsd_sockmsg *rsp = (struct xsd_sockmsg *)s->rsp_data;
+    assert(s->rsp_offset < XENSTORE_HEADER_SIZE + rsp->len);
+
+    void *ptr = s->rsp_data + s->rsp_offset;
+    unsigned int len = XENSTORE_HEADER_SIZE + rsp->len - s->rsp_offset;
+    unsigned int copylen = copy_to_ring(s, ptr, len);
+
+    s->rsp_offset += copylen;
+
+    /* Have we produced a complete response? */
+    if (s->rsp_offset == XENSTORE_HEADER_SIZE + rsp->len)
+        reset_rsp(s);
+
+    return copylen;
+}
+
 static void xen_xenstore_event(void *opaque)
 {
     XenXenstoreState *s = opaque;
     evtchn_port_t port = xen_be_evtchn_pending(s->eh);
+    unsigned int copied_to, copied_from;
+    bool processed, notify = false;
+
     if (port != s->be_port) {
         return;
     }
-    printf("xenstore event\n");
+    /* We know this is a no-op. */
     xen_be_evtchn_unmask(s->eh, port);
-    qemu_hexdump(stdout, "", s->xs, sizeof(*s->xs));
-    xen_be_evtchn_notify(s->eh, s->be_port);
+
+    do {
+        copied_to = copied_from = 0;
+        processed = false;
+
+        if (s->rsp_pending)
+            copied_to = put_rsp(s);
+
+        if (!req_pending(s))
+            copied_from = get_req(s);
+
+        if (req_pending(s) && !s->rsp_pending) {
+            process_req(s);
+            processed = true;
+        }
+
+        notify |= copied_to || copied_from;
+    } while (copied_to || copied_from || processed);
+
+    if (notify) {
+        xen_be_evtchn_notify(s->eh, s->be_port);
+    }
 }
 
 static void alloc_guest_port(XenXenstoreState *s)
-- 
2.35.3
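
[Editor's illustration, not part of the patch.] For context, this is what the
wire exchange handled by process_req() looks like from the guest side. The
struct layout and the XS_READ/XS_ERROR message types follow Xen's public
xen/io/xs_wire.h; the standalone defines, the "domid" path, and the flat
request/response buffers (ring plumbing elided) are invented for the example.

/*
 * Illustrative sketch: build an XS_READ request and the XS_ERROR("ENOSYS")
 * reply that this version of the backend produces for *any* request.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct xsd_sockmsg {            /* mirrors xen/io/xs_wire.h */
    uint32_t type;              /* XS_READ, XS_ERROR, ... */
    uint32_t req_id;            /* echoed back in the response */
    uint32_t tx_id;             /* transaction id, 0 for none */
    uint32_t len;               /* payload bytes following this header */
};

#define XS_READ  2
#define XS_ERROR 16

int main(void)
{
    struct {
        struct xsd_sockmsg hdr;
        char payload[32];
    } req, rsp;

    /* Guest -> backend: XS_READ "domid"; the payload includes the NUL. */
    req.hdr = (struct xsd_sockmsg) {
        .type = XS_READ, .req_id = 1, .tx_id = 0, .len = sizeof("domid"),
    };
    memcpy(req.payload, "domid", sizeof("domid"));

    /* Backend -> guest: what process_req() above hands back. */
    rsp.hdr = (struct xsd_sockmsg) {
        .type = XS_ERROR, .req_id = req.hdr.req_id, .tx_id = req.hdr.tx_id,
        .len = sizeof("ENOSYS"),
    };
    memcpy(rsp.payload, "ENOSYS", sizeof("ENOSYS"));

    if (rsp.hdr.type == XS_ERROR) {
        printf("req %" PRIu32 " failed: %s\n", rsp.hdr.req_id, rsp.payload);
    }
    return 0;
}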
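
[Editor's illustration, not part of the patch.] The subtle part of
copy_from_ring()/copy_to_ring() is that req_prod/req_cons and
rsp_prod/rsp_cons are free-running 32-bit indices: they are only reduced
modulo the ring size when indexing the buffer, so "prod - cons" (bytes
queued) and "cons + XENSTORE_RING_SIZE - prod" (space free) stay correct
across unsigned wraparound. A standalone demo of that arithmetic; RING_SIZE
and MASK_IDX are stand-ins for XENSTORE_RING_SIZE (1024, a power of two)
and MASK_XENSTORE_IDX:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 1024u
#define MASK_IDX(idx) ((idx) & (RING_SIZE - 1))

int main(void)
{
    /* Indices straddling the 2^32 wrap: prod has wrapped, cons has not. */
    uint32_t cons = UINT32_C(0xfffffff0);
    uint32_t prod = UINT32_C(0x00000010);

    uint32_t queued = prod - cons;              /* bytes a consumer may read */
    uint32_t space  = cons + RING_SIZE - prod;  /* bytes a producer may write */

    printf("queued=%" PRIu32 " space=%" PRIu32 " offset=%" PRIu32 "\n",
           queued, space, MASK_IDX(prod));
    /*
     * Prints: queued=32 space=992 offset=16.  queued + space is always
     * RING_SIZE for sane indices, which is exactly the invariant the
     * "avail > XENSTORE_RING_SIZE" sanity checks above rely on.
     */
    return 0;
}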