From: Matthew Rosato <mjrosato@linux.ibm.com>
To: Thomas Huth <thuth@redhat.com>,
alex.williamson@redhat.com, cohuck@redhat.com
Cc: pmorel@linux.ibm.com, schnelle@linux.ibm.com, david@redhat.com,
qemu-devel@nongnu.org, pasic@linux.ibm.com,
borntraeger@de.ibm.com, qemu-s390x@nongnu.org, rth@twiddle.net
Subject: Re: [PATCH v2 2/3] s390x/pci: Honor DMA limits set by vfio
Date: Tue, 15 Sep 2020 10:18:55 -0400
Message-ID: <bb684ffd-1f12-f6a1-ea5b-3973faaa2037@linux.ibm.com>
In-Reply-To: <6d835b47-5935-8eb7-f0f7-d81f0cec4028@redhat.com>
On 9/15/20 8:54 AM, Thomas Huth wrote:
> On 15/09/2020 00.29, Matthew Rosato wrote:
>> When an s390 guest is using lazy unmapping, it can result in a very
>> large number of outstanding DMA requests, far beyond the default
>> limit configured for vfio. Let's track DMA usage similarly to vfio
>> in the host, and trigger the guest to flush its DMA mappings
>> before vfio runs out.
>>
>> Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com>
>> ---
>> hw/s390x/s390-pci-bus.c | 99 +++++++++++++++++++++++++++++++++++++++++++++---
>> hw/s390x/s390-pci-bus.h | 9 +++++
>> hw/s390x/s390-pci-inst.c | 29 +++++++++++---
>> hw/s390x/s390-pci-inst.h | 3 ++
>> 4 files changed, 129 insertions(+), 11 deletions(-)
>>
>> diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
>> index 92146a2..23474cd 100644
>> --- a/hw/s390x/s390-pci-bus.c
>> +++ b/hw/s390x/s390-pci-bus.c
>> @@ -11,6 +11,8 @@
>> * directory.
>> */
>>
>> +#include <sys/ioctl.h>
>> +
>> #include "qemu/osdep.h"
>> #include "qapi/error.h"
>> #include "qapi/visitor.h"
>> @@ -24,6 +26,9 @@
>> #include "qemu/error-report.h"
>> #include "qemu/module.h"
>>
>> +#include "hw/vfio/pci.h"
>> +#include "hw/vfio/vfio-common.h"
>> +
>> #ifndef DEBUG_S390PCI_BUS
>> #define DEBUG_S390PCI_BUS 0
>> #endif
>> @@ -737,6 +742,82 @@ static void s390_pci_iommu_free(S390pciState *s, PCIBus *bus, int32_t devfn)
>> object_unref(OBJECT(iommu));
>> }
>>
>> +static bool s390_sync_dma_avail(int fd, unsigned int *avail)
>> +{
>> + struct vfio_iommu_type1_info *info;
>
> You could use g_autofree to get rid of the g_free() at the end.
>
OK
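For reference, I'm thinking something like this (untested sketch, logic
otherwise unchanged; g_autofree just hangs a g_free() off the variable's
scope, and re-assignment via g_realloc is fine):

static bool s390_sync_dma_avail(int fd, unsigned int *avail)
{
    g_autofree struct vfio_iommu_type1_info *info = NULL;
    uint32_t argsz = sizeof(struct vfio_iommu_type1_info);
    int ret;

    if (avail == NULL) {
        return false;
    }

    info = g_malloc0(argsz);
    info->argsz = argsz;
    /*
     * If the specified argsz is not large enough to contain all
     * capabilities it will be updated upon return.  In this case
     * use the updated value to get the entire capability chain.
     */
    ret = ioctl(fd, VFIO_IOMMU_GET_INFO, info);
    if (argsz != info->argsz) {
        argsz = info->argsz;
        info = g_realloc(info, argsz);
        info->argsz = argsz;
        ret = ioctl(fd, VFIO_IOMMU_GET_INFO, info);
    }
    if (ret) {
        return false;   /* info is freed automatically on return */
    }

    /* If the capability exists, update with the current value */
    return vfio_get_info_dma_avail(info, avail);
}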
>> + uint32_t argsz;
>> + bool rval = false;
>> + int ret;
>> +
>> + if (avail == NULL) {
>> + return false;
>> + }
>
> Since this is a "static" local function, and calling it with avail ==
> NULL does not make too much sense, I think I'd rather turn this into an
> assert() instead.
>
Sure, sounds good.
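I.e. replacing the NULL check at the top of the function with:

    assert(avail);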
>> + argsz = sizeof(struct vfio_iommu_type1_info);
>> + info = g_malloc0(argsz);
>> + info->argsz = argsz;
>> + /*
>> + * If the specified argsz is not large enough to contain all
>> + * capabilities it will be updated upon return. In this case
>> + * use the updated value to get the entire capability chain.
>> + */
>> + ret = ioctl(fd, VFIO_IOMMU_GET_INFO, info);
>> + if (argsz != info->argsz) {
>> + argsz = info->argsz;
>> + info = g_realloc(info, argsz);
>> + info->argsz = argsz;
>> + ret = ioctl(fd, VFIO_IOMMU_GET_INFO, info);
>> + }
>> +
>> + if (ret) {
>> + goto out;
>> + }
>> +
>> + /* If the capability exists, update with the current value */
>> + rval = vfio_get_info_dma_avail(info, avail);
>> +
>> +out:
>> + g_free(info);
>> + return rval;
>> +}
>> +
>> +static S390PCIDMACount *s390_start_dma_count(S390pciState *s, VFIODevice *vdev)
>> +{
>> + int id = vdev->group->container->fd;
>> + S390PCIDMACount *cnt;
>> + uint32_t avail;
>> +
>> + if (!s390_sync_dma_avail(id, &avail)) {
>> + return NULL;
>> + }
>> +
>> + QTAILQ_FOREACH(cnt, &s->zpci_dma_limit, link) {
>> + if (cnt->id == id) {
>> + cnt->users++;
>> + return cnt;
>> + }
>> + }
>> +
>> + cnt = g_new0(S390PCIDMACount, 1);
>> + cnt->id = id;
>> + cnt->users = 1;
>> + cnt->avail = avail;
>> + QTAILQ_INSERT_TAIL(&s->zpci_dma_limit, cnt, link);
>> + return cnt;
>> +}
>> +
>> +static void s390_end_dma_count(S390pciState *s, S390PCIDMACount *cnt)
>> +{
>> + if (cnt == NULL) {
>> + return;
>> + }
>
> Either use assert() or drop this completely (since you're checking it at
> the caller site already).
>
Fair - I'll assert() here. Thanks!
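So for v3 this becomes (sketch; the remainder of the function isn't
quoted above and stays as-is):

static void s390_end_dma_count(S390pciState *s, S390PCIDMACount *cnt)
{
    assert(cnt);
    ...
}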