From: Cornelia Huck <cohuck@redhat.com>
To: Halil Pasic <pasic@linux.ibm.com>
Cc: Eric Farman <farman@linux.ibm.com>,
	Farhan Ali <alifm@linux.ibm.com>,
	Pierre Morel <pmorel@linux.ibm.com>,
	linux-s390@vger.kernel.org, kvm@vger.kernel.org,
	qemu-devel@nongnu.org, qemu-s390x@nongnu.org,
	Alex Williamson <alex.williamson@redhat.com>
Subject: Re: [Qemu-devel] [PATCH v2 2/5] vfio-ccw: concurrent I/O handling
Date: Mon, 28 Jan 2019 18:09:48 +0100
Message-ID: <20190128180948.506a9695.cohuck@redhat.com>
In-Reply-To: <20190125150101.3b61f0a1@oc2783563651>

On Fri, 25 Jan 2019 15:01:01 +0100
Halil Pasic <pasic@linux.ibm.com> wrote:

> On Fri, 25 Jan 2019 13:58:35 +0100
> Cornelia Huck <cohuck@redhat.com> wrote:

> > - The code should not be interrupted while we process the channel
> >   program, do the ssch etc. We want the caller to try again later (i.e.
> >   return -EAGAIN)  

(...)

> > - With the async interface, we want user space to be able to submit a
> >   halt/clear while a start request is still in flight, but not while
> >   we're processing a start request with translation etc. We probably
> >   want to do -EAGAIN in that case.  
> 
> This reads very similar to your first point.

Not quite. ssch() means that we have a cp around; for hsch()/csch() we
don't have such a thing. So we want to protect the process of
translating the cp etc., but we don't need such protection for the
halt/clear processing.
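
To make the asymmetry concrete, here is a minimal, self-contained
sketch (all struct, helper and function names below are invented for
illustration; this is not the actual vfio-ccw code): the start path
has a channel program to translate before the ssch, while the
halt/clear path has nothing comparable to protect.

struct dev;	/* stands in for the per-device private data */

/* illustrative stubs only */
static int translate_cp(struct dev *d) { return 0; }	/* pin/translate the guest cp */
static int issue_ssch(struct dev *d)   { return 0; }	/* start the channel program */
static int issue_hsch(struct dev *d)   { return 0; }	/* halt (or clear) the subchannel */

/* start request: cp translation plus the ssch form one critical
 * section; concurrent requests get -EAGAIN while we are in here */
static int handle_io_request(struct dev *d)
{
	int ret = translate_cp(d);

	if (ret)
		return ret;
	return issue_ssch(d);
}

/* halt/clear request: no channel program around, so there is no
 * corresponding translation window that needs protecting */
static int handle_async_request(struct dev *d)
{
	return issue_hsch(d);
}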

> 
> > 
> > My idea would be:
> > 
> > - The BUSY state denotes "I'm busy processing a request right now, try
> >   again". We hold it while processing the cp and doing the ssch and
> >   leave it afterwards (i.e., while the start request is processed by
> >   the hardware). I/O requests and async requests get -EAGAIN in that
> >   state.
> > - A new state (CP_PENDING?) is entered after ssch returned with cc 0
> >   (from the BUSY state). We stay in there as long as no final state for
> >   that request has been received and delivered. (This may be final
> >   interrupt for that request, a deferred cc, or successful halt/clear.)
> >   I/O requests get -EBUSY, async requests are processed. This state can
> >   be removed again once we are able to handle more than one outstanding
> >   cp.
> > 
> > Does that make sense?
> >   
> 
> AFAIU your idea is to split up the busy state into two states: CP_PENDING
> and busy-without-CP_PENDING, called BUSY. I like the idea of having a
> separate state for CP_PENDING, but I don't like the new semantics of BUSY.
> 
> Hm, mashing together the conceptual state machine and the jumptable stuff
> isn't making reasoning about this any simpler either. I'm talking about
> the conceptual state machine. It would be nice to have a picture of it and
> then think about how to express that in code.

Sorry, I'm having a hard time parsing your comments. Are you looking
for something like the below?

IDLE --- IO_REQ ---> BUSY ---> CP_PENDING --- IRQ ---> IDLE (if final
state for the I/O)
(the normal ssch path)

BUSY --- IO_REQ ---> return -EAGAIN, stay in BUSY
(user space is supposed to retry, as we'll eventually progress from
BUSY)

CP_PENDING --- IO_REQ ---> return -EBUSY, stay in CP_PENDING
(user space is supposed to map this to the appropriate cc for the guest)

IDLE --- ASYNC_REQ ---> IDLE
(user space is welcome to do anything else right away)

BUSY --- ASYNC_REQ ---> return -EAGAIN, stay in BUSY
(user space is supposed to retry, as above)

CP_PENDING --- ASYNC_REQ ---> return success, stay in CP_PENDING
(the interrupt will get us out of CP_PENDING eventually)
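
Written out as (again, purely illustrative) code, assuming an enum for
the three states and errno-style return values -- this is just the
table above spelled out, not the fsm jumptable from the patch:

#include <errno.h>

enum state { IDLE, BUSY, CP_PENDING };
enum event { IO_REQ, ASYNC_REQ, IRQ_FINAL };

/* returns the value handed back to user space (0 = accepted) and
 * updates *s according to the transitions listed above */
static int handle_event(enum state *s, enum event ev)
{
	switch (ev) {
	case IO_REQ:
		if (*s == BUSY)
			return -EAGAIN;	/* retry; we'll leave BUSY eventually */
		if (*s == CP_PENDING)
			return -EBUSY;	/* user space maps this to a cc */
		*s = BUSY;
		/* translate the cp, do the ssch; on cc 0: */
		*s = CP_PENDING;
		return 0;
	case ASYNC_REQ:
		if (*s == BUSY)
			return -EAGAIN;	/* retry, as above */
		/* IDLE or CP_PENDING: hsch/csch can be issued right away */
		return 0;
	case IRQ_FINAL:
		/* final interrupt, deferred cc, or successful halt/clear */
		if (*s == CP_PENDING)
			*s = IDLE;
		return 0;
	}
	return -EINVAL;
}

(The real code would fold this into the existing jumptable rather than
an open-coded switch, and would of course issue the actual
ssch/hsch/csch; the sketch only shows the state handling and return
values.)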

