qemu-devel.nongnu.org archive mirror
From: David Marchand <david.marchand@6wind.com>
To: Gonglei <arei.gonglei@hotmail.com>, qemu-devel@nongnu.org
Cc: 'Olivier Matz' <olivier.matz@6wind.com>,
	kvm@vger.kernel.org, claudio.fontana@huawei.com,
	armbru@redhat.com, arei.gonglei@huawei.com, pbonzini@redhat.com,
	jani.kokkonen@huawei.com, cam@cs.ualberta.ca
Subject: Re: [Qemu-devel] [PATCH v3 1/2] contrib: add ivshmem client and server
Date: Mon, 18 Aug 2014 14:19:52 +0200	[thread overview]
Message-ID: <53F1EF68.9090408@6wind.com> (raw)
In-Reply-To: <BLU436-SMTP5439D90EA992CE4FBD22699FEC0@phx.gbl>

On 08/10/2014 05:57 AM, Gonglei wrote:
>> +    /* can return a peer or the local client */
>> +    peer = ivshmem_client_search_peer(client, peer_id);
>> +
>> +    /* delete peer */
>> +    if (fd == -1) {
>> +
> Maybe the above check should be moved before getting the peer.
> And the next check peer is extra.

We always need to know the peer, either for a deletion, creation or update.
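To illustrate why the lookup comes first, here is a hypothetical, self-contained sketch of the dispatch. The struct layout and helper names are stand-ins, not the actual patch code, and the real handler additionally rejects deleting the local client:

```c
#include <assert.h>
#include <stddef.h>

/* stand-in for the ivshmem client peer structure */
struct peer {
    long id;
    struct peer *next;
};

/* stand-in for ivshmem_client_search_peer(): walk a peer list */
static struct peer *search_peer(struct peer *head, long id)
{
    for (; head != NULL; head = head->next) {
        if (head->id == id) {
            return head;
        }
    }
    return NULL;
}

enum action { DELETE_PEER, NEW_PEER, UPDATE_PEER, INVALID };

/* classify an incoming (peer_id, fd) message the way the handler does:
 * fd == -1 deletes an existing peer, an unknown id with a valid fd
 * creates one, a known id with a valid fd updates one; every branch
 * consults the lookup result, so the lookup cannot move after the
 * fd check */
static enum action classify(struct peer *head, long peer_id, int fd)
{
    struct peer *peer = search_peer(head, peer_id);

    if (fd == -1) {
        return peer ? DELETE_PEER : INVALID;
    }
    return peer ? UPDATE_PEER : NEW_PEER;
}
```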


>
>> +        if (peer == NULL || peer == &client->local) {
>> +            debug_log(client, "receive delete for invalid peer %ld",
>
> Missing '\n' ?

Ok.

>
>> +                      peer_id);
>> +            return -1;
>> +        }
>> +
>> +        debug_log(client, "delete peer id = %ld\n", peer_id);
>> +        free_peer(client, peer);
>> +        return 0;
>> +    }
>> +
>> +    /* new peer */
>> +    if (peer == NULL) {
>> +        peer = malloc(sizeof(*peer));
>
> g_malloc0?

Ok, replaced malloc/free with g_malloc/g_free in client and server.


>> +    client->notif_cb = notif_cb;
>> +    client->notif_arg = notif_arg;
>> +    client->verbose = verbose;
>
> Missing client->sock_fd = -1; ?

Ok.


>> +
>> +    /* first, we expect our index + a fd == -1 */
>> +    if (read_one_msg(client, &client->local.id, &fd) < 0 ||
>> +        client->local.id < 0 || fd != -1) {
> Why not fd < 0 ?

Because the server will send us an id and a fd == -1; see
send_initial_info() in ivshmem-server.c.
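The protocol constraint can be written as two small predicates; this is a sketch with assumed message fields, not the patch code:

```c
#include <assert.h>

/* hypothetical predicates mirroring the two-step handshake the client
 * expects from the server: message 1 carries our id with fd == -1,
 * message 2 carries the shared memory fd with id == -1 */
static int valid_id_msg(long id, int fd)
{
    /* fd must be exactly -1 here, not merely negative: the server
     * deliberately sends -1 as a "no fd attached" marker */
    return id >= 0 && fd == -1;
}

static int valid_shm_msg(long id, int fd)
{
    return id == -1 && fd >= 0;
}
```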

>
>> +        debug_log(client, "cannot read from server\n");
>> +        close(client->sock_fd);
>> +        client->sock_fd = -1;
>> +        return -1;
>> +    }
>> +    debug_log(client, "our_id=%ld\n", client->local.id);
>> +
>> +    /* now, we expect shared mem fd + a -1 index, note that shm fd
>> +     * is not used */
>> +    if (read_one_msg(client, &tmp, &fd) < 0 ||
>> +        tmp != -1 || fd < 0) {
>> +        debug_log(client, "cannot read from server (2)\n");
>> +        close(client->sock_fd);
>> +        client->sock_fd = -1;
>> +        return -1;
> I think the error handling logic can move to the end of this function,
> reducing duplicated code. Something like this:
>
>         goto err;
>     }
> err:
>     debug_log(client, "cannot read from server (2)\n");
>     close(client->sock_fd);
>     client->sock_fd = -1;
>     return -1;

Ok, I also updated the server.
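The resulting shape is roughly the following. This is a simplified, testable sketch: the message reads are replaced by plain parameters and the close() call by a state reset, so names and types are stand-ins:

```c
#include <assert.h>

/* simplified stand-in for the client structure */
struct client {
    int sock_fd;
    long local_id;
};

/* sketch of the consolidated error path: every failing check jumps to
 * one err label, which resets the socket state and reports failure
 * (the real code would also close() the descriptor there) */
static int init_from_msgs(struct client *c, long id1, int fd1,
                          long id2, int fd2)
{
    /* first message: our id, with fd == -1 */
    if (id1 < 0 || fd1 != -1) {
        goto err;
    }
    c->local_id = id1;

    /* second message: the shm fd, with id == -1 */
    if (id2 != -1 || fd2 < 0) {
        goto err;
    }
    return 0;

err:
    c->sock_fd = -1;
    return -1;
}
```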

>> +    int fd;
>> +
>> +    if (vector > peer->vectors_count) {
>
> Maybe if (vector >= peer->vectors_count), otherwise the access goes
> beyond the peer->vectors[] array bounds.

Oh yes, good catch.
It should not happen at the moment, but it is indeed wrong.
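In isolation, the corrected check looks like this (simplified stand-in struct, not the patch code):

```c
#include <assert.h>

/* stand-in for the peer structure */
struct peer {
    int vectors[8];
    unsigned vectors_count;
};

/* corrected bounds check: vector == vectors_count already indexes one
 * past the end of peer->vectors[], so reject with >= rather than > */
static int vector_in_range(const struct peer *p, unsigned vector)
{
    return vector < p->vectors_count;
}
```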

>> +/* send a notification to all vectors of a peer */
>> +int
>> +ivshmem_client_notify_all_vects(const struct ivshmem_client *client,
>> +                                const struct ivshmem_client_peer *peer)
>> +{
>> +    unsigned vector;
>> +    int ret = 0;
>> +
>> +    for (vector = 0; vector < peer->vectors_count; vector++) {
>> +        if (ivshmem_client_notify(client, peer, vector) < 0) {
>> +            ret = -1;
> The ret value will be overwritten when multiple clients fail. Do we
> need to store the failed status for the server?

It indicates that we could not notify *all* clients.
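That aggregate semantic can be sketched as follows; the per-vector notify is replaced by a scripted stub so the behavior is observable (names are hypothetical):

```c
#include <assert.h>

struct peer {
    unsigned vectors_count;
    unsigned failing_vector;
};

/* hypothetical stand-in for ivshmem_client_notify(): fails on one
 * scripted vector so the aggregate result can be exercised */
static int notify_one(const struct peer *p, unsigned vector)
{
    return vector == p->failing_vector ? -1 : 0;
}

/* mirror of notify_all_vects: keep notifying even after a failure,
 * but remember that at least one vector could not be reached; the
 * return value means "all vectors notified" (0) or not (-1) */
static int notify_all(const struct peer *p)
{
    unsigned vector;
    int ret = 0;

    for (vector = 0; vector < p->vectors_count; vector++) {
        if (notify_one(p, vector) < 0) {
            ret = -1;
        }
    }
    return ret;
}
```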



Thanks Gonglei.


-- 
David Marchand

Thread overview: 18+ messages
2014-08-08  8:55 [Qemu-devel] [PATCH v3 0/2] ivshmem: update documentation, add client/server tools David Marchand
2014-08-08  8:55 ` [Qemu-devel] [PATCH v3 1/2] contrib: add ivshmem client and server David Marchand
2014-08-08 14:51   ` Stefan Hajnoczi
2014-08-18 12:09     ` David Marchand
2014-08-10  3:57   ` Gonglei
2014-08-18 12:19     ` David Marchand [this message]
2014-08-08  8:55 ` [Qemu-devel] [PATCH v3 2/2] docs: update ivshmem device spec David Marchand
2014-08-08  9:04   ` Claudio Fontana
2014-08-08  9:32     ` David Marchand
2014-08-08 15:02   ` Stefan Hajnoczi
2014-08-26  6:47     ` David Marchand
2014-08-26 11:04       ` Paolo Bonzini
2014-08-28  9:49         ` Stefan Hajnoczi
2014-09-01  9:52           ` David Marchand
2014-09-09 19:04             ` Eric Blake
2014-08-08  9:30 ` [Qemu-devel] [PATCH v3 0/2] ivshmem: update documentation, add client/server tools Gonglei (Arei)
2014-08-08  9:54   ` David Marchand
2014-08-08 10:26     ` Gonglei (Arei)
