qemu-devel.nongnu.org archive mirror
From: Jan Kiszka <jan.kiszka@web.de>
To: qemu-devel@nongnu.org
Subject: [Qemu-devel] Re: [PATCH] fix gdbstub support for multiple threads in usermode
Date: Tue, 19 May 2009 18:53:44 +0200	[thread overview]
Message-ID: <4A12E418.4040403@web.de> (raw)
In-Reply-To: <20090519150636.GA23911@codesourcery.com>

Nathan Froyd wrote:
> On Tue, May 12, 2009 at 10:53:22PM +0200, Jan Kiszka wrote:
>> Nathan Froyd wrote:
>>> We fix this by adding a stable gdbstub_index field for each CPU; the
>>> index is incremented for every CPU (thread) created.  We ignore
>>> wraparound issues for now.  Once we have this field, the stub needs to
>>> use this field instead of cpu_index for communicating with GDB.
>>> [...]
>>> @@ -554,6 +556,7 @@ void cpu_exec_init(CPUState *env)
>>>          cpu_index++;
>>>      }
>>>      env->cpu_index = cpu_index;
>>> +    env->gdbstub_index = ++next_gdbstub_index;
>> While this is simple and sufficient for most cases, making
>> next_gdbstub_index robust against collisions due to overflow is not much
>> more complicated - so why not do this right from the beginning?
> 
> We could just make it a 64-bit field. :)

Well... kind of pragmatic.

> 
> The best way I can think of to do this is to maintain a
> separately-chained list of CPUStates (through a new field similar to
> next_cpu) ordered by gdbstub_index.  Grabbing a new gdbstub_index then
> walks through the list, looking for "holes" between adjacent entries in
> the list.  A new gdbstub_index is then picked if we find a hole; we die
> if we can't find a hole.

Why create a new list? Just generate a new index and then walk through
all CPUStates registered so far until no collision is found.
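
A minimal sketch of what I mean (simplified stand-in structures, not the
actual QEMU code; index 0 is skipped since GDB's remote protocol treats
thread ID 0 as "any thread"):

```c
#include <stdint.h>

/* Simplified stand-in for QEMU's CPUState; only the fields we need. */
typedef struct CPUState {
    uint32_t gdbstub_index;
    struct CPUState *next_cpu;
} CPUState;

static CPUState *first_cpu;          /* head of the registered-CPU list */
static uint32_t next_gdbstub_index;  /* last index handed out */

/* Return 1 if some already-registered CPU uses this index. */
static int gdbstub_index_taken(uint32_t index)
{
    const CPUState *cpu;

    for (cpu = first_cpu; cpu; cpu = cpu->next_cpu) {
        if (cpu->gdbstub_index == index) {
            return 1;
        }
    }
    return 0;
}

/* Generate a new index, skipping 0 and any index still in use
 * (only relevant once the counter has wrapped around). */
static uint32_t new_gdbstub_index(void)
{
    do {
        next_gdbstub_index++;
    } while (next_gdbstub_index == 0 ||
             gdbstub_index_taken(next_gdbstub_index));
    return next_gdbstub_index;
}
```

In the common case (no wraparound) this degenerates to a single
increment plus one walk over the CPU list, so it costs essentially
nothing over the current patch.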

> 
> Is this what you had in mind, or am I not being clever enough?
> 
>> I don't think we need these #ifdefs here. You assign continuously
>> increasing IDs also to system-mode CPUs, so we can handle them
>> identically (we have to anyway once cpu hotplugging hits upstream).
> 
> Will fix, thanks.
> 
>> Hmm, I bet you now have some use for my good-old vCont patch (=>continue
>> single-stepping on different CPU / in different thread). Will repost soon.
> 
> Yes, I think that would be useful.

On my todo list. I basically just need to include your patch in my queue.

Jan



Thread overview: 11+ messages
2009-05-12 19:35 [Qemu-devel] [PATCH] fix gdbstub support for multiple threads in usermode Nathan Froyd
2009-05-12 20:53 ` [Qemu-devel] " Jan Kiszka
2009-05-13  5:08   ` vibisreenivasan
2009-05-13 14:13   ` vibisreenivasan
2009-05-19 14:59     ` Nathan Froyd
2009-05-21  8:19       ` vibi sreenivasan
2009-05-19 15:06   ` Nathan Froyd
2009-05-19 16:53     ` Jan Kiszka [this message]
2009-05-19 18:01       ` Nathan Froyd
2009-05-20  6:39         ` Jan Kiszka
2009-05-20 15:27           ` Jamie Lokier
