qemu-devel.nongnu.org archive mirror
* [Qemu-devel] [PATCH v7 00/16] gdbstub: support for the multiprocess extension
@ 2018-11-23  9:17 Luc Michel
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 01/16] hw/cpu: introduce CPU clusters Luc Michel
                   ` (15 more replies)
  0 siblings, 16 replies; 45+ messages in thread
From: Luc Michel @ 2018-11-23  9:17 UTC (permalink / raw)
  To: qemu-devel
  Cc: Luc Michel, qemu-arm, Peter Maydell, saipava, edgari, alistair,
	Philippe Mathieu-Daudé, mark.burton, Eduardo Habkost

changes since v6:
  - patch 4    Fix a refactoring issue in gdb_get_cpu [Edgar]

  - patch 5    Renamed gdb_first_cpu/gdb_next_cpu to
               gdb_first_attached_cpu/gdb_next_attached_cpu [Edgar]

  - patch 7    Added the CPU name and removed the CPU index in the
               ThreadInfo packet in multiprocess mode [Edgar]

changes since v5:
  - patch 1    Rebased on top of master

  - patch 2    Cluster ID handling hardening to ensure uint32_t overflow
               won't cause PID 0 to be used [Edgar]

  - patch 2    Fix the GDB process ordering function to avoid uint32_t
               overflow when comparing two cluster_ids [Edgar]

changes since v4:
  - patch 1    add cpu_cluster.[ch] files into MAINTAINERS Machine core
               section (thanks Eduardo!) [Philippe, Eduardo]

  - patch 1    revert to uint32_t for cluster IDs [Philippe]

  - patch 1    fixed a coding style issue [patchew]

changes since v3:
  - patch 1    cpu_cluster.h: remove QEMU_ from the multiple includes
               guard #ifdef/#define [Alistair]

  - patch 1    cpu_cluster.c: include osdep.h first [Alistair]

  - patch 1    use uint64_t for cluster ID for posterity :) [Philippe]

  - patch 1    auto-assign a cluster ID to newly created clusters [Philippe]

  - patch 2    fix mid-code variable declaration [Alistair]

  - patch 3    add a comment in gdb_get_cpu_pid() when retrieving CPU
               parent canonical path [Alistair]

  - patch 14   fix a typo in the commit message [Alistair]

changes since v2:
  - patch 1    introducing the cpu-cluster type. I didn't opt for an
               Interface, but I can add one if you think it's necessary.
               For now this class inherits from Device and has a
               cluster-id property, used by the GDB stub to compute a
               PID.

  - patch 2    removed GDB group related code as it has been replaced
               with CPU clusters

  - patch 2/8  moved GDBProcess target_xml field introduction into patch
               8 [Philippe]

  - patch 3    gdb_get_cpu_pid() now searches for the CPU being a child
               of a cpu-cluster object, and uses the cluster-id to
               compute the PID.

  - patch 4    gdb_get_process() does not rely on s->processes array
               indices anymore as PIDs can now be sparse. Instead, iterate
               over the array to find the process.

  - patch 3/4  removed Reviewed-by tags because of substantial changes.

  - patch 4/7  read_thread_id() hardening [Philippe]

  - patch 12   safer vAttach packet parsing [Philippe]

  - patch 16   put APUs and RPUs in different clusters instead of GDB
               groups

changes since v1:
  - rename qemu_get_thread_id() to gdb_fmt_thread_id() [Philippe]
  - check qemu_strtoul() return value for 'D' packets [Philippe]


This series adds support for the multiprocess extension of the GDB
remote protocol in the QEMU GDB stub.

This extension is useful for splitting the QEMU-emulated CPUs into
different processes from the point of view of the GDB client. It makes
it possible to debug different kinds of processors (e.g. an AArch64 and
an ARMv7 CPU) at the same time, which is not possible otherwise since
GDB expects an SMP view at the thread granularity.

CPUs are grouped using cpu-cluster QOM objects. CPUs that are QOM
children of the same cluster are grouped under the same GDB process.

The last patch groups the CPUs of different models in the ZynqMP
machines into separate processes.
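
From the board code point of view, grouping CPUs boils down to creating
a cpu-cluster device and making the CPU objects QOM children of it.
Here is a rough sketch of the idea, as a fragment of a machine init
function (field names, CPU counts and types are illustrative; this is
not the actual xlnx-zynqmp change from patch 16):

    /* Sketch only: put the RPUs in their own cluster (cluster-id 1). */
    object_initialize_child(OBJECT(s), "rpu-cluster", &s->rpu_cluster,
                            sizeof(s->rpu_cluster), TYPE_CPU_CLUSTER,
                            &error_abort, NULL);
    qdev_prop_set_uint32(DEVICE(&s->rpu_cluster), "cluster-id", 1);

    for (i = 0; i < num_rpus; i++) {
        /* CPUs created as QOM children of the cluster end up in the
         * cluster's GDB process. */
        object_initialize_child(OBJECT(&s->rpu_cluster), "rpu-cpu[*]",
                                &s->rpu_cpu[i], sizeof(s->rpu_cpu[i]),
                                ARM_CPU_TYPE_NAME("cortex-r5f"),
                                &error_abort, NULL);
    }
    qdev_init_nofail(DEVICE(&s->rpu_cluster));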

To test this patchset, you can use the following commands:

(Note that this requires a recent enough GDB; GDB 7.2 or later should
be OK. Also, GDB must be compiled with support for both the ARM and
AArch64 architectures.)

Run QEMU: (-smp 6 in xlnx-zcu102 enables both cortex-a53 and cortex-r5
CPUs)

qemu-system-aarch64 -M xlnx-zcu102 -gdb tcp::1234 -S -smp 6

Run the following commands in GDB:

target extended :1234
add-inferior
inferior 2
attach 2
info threads
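
After the "attach 2" command, "info threads" should list the
Cortex-A53 threads under inferior 1 and the Cortex-R5 threads under
inferior 2 (assuming the APUs end up in the first cluster, as patch 16
does).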

I want to thank the Xilinx QEMU team, who sponsored this work, for
their collaboration and their prototype implementation.

Luc Michel (16):
  hw/cpu: introduce CPU clusters
  gdbstub: introduce GDB processes
  gdbstub: add multiprocess support to '?' packets
  gdbstub: add multiprocess support to 'H' and 'T' packets
  gdbstub: add multiprocess support to vCont packets
  gdbstub: add multiprocess support to 'sC' packets
  gdbstub: add multiprocess support to (f|s)ThreadInfo and
    ThreadExtraInfo
  gdbstub: add multiprocess support to Xfer:features:read:
  gdbstub: add multiprocess support to gdb_vm_state_change()
  gdbstub: add multiprocess support to 'D' packets
  gdbstub: add support for extended mode packet
  gdbstub: add support for vAttach packets
  gdbstub: processes initialization on new peer connection
  gdbstub: gdb_set_stop_cpu: ignore request when process is not attached
  gdbstub: add multiprocess extension support
  arm/xlnx-zynqmp: put APUs and RPUs in separate CPU clusters

 include/hw/arm/xlnx-zynqmp.h |   3 +
 include/hw/cpu/cluster.h     |  38 ++
 gdbstub.c                    | 647 +++++++++++++++++++++++++++++++----
 hw/arm/xlnx-zynqmp.c         |  21 +-
 hw/cpu/cluster.c             |  59 ++++
 MAINTAINERS                  |   2 +
 hw/cpu/Makefile.objs         |   2 +-
 7 files changed, 691 insertions(+), 81 deletions(-)
 create mode 100644 include/hw/cpu/cluster.h
 create mode 100644 hw/cpu/cluster.c

-- 
2.19.1

* [Qemu-devel] [PATCH v7 01/16] hw/cpu: introduce CPU clusters
  2018-11-23  9:17 [Qemu-devel] [PATCH v7 00/16] gdbstub: support for the multiprocess extension Luc Michel
@ 2018-11-23  9:17 ` Luc Michel
  2018-11-23 18:10   ` Eduardo Habkost
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 02/16] gdbstub: introduce GDB processes Luc Michel
                   ` (14 subsequent siblings)
  15 siblings, 1 reply; 45+ messages in thread
From: Luc Michel @ 2018-11-23  9:17 UTC (permalink / raw)
  To: qemu-devel
  Cc: Luc Michel, qemu-arm, Peter Maydell, saipava, edgari, alistair,
	Philippe Mathieu-Daudé, mark.burton, Eduardo Habkost

This commit adds the cpu-cluster type, which is used to group CPUs
belonging to the same cluster in a machine.

For now it only has a `cluster-id` property.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
---
 include/hw/cpu/cluster.h | 38 ++++++++++++++++++++++++++
 hw/cpu/cluster.c         | 59 ++++++++++++++++++++++++++++++++++++++++
 MAINTAINERS              |  2 ++
 hw/cpu/Makefile.objs     |  2 +-
 4 files changed, 100 insertions(+), 1 deletion(-)
 create mode 100644 include/hw/cpu/cluster.h
 create mode 100644 hw/cpu/cluster.c

diff --git a/include/hw/cpu/cluster.h b/include/hw/cpu/cluster.h
new file mode 100644
index 0000000000..11f50d5f6b
--- /dev/null
+++ b/include/hw/cpu/cluster.h
@@ -0,0 +1,38 @@
+/*
+ * QEMU CPU cluster
+ *
+ * Copyright (c) 2018 GreenSocs SAS
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see
+ * <http://www.gnu.org/licenses/gpl-2.0.html>
+ */
+#ifndef HW_CPU_CLUSTER_H
+#define HW_CPU_CLUSTER_H
+
+#include "qemu/osdep.h"
+#include "hw/qdev.h"
+
+#define TYPE_CPU_CLUSTER "cpu-cluster"
+#define CPU_CLUSTER(obj) \
+    OBJECT_CHECK(CPUClusterState, (obj), TYPE_CPU_CLUSTER)
+
+typedef struct CPUClusterState {
+    /*< private >*/
+    DeviceState parent_obj;
+
+    /*< public >*/
+    uint32_t cluster_id;
+} CPUClusterState;
+
+#endif
diff --git a/hw/cpu/cluster.c b/hw/cpu/cluster.c
new file mode 100644
index 0000000000..e0ffd76152
--- /dev/null
+++ b/hw/cpu/cluster.c
@@ -0,0 +1,59 @@
+/*
+ * QEMU CPU cluster
+ *
+ * Copyright (c) 2018 GreenSocs SAS
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see
+ * <http://www.gnu.org/licenses/gpl-2.0.html>
+ */
+
+#include "qemu/osdep.h"
+#include "hw/cpu/cluster.h"
+#include "qapi/error.h"
+#include "qemu/module.h"
+
+static void cpu_cluster_init(Object *obj)
+{
+    static uint32_t cluster_id_auto_increment;
+    CPUClusterState *cluster = CPU_CLUSTER(obj);
+
+    cluster->cluster_id = cluster_id_auto_increment++;
+}
+
+static Property cpu_cluster_properties[] = {
+    DEFINE_PROP_UINT32("cluster-id", CPUClusterState, cluster_id, 0),
+    DEFINE_PROP_END_OF_LIST()
+};
+
+static void cpu_cluster_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+
+    dc->props = cpu_cluster_properties;
+}
+
+static const TypeInfo cpu_cluster_type_info = {
+    .name = TYPE_CPU_CLUSTER,
+    .parent = TYPE_DEVICE,
+    .instance_size = sizeof(CPUClusterState),
+    .instance_init = cpu_cluster_init,
+    .class_init = cpu_cluster_class_init,
+};
+
+static void cpu_cluster_register_types(void)
+{
+    type_register_static(&cpu_cluster_type_info);
+}
+
+type_init(cpu_cluster_register_types)
diff --git a/MAINTAINERS b/MAINTAINERS
index 1032406c56..f7d47d2b1d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1120,11 +1120,13 @@ Machine core
 M: Eduardo Habkost <ehabkost@redhat.com>
 M: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
 S: Supported
 F: hw/core/machine.c
 F: hw/core/null-machine.c
+F: hw/cpu/cluster.c
 F: include/hw/boards.h
+F: include/hw/cpu/cluster.h
 T: git https://github.com/ehabkost/qemu.git machine-next
 
 Xtensa Machines
 ---------------
 sim
diff --git a/hw/cpu/Makefile.objs b/hw/cpu/Makefile.objs
index cd52d20b65..8db9e8a7b3 100644
--- a/hw/cpu/Makefile.objs
+++ b/hw/cpu/Makefile.objs
@@ -1,5 +1,5 @@
 obj-$(CONFIG_ARM11MPCORE) += arm11mpcore.o
 obj-$(CONFIG_REALVIEW) += realview_mpcore.o
 obj-$(CONFIG_A9MPCORE) += a9mpcore.o
 obj-$(CONFIG_A15MPCORE) += a15mpcore.o
-common-obj-y += core.o
+common-obj-y += core.o cluster.o
-- 
2.19.1

* [Qemu-devel] [PATCH v7 02/16] gdbstub: introduce GDB processes
  2018-11-23  9:17 [Qemu-devel] [PATCH v7 00/16] gdbstub: support for the multiprocess extension Luc Michel
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 01/16] hw/cpu: introduce CPU clusters Luc Michel
@ 2018-11-23  9:17 ` Luc Michel
  2018-11-25 20:58   ` Philippe Mathieu-Daudé
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 03/16] gdbstub: add multiprocess support to '?' packets Luc Michel
                   ` (13 subsequent siblings)
  15 siblings, 1 reply; 45+ messages in thread
From: Luc Michel @ 2018-11-23  9:17 UTC (permalink / raw)
  To: qemu-devel
  Cc: Luc Michel, qemu-arm, Peter Maydell, saipava, edgari, alistair,
	Philippe Mathieu-Daudé, mark.burton, Eduardo Habkost

Add a GDBProcess structure that represents processes from the GDB
semantic point of view.

CPUs can be split into different processes, by grouping them under
different cpu-cluster objects.  Each occurrence of a cpu-cluster object
implies the existence of the corresponding process in the GDB stub. The
GDB process ID is derived from the corresponding cluster ID as follows:

  GDB PID = cluster ID + 1

This is because PIDs -1 and 0 are reserved in GDB and cannot be used by
processes.
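
For example, a machine defining two clusters with cluster IDs 0 and 1
exposes two GDB processes with PIDs 1 and 2.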

When no such containers are found, all the CPUs are put in a single GDB
process (create_unique_process()). This is also the case when compiled
in user mode, where multiple processes do not make much sense for now.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
---
 gdbstub.c | 87 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 87 insertions(+)

diff --git a/gdbstub.c b/gdbstub.c
index c4e4f9f082..26f5a7449a 100644
--- a/gdbstub.c
+++ b/gdbstub.c
@@ -27,10 +27,11 @@
 #include "monitor/monitor.h"
 #include "chardev/char.h"
 #include "chardev/char-fe.h"
 #include "sysemu/sysemu.h"
 #include "exec/gdbstub.h"
+#include "hw/cpu/cluster.h"
 #endif
 
 #define MAX_PACKET_LENGTH 4096
 
 #include "qemu/sockets.h"
@@ -294,10 +295,15 @@ typedef struct GDBRegisterState {
     gdb_reg_cb set_reg;
     const char *xml;
     struct GDBRegisterState *next;
 } GDBRegisterState;
 
+typedef struct GDBProcess {
+    uint32_t pid;
+    bool attached;
+} GDBProcess;
+
 enum RSState {
     RS_INACTIVE,
     RS_IDLE,
     RS_GETLINE,
     RS_GETLINE_ESC,
@@ -322,10 +328,13 @@ typedef struct GDBState {
     int running_state;
 #else
     CharBackend chr;
     Chardev *mon_chr;
 #endif
+    bool multiprocess;
+    GDBProcess *processes;
+    int process_num;
     char syscall_buf[256];
     gdb_syscall_complete_cb current_syscall_cb;
 } GDBState;
 
 /* By default use no IRQs and no timers while single stepping so as to
@@ -1749,10 +1758,24 @@ void gdb_exit(CPUArchState *env, int code)
 #ifndef CONFIG_USER_ONLY
   qemu_chr_fe_deinit(&s->chr, true);
 #endif
 }
 
+/*
+ * Create a unique process containing all the CPUs.
+ */
+static void create_unique_process(GDBState *s)
+{
+    GDBProcess *process;
+
+    s->processes = g_malloc0(sizeof(GDBProcess));
+    s->process_num = 1;
+    process = &s->processes[0];
+
+    process->pid = 1;
+}
+
 #ifdef CONFIG_USER_ONLY
 int
 gdb_handlesig(CPUState *cpu, int sig)
 {
     GDBState *s;
@@ -1846,10 +1869,11 @@ static bool gdb_accept(void)
     }
 
     s = g_malloc0(sizeof(GDBState));
     s->c_cpu = first_cpu;
     s->g_cpu = first_cpu;
+    create_unique_process(s);
     s->fd = fd;
     gdb_has_xml = false;
 
     gdbserver_state = s;
     return true;
@@ -2002,10 +2026,69 @@ static const TypeInfo char_gdb_type_info = {
     .name = TYPE_CHARDEV_GDB,
     .parent = TYPE_CHARDEV,
     .class_init = char_gdb_class_init,
 };
 
+static int find_cpu_clusters(Object *child, void *opaque)
+{
+    if (object_dynamic_cast(child, TYPE_CPU_CLUSTER)) {
+        GDBState *s = (GDBState *) opaque;
+        CPUClusterState *cluster = CPU_CLUSTER(child);
+        GDBProcess *process;
+
+        s->processes = g_renew(GDBProcess, s->processes, ++s->process_num);
+
+        process = &s->processes[s->process_num - 1];
+
+        /*
+         * GDB process IDs -1 and 0 are reserved. To avoid subtle errors at
+         * runtime, we enforce here that the machine does not use a cluster ID
+         * that would lead to PID 0. */
+        assert(cluster->cluster_id != UINT32_MAX);
+        process->pid = cluster->cluster_id + 1;
+        process->attached = false;
+
+        return 0;
+    }
+
+    return object_child_foreach(child, find_cpu_clusters, opaque);
+}
+
+static int pid_order(const void *a, const void *b)
+{
+    GDBProcess *pa = (GDBProcess *) a;
+    GDBProcess *pb = (GDBProcess *) b;
+
+    if (pa->pid < pb->pid) {
+        return -1;
+    } else if (pa->pid > pb->pid) {
+        return 1;
+    } else {
+        return 0;
+    }
+}
+
+static void create_processes(GDBState *s)
+{
+    object_child_foreach(object_get_root(), find_cpu_clusters, s);
+
+    if (!s->processes) {
+        /* No CPU cluster specified by the machine */
+        create_unique_process(s);
+    } else {
+        /* Sort by PID */
+        qsort(s->processes, s->process_num, sizeof(s->processes[0]), pid_order);
+    }
+}
+
+static void cleanup_processes(GDBState *s)
+{
+    g_free(s->processes);
+    s->process_num = 0;
+    s->processes = NULL;
+}
+
 int gdbserver_start(const char *device)
 {
     trace_gdbstub_op_start(device);
 
     GDBState *s;
@@ -2058,15 +2141,19 @@ int gdbserver_start(const char *device)
                                    NULL, &error_abort);
         monitor_init(mon_chr, 0);
     } else {
         qemu_chr_fe_deinit(&s->chr, true);
         mon_chr = s->mon_chr;
+        cleanup_processes(s);
         memset(s, 0, sizeof(GDBState));
         s->mon_chr = mon_chr;
     }
     s->c_cpu = first_cpu;
     s->g_cpu = first_cpu;
+
+    create_processes(s);
+
     if (chr) {
         qemu_chr_fe_init(&s->chr, chr, &error_abort);
         qemu_chr_fe_set_handlers(&s->chr, gdb_chr_can_receive, gdb_chr_receive,
                                  gdb_chr_event, NULL, NULL, NULL, true);
     }
-- 
2.19.1

* [Qemu-devel] [PATCH v7 03/16] gdbstub: add multiprocess support to '?' packets
  2018-11-23  9:17 [Qemu-devel] [PATCH v7 00/16] gdbstub: support for the multiprocess extension Luc Michel
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 01/16] hw/cpu: introduce CPU clusters Luc Michel
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 02/16] gdbstub: introduce GDB processes Luc Michel
@ 2018-11-23  9:17 ` Luc Michel
  2018-11-25 21:22   ` Philippe Mathieu-Daudé
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 04/16] gdbstub: add multiprocess support to 'H' and 'T' packets Luc Michel
                   ` (12 subsequent siblings)
  15 siblings, 1 reply; 45+ messages in thread
From: Luc Michel @ 2018-11-23  9:17 UTC (permalink / raw)
  To: qemu-devel
  Cc: Luc Michel, qemu-arm, Peter Maydell, saipava, edgari, alistair,
	Philippe Mathieu-Daudé, mark.burton, Eduardo Habkost

The gdb_get_cpu_pid() function does the PID lookup for the given CPU. It
checks if the CPU is a direct child of a CPU cluster. If it is, the
returned PID is the cluster ID plus one (cluster IDs start at 0, GDB
PIDs at 1). When the CPU is not a child of such a container, the PID of
the first process is returned.

The gdb_fmt_thread_id() function generates the string to be used to identify
a given thread, in a response packet for the peer. This function
supports generating thread IDs when multiprocess mode is enabled (in the
form `p<pid>.<tid>').

Use them in the reply to a '?' request.
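
With the multiprocess extension enabled, the initial stop reply thus
looks like the following (illustrative values):

  T05thread:p01.01;

instead of the previous "T05thread:01;" form.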

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
---
 gdbstub.c | 60 +++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 58 insertions(+), 2 deletions(-)

diff --git a/gdbstub.c b/gdbstub.c
index 26f5a7449a..4fbc05dfe3 100644
--- a/gdbstub.c
+++ b/gdbstub.c
@@ -638,10 +638,52 @@ static int memtox(char *buf, const char *mem, int len)
         }
     }
     return p - buf;
 }
 
+static uint32_t gdb_get_cpu_pid(const GDBState *s, CPUState *cpu)
+{
+#ifndef CONFIG_USER_ONLY
+    gchar *path, *name;
+    Object *obj;
+    CPUClusterState *cluster;
+    uint32_t ret;
+
+    path = object_get_canonical_path(OBJECT(cpu));
+    name = object_get_canonical_path_component(OBJECT(cpu));
+
+    if (path == NULL) {
+        ret = s->processes[0].pid;
+        goto out;
+    }
+
+    /*
+     * Retrieve the CPU parent path by removing the last '/' and the CPU name
+     * from the CPU canonical path. */
+    path[strlen(path) - strlen(name) - 1] = '\0';
+
+    obj = object_resolve_path_type(path, TYPE_CPU_CLUSTER, NULL);
+
+    if (obj == NULL) {
+        ret = s->processes[0].pid;
+        goto out;
+    }
+
+    cluster = CPU_CLUSTER(obj);
+    ret = cluster->cluster_id + 1;
+
+out:
+    g_free(name);
+    g_free(path);
+
+    return ret;
+
+#else
+    return s->processes[0].pid;
+#endif
+}
+
 static const char *get_feature_xml(const char *p, const char **newp,
                                    CPUClass *cc)
 {
     size_t len;
     int i;
@@ -907,10 +949,23 @@ static CPUState *find_cpu(uint32_t thread_id)
     }
 
     return NULL;
 }
 
+static char *gdb_fmt_thread_id(const GDBState *s, CPUState *cpu,
+                           char *buf, size_t buf_size)
+{
+    if (s->multiprocess) {
+        snprintf(buf, buf_size, "p%02x.%02x",
+                 gdb_get_cpu_pid(s, cpu), cpu_gdb_index(cpu));
+    } else {
+        snprintf(buf, buf_size, "%02x", cpu_gdb_index(cpu));
+    }
+
+    return buf;
+}
+
 static int is_query_packet(const char *p, const char *query, char separator)
 {
     unsigned int query_len = strlen(query);
 
     return strncmp(p, query, query_len) == 0 &&
@@ -1018,22 +1073,23 @@ static int gdb_handle_packet(GDBState *s, const char *line_buf)
     const char *p;
     uint32_t thread;
     int ch, reg_size, type, res;
     uint8_t mem_buf[MAX_PACKET_LENGTH];
     char buf[sizeof(mem_buf) + 1 /* trailing NUL */];
+    char thread_id[16];
     uint8_t *registers;
     target_ulong addr, len;
 
     trace_gdbstub_io_command(line_buf);
 
     p = line_buf;
     ch = *p++;
     switch(ch) {
     case '?':
         /* TODO: Make this return the correct value for user-mode.  */
-        snprintf(buf, sizeof(buf), "T%02xthread:%02x;", GDB_SIGNAL_TRAP,
-                 cpu_gdb_index(s->c_cpu));
+        snprintf(buf, sizeof(buf), "T%02xthread:%s;", GDB_SIGNAL_TRAP,
+                 gdb_fmt_thread_id(s, s->c_cpu, thread_id, sizeof(thread_id)));
         put_packet(s, buf);
         /* Remove all the breakpoints when this query is issued,
          * because gdb is doing and initial connect and the state
          * should be cleaned up.
          */
-- 
2.19.1

* [Qemu-devel] [PATCH v7 04/16] gdbstub: add multiprocess support to 'H' and 'T' packets
  2018-11-23  9:17 [Qemu-devel] [PATCH v7 00/16] gdbstub: support for the multiprocess extension Luc Michel
                   ` (2 preceding siblings ...)
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 03/16] gdbstub: add multiprocess support to '?' packets Luc Michel
@ 2018-11-23  9:17 ` Luc Michel
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 05/16] gdbstub: add multiprocess support to vCont packets Luc Michel
                   ` (11 subsequent siblings)
  15 siblings, 0 replies; 45+ messages in thread
From: Luc Michel @ 2018-11-23  9:17 UTC (permalink / raw)
  To: qemu-devel
  Cc: Luc Michel, qemu-arm, Peter Maydell, saipava, edgari, alistair,
	Philippe Mathieu-Daudé, mark.burton, Eduardo Habkost

Add a couple of helper functions to cope with GDB threads and processes.

The gdb_get_process() function looks for a process given a pid.

The gdb_get_cpu() function returns the CPU corresponding to the (pid,
tid) pair given as parameters.

The read_thread_id() function parses the thread-id sent by the peer.
This function supports the multiprocess extension thread-id syntax.  The
return value specifies if the parsing failed, or if a special case was
encountered (all processes or all threads).

Use them in 'H' and 'T' packets handling to support the multiprocess
extension.
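
For example, with the extension enabled, GDB selects thread 2 of
process 1 for subsequent register accesses with "Hgp1.2", and checks
whether that thread is alive with "Tp1.2".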

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
---
 gdbstub.c | 154 +++++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 136 insertions(+), 18 deletions(-)

diff --git a/gdbstub.c b/gdbstub.c
index 4fbc05dfe3..64a1db21b5 100644
--- a/gdbstub.c
+++ b/gdbstub.c
@@ -680,10 +680,75 @@ out:
 #else
     return s->processes[0].pid;
 #endif
 }
 
+static GDBProcess *gdb_get_process(const GDBState *s, uint32_t pid)
+{
+    int i;
+
+    if (!pid) {
+        /* 0 means any process, we take the first one */
+        return &s->processes[0];
+    }
+
+    for (i = 0; i < s->process_num; i++) {
+        if (s->processes[i].pid == pid) {
+            return &s->processes[i];
+        }
+    }
+
+    return NULL;
+}
+
+static GDBProcess *gdb_get_cpu_process(const GDBState *s, CPUState *cpu)
+{
+    return gdb_get_process(s, gdb_get_cpu_pid(s, cpu));
+}
+
+static CPUState *find_cpu(uint32_t thread_id)
+{
+    CPUState *cpu;
+
+    CPU_FOREACH(cpu) {
+        if (cpu_gdb_index(cpu) == thread_id) {
+            return cpu;
+        }
+    }
+
+    return NULL;
+}
+
+static CPUState *gdb_get_cpu(const GDBState *s, uint32_t pid, uint32_t tid)
+{
+    GDBProcess *process;
+    CPUState *cpu;
+
+    if (!tid) {
+        /* 0 means any thread, we take the first one */
+        tid = 1;
+    }
+
+    cpu = find_cpu(tid);
+
+    if (cpu == NULL) {
+        return NULL;
+    }
+
+    process = gdb_get_cpu_process(s, cpu);
+
+    if (process->pid != pid) {
+        return NULL;
+    }
+
+    if (!process->attached) {
+        return NULL;
+    }
+
+    return cpu;
+}
+
 static const char *get_feature_xml(const char *p, const char **newp,
                                    CPUClass *cc)
 {
     size_t len;
     int i;
@@ -936,23 +1001,10 @@ static void gdb_set_cpu_pc(GDBState *s, target_ulong pc)
 
     cpu_synchronize_state(cpu);
     cpu_set_pc(cpu, pc);
 }
 
-static CPUState *find_cpu(uint32_t thread_id)
-{
-    CPUState *cpu;
-
-    CPU_FOREACH(cpu) {
-        if (cpu_gdb_index(cpu) == thread_id) {
-            return cpu;
-        }
-    }
-
-    return NULL;
-}
-
 static char *gdb_fmt_thread_id(const GDBState *s, CPUState *cpu,
                            char *buf, size_t buf_size)
 {
     if (s->multiprocess) {
         snprintf(buf, buf_size, "p%02x.%02x",
@@ -962,10 +1014,64 @@ static char *gdb_fmt_thread_id(const GDBState *s, CPUState *cpu,
     }
 
     return buf;
 }
 
+typedef enum GDBThreadIdKind {
+    GDB_ONE_THREAD = 0,
+    GDB_ALL_THREADS,     /* One process, all threads */
+    GDB_ALL_PROCESSES,
+    GDB_READ_THREAD_ERR
+} GDBThreadIdKind;
+
+static GDBThreadIdKind read_thread_id(const char *buf, const char **end_buf,
+                                      uint32_t *pid, uint32_t *tid)
+{
+    unsigned long p, t;
+    int ret;
+
+    if (*buf == 'p') {
+        buf++;
+        ret = qemu_strtoul(buf, &buf, 16, &p);
+
+        if (ret) {
+            return GDB_READ_THREAD_ERR;
+        }
+
+        /* Skip '.' */
+        buf++;
+    } else {
+        p = 1;
+    }
+
+    ret = qemu_strtoul(buf, &buf, 16, &t);
+
+    if (ret) {
+        return GDB_READ_THREAD_ERR;
+    }
+
+    *end_buf = buf;
+
+    if (p == -1) {
+        return GDB_ALL_PROCESSES;
+    }
+
+    if (pid) {
+        *pid = p;
+    }
+
+    if (t == -1) {
+        return GDB_ALL_THREADS;
+    }
+
+    if (tid) {
+        *tid = t;
+    }
+
+    return GDB_ONE_THREAD;
+}
+
 static int is_query_packet(const char *p, const char *query, char separator)
 {
     unsigned int query_len = strlen(query);
 
     return strncmp(p, query, query_len) == 0 &&
@@ -1070,16 +1176,18 @@ static int gdb_handle_packet(GDBState *s, const char *line_buf)
 {
     CPUState *cpu;
     CPUClass *cc;
     const char *p;
     uint32_t thread;
+    uint32_t pid, tid;
     int ch, reg_size, type, res;
     uint8_t mem_buf[MAX_PACKET_LENGTH];
     char buf[sizeof(mem_buf) + 1 /* trailing NUL */];
     char thread_id[16];
     uint8_t *registers;
     target_ulong addr, len;
+    GDBThreadIdKind thread_kind;
 
     trace_gdbstub_io_command(line_buf);
 
     p = line_buf;
     ch = *p++;
@@ -1283,16 +1391,22 @@ static int gdb_handle_packet(GDBState *s, const char *line_buf)
         else
             put_packet(s, "E22");
         break;
     case 'H':
         type = *p++;
-        thread = strtoull(p, (char **)&p, 16);
-        if (thread == -1 || thread == 0) {
+
+        thread_kind = read_thread_id(p, &p, &pid, &tid);
+        if (thread_kind == GDB_READ_THREAD_ERR) {
+            put_packet(s, "E22");
+            break;
+        }
+
+        if (thread_kind != GDB_ONE_THREAD) {
             put_packet(s, "OK");
             break;
         }
-        cpu = find_cpu(thread);
+        cpu = gdb_get_cpu(s, pid, tid);
         if (cpu == NULL) {
             put_packet(s, "E22");
             break;
         }
         switch (type) {
@@ -1308,12 +1422,16 @@ static int gdb_handle_packet(GDBState *s, const char *line_buf)
              put_packet(s, "E22");
              break;
         }
         break;
     case 'T':
-        thread = strtoull(p, (char **)&p, 16);
-        cpu = find_cpu(thread);
+        thread_kind = read_thread_id(p, &p, &pid, &tid);
+        if (thread_kind == GDB_READ_THREAD_ERR) {
+            put_packet(s, "E22");
+            break;
+        }
+        cpu = gdb_get_cpu(s, pid, tid);
 
         if (cpu != NULL) {
             put_packet(s, "OK");
         } else {
             put_packet(s, "E22");
-- 
2.19.1

* [Qemu-devel] [PATCH v7 05/16] gdbstub: add multiprocess support to vCont packets
  2018-11-23  9:17 [Qemu-devel] [PATCH v7 00/16] gdbstub: support for the multiprocess extension Luc Michel
                   ` (3 preceding siblings ...)
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 04/16] gdbstub: add multiprocess support to 'H' and 'T' packets Luc Michel
@ 2018-11-23  9:17 ` Luc Michel
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 06/16] gdbstub: add multiprocess support to 'sC' packets Luc Michel
                   ` (10 subsequent siblings)
  15 siblings, 0 replies; 45+ messages in thread
From: Luc Michel @ 2018-11-23  9:17 UTC (permalink / raw)
  To: qemu-devel
  Cc: Luc Michel, qemu-arm, Peter Maydell, saipava, edgari, alistair,
	Philippe Mathieu-Daudé, mark.burton, Eduardo Habkost

Add the gdb_first_attached_cpu() and gdb_next_attached_cpu() functions
to iterate over all the CPUs of the currently attached processes.

Add the get_first_cpu_in_process() and gdb_next_cpu_in_process()
functions to iterate over the CPUs of a given process.

Use them to add multiprocess extension support to vCont packets.
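
For example, a "vCont;s:p1.2;c:p1.-1" request now single-steps thread 2
of process 1 while continuing the remaining threads of that process
(the leftmost matching action still takes precedence).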

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
---
 gdbstub.c | 117 +++++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 102 insertions(+), 15 deletions(-)

diff --git a/gdbstub.c b/gdbstub.c
index 64a1db21b5..691b877aa8 100644
--- a/gdbstub.c
+++ b/gdbstub.c
@@ -716,10 +716,40 @@ static CPUState *find_cpu(uint32_t thread_id)
     }
 
     return NULL;
 }
 
+static CPUState *get_first_cpu_in_process(const GDBState *s,
+                                          GDBProcess *process)
+{
+    CPUState *cpu;
+
+    CPU_FOREACH(cpu) {
+        if (gdb_get_cpu_pid(s, cpu) == process->pid) {
+            return cpu;
+        }
+    }
+
+    return NULL;
+}
+
+static CPUState *gdb_next_cpu_in_process(const GDBState *s, CPUState *cpu)
+{
+    uint32_t pid = gdb_get_cpu_pid(s, cpu);
+    cpu = CPU_NEXT(cpu);
+
+    while (cpu) {
+        if (gdb_get_cpu_pid(s, cpu) == pid) {
+            break;
+        }
+
+        cpu = CPU_NEXT(cpu);
+    }
+
+    return cpu;
+}
+
 static CPUState *gdb_get_cpu(const GDBState *s, uint32_t pid, uint32_t tid)
 {
     GDBProcess *process;
     CPUState *cpu;
 
@@ -745,10 +775,41 @@ static CPUState *gdb_get_cpu(const GDBState *s, uint32_t pid, uint32_t tid)
     }
 
     return cpu;
 }
 
+/* Return the cpu following @cpu, while ignoring
+ * unattached processes.
+ */
+static CPUState *gdb_next_attached_cpu(const GDBState *s, CPUState *cpu)
+{
+    cpu = CPU_NEXT(cpu);
+
+    while (cpu) {
+        if (gdb_get_cpu_process(s, cpu)->attached) {
+            break;
+        }
+
+        cpu = CPU_NEXT(cpu);
+    }
+
+    return cpu;
+}
+
+/* Return the first attached cpu */
+static CPUState *gdb_first_attached_cpu(const GDBState *s)
+{
+    CPUState *cpu = first_cpu;
+    GDBProcess *process = gdb_get_cpu_process(s, cpu);
+
+    if (!process->attached) {
+        return gdb_next_attached_cpu(s, cpu);
+    }
+
+    return cpu;
+}
+
 static const char *get_feature_xml(const char *p, const char **newp,
                                    CPUClass *cc)
 {
     size_t len;
     int i;
@@ -1083,14 +1144,16 @@ static int is_query_packet(const char *p, const char *query, char separator)
  * returns -ENOTSUP if a command is unsupported, -EINVAL or -ERANGE if there is
  *         a format error, 0 on success.
  */
 static int gdb_handle_vcont(GDBState *s, const char *p)
 {
-    int res, idx, signal = 0;
+    int res, signal = 0;
     char cur_action;
     char *newstates;
     unsigned long tmp;
+    uint32_t pid, tid;
+    GDBProcess *process;
     CPUState *cpu;
 #ifdef CONFIG_USER_ONLY
     int max_cpus = 1; /* global variable max_cpus exists only in system mode */
 
     CPU_FOREACH(cpu) {
@@ -1129,29 +1192,52 @@ static int gdb_handle_vcont(GDBState *s, const char *p)
         } else if (cur_action != 'c' && cur_action != 's') {
             /* unknown/invalid/unsupported command */
             res = -ENOTSUP;
             goto out;
         }
-        /* thread specification. special values: (none), -1 = all; 0 = any */
-        if ((p[0] == ':' && p[1] == '-' && p[2] == '1') || (p[0] != ':')) {
-            if (*p == ':') {
-                p += 3;
-            }
-            for (idx = 0; idx < max_cpus; idx++) {
-                if (newstates[idx] == 1) {
-                    newstates[idx] = cur_action;
+
+        if (*p++ != ':') {
+            res = -ENOTSUP;
+            goto out;
+        }
+
+        switch (read_thread_id(p, &p, &pid, &tid)) {
+        case GDB_READ_THREAD_ERR:
+            res = -EINVAL;
+            goto out;
+
+        case GDB_ALL_PROCESSES:
+            cpu = gdb_first_attached_cpu(s);
+            while (cpu) {
+                if (newstates[cpu->cpu_index] == 1) {
+                    newstates[cpu->cpu_index] = cur_action;
                 }
+
+                cpu = gdb_next_attached_cpu(s, cpu);
             }
-        } else if (*p == ':') {
-            p++;
-            res = qemu_strtoul(p, &p, 16, &tmp);
-            if (res) {
+            break;
+
+        case GDB_ALL_THREADS:
+            process = gdb_get_process(s, pid);
+
+            if (!process->attached) {
+                res = -EINVAL;
                 goto out;
             }
 
-            /* 0 means any thread, so we pick the first valid CPU */
-            cpu = tmp ? find_cpu(tmp) : first_cpu;
+            cpu = get_first_cpu_in_process(s, process);
+            while (cpu) {
+                if (newstates[cpu->cpu_index] == 1) {
+                    newstates[cpu->cpu_index] = cur_action;
+                }
+
+                cpu = gdb_next_cpu_in_process(s, cpu);
+            }
+            break;
+
+        case GDB_ONE_THREAD:
+            cpu = gdb_get_cpu(s, pid, tid);
 
             /* invalid CPU/thread specified */
             if (!cpu) {
                 res = -EINVAL;
                 goto out;
@@ -1159,10 +1245,11 @@ static int gdb_handle_vcont(GDBState *s, const char *p)
 
             /* only use if no previous match occourred */
             if (newstates[cpu->cpu_index] == 1) {
                 newstates[cpu->cpu_index] = cur_action;
             }
+            break;
         }
     }
     s->signal = signal;
     gdb_continue_partial(s, newstates);
 
-- 
2.19.1

* [Qemu-devel] [PATCH v7 06/16] gdbstub: add multiprocess support to 'sC' packets
  2018-11-23  9:17 [Qemu-devel] [PATCH v7 00/16] gdbstub: support for the multiprocess extension Luc Michel
                   ` (4 preceding siblings ...)
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 05/16] gdbstub: add multiprocess support to vCont packets Luc Michel
@ 2018-11-23  9:17 ` Luc Michel
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 07/16] gdbstub: add multiprocess support to (f|s)ThreadInfo and ThreadExtraInfo Luc Michel
                   ` (9 subsequent siblings)
  15 siblings, 0 replies; 45+ messages in thread
From: Luc Michel @ 2018-11-23  9:17 UTC (permalink / raw)
  To: qemu-devel
  Cc: Luc Michel, qemu-arm, Peter Maydell, saipava, edgari, alistair,
	Philippe Mathieu-Daudé, mark.burton, Eduardo Habkost

Change the sC packet handling to support the multiprocess extension.
Instead of returning the first thread, we return the first thread of the
current process.
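
Concretely, the reply to "qC" becomes e.g. "QCp01.01" (first thread of
the current process) instead of the hardcoded "QC1".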

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
---
 gdbstub.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/gdbstub.c b/gdbstub.c
index 691b877aa8..728efdabee 100644
--- a/gdbstub.c
+++ b/gdbstub.c
@@ -1548,13 +1548,18 @@ static int gdb_handle_packet(GDBState *s, const char *line_buf)
             type = strtoul(p, (char **)&p, 16);
             sstep_flags = type;
             put_packet(s, "OK");
             break;
         } else if (strcmp(p,"C") == 0) {
-            /* "Current thread" remains vague in the spec, so always return
-             *  the first CPU (gdb returns the first thread). */
-            put_packet(s, "QC1");
+            /* "Current thread" remains vague in the spec, so always return the
+             * first thread of the current process (gdb returns the first
+             * thread).
+             */
+            cpu = get_first_cpu_in_process(s, gdb_get_cpu_process(s, s->g_cpu));
+            snprintf(buf, sizeof(buf), "QC%s",
+                     gdb_fmt_thread_id(s, cpu, thread_id, sizeof(thread_id)));
+            put_packet(s, buf);
             break;
         } else if (strcmp(p,"fThreadInfo") == 0) {
             s->query_cpu = first_cpu;
             goto report_cpuinfo;
         } else if (strcmp(p,"sThreadInfo") == 0) {
-- 
2.19.1

* [Qemu-devel] [PATCH v7 07/16] gdbstub: add multiprocess support to (f|s)ThreadInfo and ThreadExtraInfo
  2018-11-23  9:17 [Qemu-devel] [PATCH v7 00/16] gdbstub: support for the multiprocess extension Luc Michel
                   ` (5 preceding siblings ...)
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 06/16] gdbstub: add multiprocess support to 'sC' packets Luc Michel
@ 2018-11-23  9:17 ` Luc Michel
  2018-11-30 10:35   ` Edgar E. Iglesias
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 08/16] gdbstub: add multiprocess support to Xfer:features:read: Luc Michel
                   ` (8 subsequent siblings)
  15 siblings, 1 reply; 45+ messages in thread
From: Luc Michel @ 2018-11-23  9:17 UTC (permalink / raw)
  To: qemu-devel
  Cc: Luc Michel, qemu-arm, Peter Maydell, saipava, edgari, alistair,
	Philippe Mathieu-Daudé, mark.burton, Eduardo Habkost

Change the handling of the thread-info-related packets to support the
multiprocess extension.

Add the CPU's class name to the extra info to help differentiate CPUs
in multiprocess mode.
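
With more than one process, the extra info becomes something like
"cortex-a53-arm-cpu apu-cpu[0] [running]" (illustrative names) instead
of the previous "CPU#0 [running]" form.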

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 gdbstub.c | 37 +++++++++++++++++++++++++++----------
 1 file changed, 27 insertions(+), 10 deletions(-)

diff --git a/gdbstub.c b/gdbstub.c
index 728efdabee..12d4e33c40 100644
--- a/gdbstub.c
+++ b/gdbstub.c
@@ -1262,11 +1262,10 @@ out:
 static int gdb_handle_packet(GDBState *s, const char *line_buf)
 {
     CPUState *cpu;
     CPUClass *cc;
     const char *p;
-    uint32_t thread;
     uint32_t pid, tid;
     int ch, reg_size, type, res;
     uint8_t mem_buf[MAX_PACKET_LENGTH];
     char buf[sizeof(mem_buf) + 1 /* trailing NUL */];
     char thread_id[16];
@@ -1558,30 +1557,48 @@ static int gdb_handle_packet(GDBState *s, const char *line_buf)
             snprintf(buf, sizeof(buf), "QC%s",
                      gdb_fmt_thread_id(s, cpu, thread_id, sizeof(thread_id)));
             put_packet(s, buf);
             break;
         } else if (strcmp(p,"fThreadInfo") == 0) {
-            s->query_cpu = first_cpu;
+            s->query_cpu = gdb_first_attached_cpu(s);
             goto report_cpuinfo;
         } else if (strcmp(p,"sThreadInfo") == 0) {
         report_cpuinfo:
             if (s->query_cpu) {
-                snprintf(buf, sizeof(buf), "m%x", cpu_gdb_index(s->query_cpu));
+                snprintf(buf, sizeof(buf), "m%s",
+                         gdb_fmt_thread_id(s, s->query_cpu,
+                                       thread_id, sizeof(thread_id)));
                 put_packet(s, buf);
-                s->query_cpu = CPU_NEXT(s->query_cpu);
+                s->query_cpu = gdb_next_attached_cpu(s, s->query_cpu);
             } else
                 put_packet(s, "l");
             break;
         } else if (strncmp(p,"ThreadExtraInfo,", 16) == 0) {
-            thread = strtoull(p+16, (char **)&p, 16);
-            cpu = find_cpu(thread);
+            if (read_thread_id(p + 16, &p, &pid, &tid) == GDB_READ_THREAD_ERR) {
+                put_packet(s, "E22");
+                break;
+            }
+            cpu = gdb_get_cpu(s, pid, tid);
             if (cpu != NULL) {
                 cpu_synchronize_state(cpu);
-                /* memtohex() doubles the required space */
-                len = snprintf((char *)mem_buf, sizeof(buf) / 2,
-                               "CPU#%d [%s]", cpu->cpu_index,
-                               cpu->halted ? "halted " : "running");
+
+                if (s->multiprocess && (s->process_num > 1)) {
+                    /* Print the CPU model and name in multiprocess mode */
+                    ObjectClass *oc = object_get_class(OBJECT(cpu));
+                    const char *cpu_model = object_class_get_name(oc);
+                    char *cpu_name =
+                        object_get_canonical_path_component(OBJECT(cpu));
+                    len = snprintf((char *)mem_buf, sizeof(buf) / 2,
+                                   "%s %s [%s]", cpu_model, cpu_name,
+                                   cpu->halted ? "halted " : "running");
+                    g_free(cpu_name);
+                } else {
+                    /* memtohex() doubles the required space */
+                    len = snprintf((char *)mem_buf, sizeof(buf) / 2,
+                                   "CPU#%d [%s]", cpu->cpu_index,
+                                   cpu->halted ? "halted " : "running");
+                }
                 trace_gdbstub_op_extra_info((char *)mem_buf);
                 memtohex(buf, mem_buf, len);
                 put_packet(s, buf);
             }
             break;
-- 
2.19.1

* [Qemu-devel] [PATCH v7 08/16] gdbstub: add multiprocess support to Xfer:features:read:
  2018-11-23  9:17 [Qemu-devel] [PATCH v7 00/16] gdbstub: support for the multiprocess extension Luc Michel
                   ` (6 preceding siblings ...)
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 07/16] gdbstub: add multiprocess support to (f|s)ThreadInfo and ThreadExtraInfo Luc Michel
@ 2018-11-23  9:17 ` Luc Michel
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 09/16] gdbstub: add multiprocess support to gdb_vm_state_change() Luc Michel
                   ` (7 subsequent siblings)
  15 siblings, 0 replies; 45+ messages in thread
From: Luc Michel @ 2018-11-23  9:17 UTC (permalink / raw)
  To: qemu-devel
  Cc: Luc Michel, qemu-arm, Peter Maydell, saipava, edgari, alistair,
	Philippe Mathieu-Daudé, mark.burton, Eduardo Habkost

Change the Xfer:features:read: packet handling to support the
multiprocess extension. This packet is used to request the XML
description of the CPU. In multiprocess mode, different descriptions can
be sent for different processes.

This function now takes the process to send the description for as a
parameter, and uses a buffer in the process structure to store the
generated description.

It takes the first CPU of the process to generate the description.
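
For example, when GDB requests something like
"qXfer:features:read:target.xml:0,ffb" while a thread of inferior 2 is
selected, the stub now builds (and caches in the GDBProcess structure)
the XML describing the first CPU of that process, independently of the
description generated for the other processes.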

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
---
 gdbstub.c | 47 +++++++++++++++++++++++++++--------------------
 1 file changed, 27 insertions(+), 20 deletions(-)

diff --git a/gdbstub.c b/gdbstub.c
index 12d4e33c40..2be333bbd1 100644
--- a/gdbstub.c
+++ b/gdbstub.c
@@ -298,10 +298,12 @@ typedef struct GDBRegisterState {
 } GDBRegisterState;
 
 typedef struct GDBProcess {
     uint32_t pid;
     bool attached;
+
+    char target_xml[1024];
 } GDBProcess;
 
 enum RSState {
     RS_INACTIVE,
     RS_IDLE,
@@ -806,55 +808,57 @@ static CPUState *gdb_first_attached_cpu(const GDBState *s)
     }
 
     return cpu;
 }
 
-static const char *get_feature_xml(const char *p, const char **newp,
-                                   CPUClass *cc)
+static const char *get_feature_xml(const GDBState *s, const char *p,
+                                   const char **newp, GDBProcess *process)
 {
     size_t len;
     int i;
     const char *name;
-    static char target_xml[1024];
+    CPUState *cpu = get_first_cpu_in_process(s, process);
+    CPUClass *cc = CPU_GET_CLASS(cpu);
 
     len = 0;
     while (p[len] && p[len] != ':')
         len++;
     *newp = p + len;
 
     name = NULL;
     if (strncmp(p, "target.xml", len) == 0) {
+        char *buf = process->target_xml;
+        const size_t buf_sz = sizeof(process->target_xml);
+
         /* Generate the XML description for this CPU.  */
-        if (!target_xml[0]) {
+        if (!buf[0]) {
             GDBRegisterState *r;
-            CPUState *cpu = first_cpu;
 
-            pstrcat(target_xml, sizeof(target_xml),
+            pstrcat(buf, buf_sz,
                     "<?xml version=\"1.0\"?>"
                     "<!DOCTYPE target SYSTEM \"gdb-target.dtd\">"
                     "<target>");
             if (cc->gdb_arch_name) {
                 gchar *arch = cc->gdb_arch_name(cpu);
-                pstrcat(target_xml, sizeof(target_xml), "<architecture>");
-                pstrcat(target_xml, sizeof(target_xml), arch);
-                pstrcat(target_xml, sizeof(target_xml), "</architecture>");
+                pstrcat(buf, buf_sz, "<architecture>");
+                pstrcat(buf, buf_sz, arch);
+                pstrcat(buf, buf_sz, "</architecture>");
                 g_free(arch);
             }
-            pstrcat(target_xml, sizeof(target_xml), "<xi:include href=\"");
-            pstrcat(target_xml, sizeof(target_xml), cc->gdb_core_xml_file);
-            pstrcat(target_xml, sizeof(target_xml), "\"/>");
+            pstrcat(buf, buf_sz, "<xi:include href=\"");
+            pstrcat(buf, buf_sz, cc->gdb_core_xml_file);
+            pstrcat(buf, buf_sz, "\"/>");
             for (r = cpu->gdb_regs; r; r = r->next) {
-                pstrcat(target_xml, sizeof(target_xml), "<xi:include href=\"");
-                pstrcat(target_xml, sizeof(target_xml), r->xml);
-                pstrcat(target_xml, sizeof(target_xml), "\"/>");
+                pstrcat(buf, buf_sz, "<xi:include href=\"");
+                pstrcat(buf, buf_sz, r->xml);
+                pstrcat(buf, buf_sz, "\"/>");
             }
-            pstrcat(target_xml, sizeof(target_xml), "</target>");
+            pstrcat(buf, buf_sz, "</target>");
         }
-        return target_xml;
+        return buf;
     }
     if (cc->gdb_get_dynamic_xml) {
-        CPUState *cpu = first_cpu;
         char *xmlname = g_strndup(p, len);
         const char *xml = cc->gdb_get_dynamic_xml(cpu, xmlname);
 
         g_free(xmlname);
         if (xml) {
@@ -1260,10 +1264,11 @@ out:
 }
 
 static int gdb_handle_packet(GDBState *s, const char *line_buf)
 {
     CPUState *cpu;
+    GDBProcess *process;
     CPUClass *cc;
     const char *p;
     uint32_t pid, tid;
     int ch, reg_size, type, res;
     uint8_t mem_buf[MAX_PACKET_LENGTH];
@@ -1643,18 +1648,19 @@ static int gdb_handle_packet(GDBState *s, const char *line_buf)
         }
         if (strncmp(p, "Xfer:features:read:", 19) == 0) {
             const char *xml;
             target_ulong total_len;
 
-            cc = CPU_GET_CLASS(first_cpu);
+            process = gdb_get_cpu_process(s, s->g_cpu);
+            cc = CPU_GET_CLASS(s->g_cpu);
             if (cc->gdb_core_xml_file == NULL) {
                 goto unknown_command;
             }
 
             gdb_has_xml = true;
             p += 19;
-            xml = get_feature_xml(p, &p, cc);
+            xml = get_feature_xml(s, p, &p, process);
             if (!xml) {
                 snprintf(buf, sizeof(buf), "E00");
                 put_packet(s, buf);
                 break;
             }
@@ -2327,10 +2333,11 @@ static int find_cpu_clusters(Object *child, void *opaque)
          * runtime, we enforce here that the machine does not use a cluster ID
          * that would lead to PID 0. */
         assert(cluster->cluster_id != UINT32_MAX);
         process->pid = cluster->cluster_id + 1;
         process->attached = false;
+        process->target_xml[0] = '\0';
 
         return 0;
     }
 
     return object_child_foreach(child, find_cpu_clusters, opaque);
-- 
2.19.1

* [Qemu-devel] [PATCH v7 09/16] gdbstub: add multiprocess support to gdb_vm_state_change()
  2018-11-23  9:17 [Qemu-devel] [PATCH v7 00/16] gdbstub: support for the multiprocess extension Luc Michel
                   ` (7 preceding siblings ...)
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 08/16] gdbstub: add multiprocess support to Xfer:features:read: Luc Michel
@ 2018-11-23  9:17 ` Luc Michel
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 10/16] gdbstub: add multiprocess support to 'D' packets Luc Michel
                   ` (6 subsequent siblings)
  15 siblings, 0 replies; 45+ messages in thread
From: Luc Michel @ 2018-11-23  9:17 UTC (permalink / raw)
  To: qemu-devel
  Cc: Luc Michel, qemu-arm, Peter Maydell, saipava, edgari, alistair,
	Philippe Mathieu-Daudé, mark.burton, Eduardo Habkost

Add support for multiprocess extension in gdb_vm_state_change()
function.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
---
 gdbstub.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/gdbstub.c b/gdbstub.c
index 2be333bbd1..4c4f519dd9 100644
--- a/gdbstub.c
+++ b/gdbstub.c
@@ -1717,10 +1717,11 @@ void gdb_set_stop_cpu(CPUState *cpu)
 static void gdb_vm_state_change(void *opaque, int running, RunState state)
 {
     GDBState *s = gdbserver_state;
     CPUState *cpu = s->c_cpu;
     char buf[256];
+    char thread_id[16];
     const char *type;
     int ret;
 
     if (running || s->state == RS_INACTIVE) {
         return;
@@ -1728,10 +1729,18 @@ static void gdb_vm_state_change(void *opaque, int running, RunState state)
     /* Is there a GDB syscall waiting to be sent?  */
     if (s->current_syscall_cb) {
         put_packet(s, s->syscall_buf);
         return;
     }
+
+    if (cpu == NULL) {
+        /* No process attached */
+        return;
+    }
+
+    gdb_fmt_thread_id(s, cpu, thread_id, sizeof(thread_id));
+
     switch (state) {
     case RUN_STATE_DEBUG:
         if (cpu->watchpoint_hit) {
             switch (cpu->watchpoint_hit->flags & BP_MEM_ACCESS) {
             case BP_MEM_READ:
@@ -1745,12 +1754,12 @@ static void gdb_vm_state_change(void *opaque, int running, RunState state)
                 break;
             }
             trace_gdbstub_hit_watchpoint(type, cpu_gdb_index(cpu),
                     (target_ulong)cpu->watchpoint_hit->vaddr);
             snprintf(buf, sizeof(buf),
-                     "T%02xthread:%02x;%swatch:" TARGET_FMT_lx ";",
-                     GDB_SIGNAL_TRAP, cpu_gdb_index(cpu), type,
+                     "T%02xthread:%s;%swatch:" TARGET_FMT_lx ";",
+                     GDB_SIGNAL_TRAP, thread_id, type,
                      (target_ulong)cpu->watchpoint_hit->vaddr);
             cpu->watchpoint_hit = NULL;
             goto send_packet;
         } else {
             trace_gdbstub_hit_break();
@@ -1788,11 +1797,11 @@ static void gdb_vm_state_change(void *opaque, int running, RunState state)
         trace_gdbstub_hit_unknown(state);
         ret = GDB_SIGNAL_UNKNOWN;
         break;
     }
     gdb_set_stop_cpu(cpu);
-    snprintf(buf, sizeof(buf), "T%02xthread:%02x;", ret, cpu_gdb_index(cpu));
+    snprintf(buf, sizeof(buf), "T%02xthread:%s;", ret, thread_id);
 
 send_packet:
     put_packet(s, buf);
 
     /* disable single step if it was enabled */
-- 
2.19.1

* [Qemu-devel] [PATCH v7 10/16] gdbstub: add multiprocess support to 'D' packets
  2018-11-23  9:17 [Qemu-devel] [PATCH v7 00/16] gdbstub: support for the multiprocess extension Luc Michel
                   ` (8 preceding siblings ...)
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 09/16] gdbstub: add multiprocess support to gdb_vm_state_change() Luc Michel
@ 2018-11-23  9:17 ` Luc Michel
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 11/16] gdbstub: add support for extended mode packet Luc Michel
                   ` (5 subsequent siblings)
  15 siblings, 0 replies; 45+ messages in thread
From: Luc Michel @ 2018-11-23  9:17 UTC (permalink / raw)
  To: qemu-devel
  Cc: Luc Michel, qemu-arm, Peter Maydell, saipava, edgari, alistair,
	Philippe Mathieu-Daudé, mark.burton, Eduardo Habkost

'D' packets are used by GDB to detach from a process. In multiprocess
mode, the PID to detach from is sent in the request.
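
With the extension enabled, the request looks like "D;2" (detach from
the process with PID 2, in hex); without it, the packet stays a bare
"D" and PID 1 is assumed.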

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
---
 gdbstub.c | 60 ++++++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 53 insertions(+), 7 deletions(-)

diff --git a/gdbstub.c b/gdbstub.c
index 4c4f519dd9..0cccce1f38 100644
--- a/gdbstub.c
+++ b/gdbstub.c
@@ -1041,24 +1041,39 @@ static int gdb_breakpoint_remove(target_ulong addr, target_ulong len, int type)
     default:
         return -ENOSYS;
     }
 }
 
+static inline void gdb_cpu_breakpoint_remove_all(CPUState *cpu)
+{
+    cpu_breakpoint_remove_all(cpu, BP_GDB);
+#ifndef CONFIG_USER_ONLY
+    cpu_watchpoint_remove_all(cpu, BP_GDB);
+#endif
+}
+
+static void gdb_process_breakpoint_remove_all(const GDBState *s, GDBProcess *p)
+{
+    CPUState *cpu = get_first_cpu_in_process(s, p);
+
+    while (cpu) {
+        gdb_cpu_breakpoint_remove_all(cpu);
+        cpu = gdb_next_cpu_in_process(s, cpu);
+    }
+}
+
 static void gdb_breakpoint_remove_all(void)
 {
     CPUState *cpu;
 
     if (kvm_enabled()) {
         kvm_remove_all_breakpoints(gdbserver_state->c_cpu);
         return;
     }
 
     CPU_FOREACH(cpu) {
-        cpu_breakpoint_remove_all(cpu, BP_GDB);
-#ifndef CONFIG_USER_ONLY
-        cpu_watchpoint_remove_all(cpu, BP_GDB);
-#endif
+        gdb_cpu_breakpoint_remove_all(cpu);
     }
 }
 
 static void gdb_set_cpu_pc(GDBState *s, target_ulong pc)
 {
@@ -1333,13 +1348,44 @@ static int gdb_handle_packet(GDBState *s, const char *line_buf)
         /* Kill the target */
         error_report("QEMU: Terminated via GDBstub");
         exit(0);
     case 'D':
         /* Detach packet */
-        gdb_breakpoint_remove_all();
-        gdb_syscall_mode = GDB_SYS_DISABLED;
-        gdb_continue(s);
+        pid = 1;
+
+        if (s->multiprocess) {
+            unsigned long lpid;
+            if (*p != ';') {
+                put_packet(s, "E22");
+                break;
+            }
+
+            if (qemu_strtoul(p + 1, &p, 16, &lpid)) {
+                put_packet(s, "E22");
+                break;
+            }
+
+            pid = lpid;
+        }
+
+        process = gdb_get_process(s, pid);
+        gdb_process_breakpoint_remove_all(s, process);
+        process->attached = false;
+
+        if (pid == gdb_get_cpu_pid(s, s->c_cpu)) {
+            s->c_cpu = gdb_first_attached_cpu(s);
+        }
+
+        if (pid == gdb_get_cpu_pid(s, s->g_cpu)) {
+            s->g_cpu = gdb_first_attached_cpu(s);
+        }
+
+        if (s->c_cpu == NULL) {
+            /* No more process attached */
+            gdb_syscall_mode = GDB_SYS_DISABLED;
+            gdb_continue(s);
+        }
         put_packet(s, "OK");
         break;
     case 's':
         if (*p != '\0') {
             addr = strtoull(p, (char **)&p, 16);
-- 
2.19.1

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [Qemu-devel] [PATCH v7 11/16] gdbstub: add support for extended mode packet
  2018-11-23  9:17 [Qemu-devel] [PATCH v7 00/16] gdbstub: support for the multiprocess extension Luc Michel
                   ` (9 preceding siblings ...)
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 10/16] gdbstub: add multiprocess support to 'D' packets Luc Michel
@ 2018-11-23  9:17 ` Luc Michel
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 12/16] gdbstub: add support for vAttach packets Luc Michel
                   ` (4 subsequent siblings)
  15 siblings, 0 replies; 45+ messages in thread
From: Luc Michel @ 2018-11-23  9:17 UTC (permalink / raw)
  To: qemu-devel
  Cc: Luc Michel, qemu-arm, Peter Maydell, saipava, edgari, alistair,
	Philippe Mathieu-Daudé, mark.burton, Eduardo Habkost

Add support for the '!' extended mode packet. This is required for the
multiprocess extension.
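For reference, the exchange itself is trivial; the interesting part is
that GDB only uses the vAttach family of packets once extended mode has
been acknowledged:

    -> !
    <- OK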

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
---
 gdbstub.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/gdbstub.c b/gdbstub.c
index 0cccce1f38..5cc643b9e0 100644
--- a/gdbstub.c
+++ b/gdbstub.c
@@ -1296,10 +1296,13 @@ static int gdb_handle_packet(GDBState *s, const char *line_buf)
     trace_gdbstub_io_command(line_buf);
 
     p = line_buf;
     ch = *p++;
     switch(ch) {
+    case '!':
+        put_packet(s, "OK");
+        break;
     case '?':
         /* TODO: Make this return the correct value for user-mode.  */
         snprintf(buf, sizeof(buf), "T%02xthread:%s;", GDB_SIGNAL_TRAP,
                  gdb_fmt_thread_id(s, s->c_cpu, thread_id, sizeof(thread_id)));
         put_packet(s, buf);
-- 
2.19.1

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [Qemu-devel] [PATCH v7 12/16] gdbstub: add support for vAttach packets
  2018-11-23  9:17 [Qemu-devel] [PATCH v7 00/16] gdbstub: support for the multiprocess extension Luc Michel
                   ` (10 preceding siblings ...)
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 11/16] gdbstub: add support for extended mode packet Luc Michel
@ 2018-11-23  9:17 ` Luc Michel
  2018-11-25 21:00   ` Philippe Mathieu-Daudé
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 13/16] gdbstub: processes initialization on new peer connection Luc Michel
                   ` (3 subsequent siblings)
  15 siblings, 1 reply; 45+ messages in thread
From: Luc Michel @ 2018-11-23  9:17 UTC (permalink / raw)
  To: qemu-devel
  Cc: Luc Michel, qemu-arm, Peter Maydell, saipava, edgari, alistair,
	Philippe Mathieu-Daudé, mark.burton, Eduardo Habkost

Add support for the vAttach packets. In multiprocess mode, GDB sends
them to attach to additional processes.
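A hypothetical exchange (PID made up): the reply is the usual stop-reply
packet, here a SIGTRAP reported for the first CPU of the newly attached
process, using the multiprocess thread-id syntax:

    -> vAttach;2
    <- T05thread:p02.01;

An unknown PID, or a process containing no CPU, is answered with an error
packet ("E22" in this implementation).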

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
---
 gdbstub.c | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/gdbstub.c b/gdbstub.c
index 5cc643b9e0..69471ea914 100644
--- a/gdbstub.c
+++ b/gdbstub.c
@@ -1342,10 +1342,45 @@ static int gdb_handle_packet(GDBState *s, const char *line_buf)
                     break;
                 }
                 goto unknown_command;
             }
             break;
+        } else if (strncmp(p, "Attach;", 7) == 0) {
+            unsigned long pid;
+
+            p += 7;
+
+            if (qemu_strtoul(p, &p, 16, &pid)) {
+                put_packet(s, "E22");
+                break;
+            }
+
+            process = gdb_get_process(s, pid);
+
+            if (process == NULL) {
+                put_packet(s, "E22");
+                break;
+            }
+
+            cpu = get_first_cpu_in_process(s, process);
+
+            if (cpu == NULL) {
+                /* Refuse to attach an empty process */
+                put_packet(s, "E22");
+                break;
+            }
+
+            process->attached = true;
+
+            s->g_cpu = cpu;
+            s->c_cpu = cpu;
+
+            snprintf(buf, sizeof(buf), "T%02xthread:%s;", GDB_SIGNAL_TRAP,
+                     gdb_fmt_thread_id(s, cpu, thread_id, sizeof(thread_id)));
+
+            put_packet(s, buf);
+            break;
         } else {
             goto unknown_command;
         }
     case 'k':
         /* Kill the target */
-- 
2.19.1

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [Qemu-devel] [PATCH v7 13/16] gdbstub: processes initialization on new peer connection
  2018-11-23  9:17 [Qemu-devel] [PATCH v7 00/16] gdbstub: support for the multiprocess extension Luc Michel
                   ` (11 preceding siblings ...)
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 12/16] gdbstub: add support for vAttach packets Luc Michel
@ 2018-11-23  9:17 ` Luc Michel
  2018-11-25 21:02   ` Philippe Mathieu-Daudé
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 14/16] gdbstub: gdb_set_stop_cpu: ignore request when process is not attached Luc Michel
                   ` (2 subsequent siblings)
  15 siblings, 1 reply; 45+ messages in thread
From: Luc Michel @ 2018-11-23  9:17 UTC (permalink / raw)
  To: qemu-devel
  Cc: Luc Michel, qemu-arm, Peter Maydell, saipava, edgari, alistair,
	Philippe Mathieu-Daudé, mark.burton, Eduardo Habkost

When a new connection is established, we set the first process to be
attached, and the others detached. The first CPU of the first process
is selected as the current CPU.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
---
 gdbstub.c | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)

diff --git a/gdbstub.c b/gdbstub.c
index 69471ea914..6518324d46 100644
--- a/gdbstub.c
+++ b/gdbstub.c
@@ -2249,13 +2249,14 @@ static bool gdb_accept(void)
         close(fd);
         return false;
     }
 
     s = g_malloc0(sizeof(GDBState));
-    s->c_cpu = first_cpu;
-    s->g_cpu = first_cpu;
     create_unique_process(s);
+    s->processes[0].attached = true;
+    s->c_cpu = gdb_first_attached_cpu(s);
+    s->g_cpu = s->c_cpu;
     s->fd = fd;
     gdb_has_xml = false;
 
     gdbserver_state = s;
     return true;
@@ -2337,12 +2338,23 @@ static void gdb_chr_receive(void *opaque, const uint8_t *buf, int size)
     }
 }
 
 static void gdb_chr_event(void *opaque, int event)
 {
+    int i;
+    GDBState *s = (GDBState *) opaque;
+
     switch (event) {
     case CHR_EVENT_OPENED:
+        /* Start with first process attached, others detached */
+        for (i = 0; i < s->process_num; i++) {
+            s->processes[i].attached = !i;
+        }
+
+        s->c_cpu = gdb_first_attached_cpu(s);
+        s->g_cpu = s->c_cpu;
+
         vm_stop(RUN_STATE_PAUSED);
         gdb_has_xml = false;
         break;
     default:
         break;
@@ -2528,19 +2540,17 @@ int gdbserver_start(const char *device)
         mon_chr = s->mon_chr;
         cleanup_processes(s);
         memset(s, 0, sizeof(GDBState));
         s->mon_chr = mon_chr;
     }
-    s->c_cpu = first_cpu;
-    s->g_cpu = first_cpu;
 
     create_processes(s);
 
     if (chr) {
         qemu_chr_fe_init(&s->chr, chr, &error_abort);
         qemu_chr_fe_set_handlers(&s->chr, gdb_chr_can_receive, gdb_chr_receive,
-                                 gdb_chr_event, NULL, NULL, NULL, true);
+                                 gdb_chr_event, NULL, s, NULL, true);
     }
     s->state = chr ? RS_IDLE : RS_INACTIVE;
     s->mon_chr = mon_chr;
     s->current_syscall_cb = NULL;
 
-- 
2.19.1

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [Qemu-devel] [PATCH v7 14/16] gdbstub: gdb_set_stop_cpu: ignore request when process is not attached
  2018-11-23  9:17 [Qemu-devel] [PATCH v7 00/16] gdbstub: support for the multiprocess extension Luc Michel
                   ` (12 preceding siblings ...)
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 13/16] gdbstub: processes initialization on new peer connection Luc Michel
@ 2018-11-23  9:17 ` Luc Michel
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 15/16] gdbstub: add multiprocess extension support Luc Michel
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 16/16] arm/xlnx-zynqmp: put APUs and RPUs in separate CPU clusters Luc Michel
  15 siblings, 0 replies; 45+ messages in thread
From: Luc Michel @ 2018-11-23  9:17 UTC (permalink / raw)
  To: qemu-devel
  Cc: Luc Michel, qemu-arm, Peter Maydell, saipava, edgari, alistair,
	Philippe Mathieu-Daudé, mark.burton, Eduardo Habkost

When gdb_set_stop_cpu() is called with a CPU associated with a process
currently not attached by the GDB client, return without modifying the
stop CPU. Otherwise, GDB gets confused if it receives packets with a
thread-id it does not know about.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
---
 gdbstub.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/gdbstub.c b/gdbstub.c
index 6518324d46..6fc1630643 100644
--- a/gdbstub.c
+++ b/gdbstub.c
@@ -1791,10 +1791,19 @@ static int gdb_handle_packet(GDBState *s, const char *line_buf)
     return RS_IDLE;
 }
 
 void gdb_set_stop_cpu(CPUState *cpu)
 {
+    GDBProcess *p = gdb_get_cpu_process(gdbserver_state, cpu);
+
+    if (!p->attached) {
+        /* Having a stop CPU corresponding to a process that is not attached
+         * confuses GDB. So we ignore the request.
+         */
+        return;
+    }
+
     gdbserver_state->c_cpu = cpu;
     gdbserver_state->g_cpu = cpu;
 }
 
 #ifndef CONFIG_USER_ONLY
-- 
2.19.1

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [Qemu-devel] [PATCH v7 15/16] gdbstub: add multiprocess extension support
  2018-11-23  9:17 [Qemu-devel] [PATCH v7 00/16] gdbstub: support for the multiprocess extension Luc Michel
                   ` (13 preceding siblings ...)
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 14/16] gdbstub: gdb_set_stop_cpu: ignore request when process is not attached Luc Michel
@ 2018-11-23  9:17 ` Luc Michel
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 16/16] arm/xlnx-zynqmp: put APUs and RPUs in separate CPU clusters Luc Michel
  15 siblings, 0 replies; 45+ messages in thread
From: Luc Michel @ 2018-11-23  9:17 UTC (permalink / raw)
  To: qemu-devel
  Cc: Luc Michel, qemu-arm, Peter Maydell, saipava, edgari, alistair,
	Philippe Mathieu-Daudé, mark.burton, Eduardo Habkost

Add multiprocess extension support by enabling multiprocess mode when
the peer requests it, and by replying that we actually support it in the
qSupported reply packet.
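A hypothetical negotiation (feature lists abbreviated):

    -> qSupported:multiprocess+;swbreak+;hwbreak+;...
    <- PacketSize=1000;qXfer:features:read+;multiprocess+

The stub always advertises multiprocess+, but it only switches its packet
formats to the multiprocess variants when the peer advertised the feature
as well.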

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
---
 gdbstub.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/gdbstub.c b/gdbstub.c
index 6fc1630643..dc71e3f732 100644
--- a/gdbstub.c
+++ b/gdbstub.c
@@ -1725,10 +1725,16 @@ static int gdb_handle_packet(GDBState *s, const char *line_buf)
             snprintf(buf, sizeof(buf), "PacketSize=%x", MAX_PACKET_LENGTH);
             cc = CPU_GET_CLASS(first_cpu);
             if (cc->gdb_core_xml_file != NULL) {
                 pstrcat(buf, sizeof(buf), ";qXfer:features:read+");
             }
+
+            if (strstr(p, "multiprocess+")) {
+                s->multiprocess = true;
+            }
+            pstrcat(buf, sizeof(buf), ";multiprocess+");
+
             put_packet(s, buf);
             break;
         }
         if (strncmp(p, "Xfer:features:read:", 19) == 0) {
             const char *xml;
-- 
2.19.1

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [Qemu-devel] [PATCH v7 16/16] arm/xlnx-zynqmp: put APUs and RPUs in separate CPU clusters
  2018-11-23  9:17 [Qemu-devel] [PATCH v7 00/16] gdbstub: support for the multiprocess extension Luc Michel
                   ` (14 preceding siblings ...)
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 15/16] gdbstub: add multiprocess extension support Luc Michel
@ 2018-11-23  9:17 ` Luc Michel
  2018-11-25 21:10   ` Philippe Mathieu-Daudé
  15 siblings, 1 reply; 45+ messages in thread
From: Luc Michel @ 2018-11-23  9:17 UTC (permalink / raw)
  To: qemu-devel
  Cc: Luc Michel, qemu-arm, Peter Maydell, saipava, edgari, alistair,
	Philippe Mathieu-Daudé, mark.burton, Eduardo Habkost

Create two separate CPU clusters for APUs and RPUs.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
---
 include/hw/arm/xlnx-zynqmp.h |  3 +++
 hw/arm/xlnx-zynqmp.c         | 21 +++++++++++++++++----
 2 files changed, 20 insertions(+), 4 deletions(-)

diff --git a/include/hw/arm/xlnx-zynqmp.h b/include/hw/arm/xlnx-zynqmp.h
index 98f925ab84..591515c760 100644
--- a/include/hw/arm/xlnx-zynqmp.h
+++ b/include/hw/arm/xlnx-zynqmp.h
@@ -29,10 +29,11 @@
 #include "hw/dma/xlnx_dpdma.h"
 #include "hw/dma/xlnx-zdma.h"
 #include "hw/display/xlnx_dp.h"
 #include "hw/intc/xlnx-zynqmp-ipi.h"
 #include "hw/timer/xlnx-zynqmp-rtc.h"
+#include "hw/cpu/cluster.h"
 
 #define TYPE_XLNX_ZYNQMP "xlnx,zynqmp"
 #define XLNX_ZYNQMP(obj) OBJECT_CHECK(XlnxZynqMPState, (obj), \
                                        TYPE_XLNX_ZYNQMP)
 
@@ -75,10 +76,12 @@
 typedef struct XlnxZynqMPState {
     /*< private >*/
     DeviceState parent_obj;
 
     /*< public >*/
+    CPUClusterState apu_cluster;
+    CPUClusterState rpu_cluster;
     ARMCPU apu_cpu[XLNX_ZYNQMP_NUM_APU_CPUS];
     ARMCPU rpu_cpu[XLNX_ZYNQMP_NUM_RPU_CPUS];
     GICState gic;
     MemoryRegion gic_mr[XLNX_ZYNQMP_GIC_REGIONS][XLNX_ZYNQMP_GIC_ALIASES];
 
diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c
index c195040350..42a138074c 100644
--- a/hw/arm/xlnx-zynqmp.c
+++ b/hw/arm/xlnx-zynqmp.c
@@ -176,16 +176,22 @@ static void xlnx_zynqmp_create_rpu(XlnxZynqMPState *s, const char *boot_cpu,
 {
     Error *err = NULL;
     int i;
     int num_rpus = MIN(smp_cpus - XLNX_ZYNQMP_NUM_APU_CPUS, XLNX_ZYNQMP_NUM_RPU_CPUS);
 
+    object_initialize_child(OBJECT(s), "rpu-cluster", &s->rpu_cluster,
+                            sizeof(s->rpu_cluster), TYPE_CPU_CLUSTER,
+                            &error_abort, NULL);
+
+    qdev_init_nofail(DEVICE(&s->rpu_cluster));
+
     for (i = 0; i < num_rpus; i++) {
         char *name;
 
         object_initialize(&s->rpu_cpu[i], sizeof(s->rpu_cpu[i]),
                           "cortex-r5f-" TYPE_ARM_CPU);
-        object_property_add_child(OBJECT(s), "rpu-cpu[*]",
+        object_property_add_child(OBJECT(&s->rpu_cluster), "rpu-cpu[*]",
                                   OBJECT(&s->rpu_cpu[i]), &error_abort);
 
         name = object_get_canonical_path_component(OBJECT(&s->rpu_cpu[i]));
         if (strcmp(name, boot_cpu)) {
             /* Secondary CPUs start in PSCI powered-down state */
@@ -211,14 +217,19 @@ static void xlnx_zynqmp_init(Object *obj)
 {
     XlnxZynqMPState *s = XLNX_ZYNQMP(obj);
     int i;
     int num_apus = MIN(smp_cpus, XLNX_ZYNQMP_NUM_APU_CPUS);
 
+    object_initialize_child(obj, "apu-cluster", &s->apu_cluster,
+                            sizeof(s->apu_cluster), TYPE_CPU_CLUSTER,
+                            &error_abort, NULL);
+
     for (i = 0; i < num_apus; i++) {
-        object_initialize_child(obj, "apu-cpu[*]", &s->apu_cpu[i],
-                                sizeof(s->apu_cpu[i]),
-                                "cortex-a53-" TYPE_ARM_CPU, &error_abort, NULL);
+        object_initialize_child(OBJECT(&s->apu_cluster), "apu-cpu[*]",
+                                &s->apu_cpu[i], sizeof(s->apu_cpu[i]),
+                                "cortex-a53-" TYPE_ARM_CPU, &error_abort,
+                                NULL);
     }
 
     sysbus_init_child_obj(obj, "gic", &s->gic, sizeof(s->gic),
                           gic_class_name());
 
@@ -331,10 +342,12 @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
     qdev_prop_set_uint32(DEVICE(&s->gic), "num-cpu", num_apus);
     qdev_prop_set_bit(DEVICE(&s->gic), "has-security-extensions", s->secure);
     qdev_prop_set_bit(DEVICE(&s->gic),
                       "has-virtualization-extensions", s->virt);
 
+    qdev_init_nofail(DEVICE(&s->apu_cluster));
+
     /* Realize APUs before realizing the GIC. KVM requires this.  */
     for (i = 0; i < num_apus; i++) {
         char *name;
 
         object_property_set_int(OBJECT(&s->apu_cpu[i]), QEMU_PSCI_CONDUIT_SMC,
-- 
2.19.1

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* Re: [Qemu-devel] [PATCH v7 01/16] hw/cpu: introduce CPU clusters
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 01/16] hw/cpu: introduce CPU clusters Luc Michel
@ 2018-11-23 18:10   ` Eduardo Habkost
  2018-11-23 18:14     ` Peter Maydell
  2018-11-25 21:27     ` Philippe Mathieu-Daudé
  0 siblings, 2 replies; 45+ messages in thread
From: Eduardo Habkost @ 2018-11-23 18:10 UTC (permalink / raw)
  To: Luc Michel
  Cc: qemu-devel, Peter Maydell, alistair, mark.burton,
	Philippe Mathieu-Daudé, saipava, edgari, qemu-arm

Hi,

Sorry for not reviewing this series earlier.  I just stumbled
upon this part of the code:

On Fri, Nov 23, 2018 at 10:17:14AM +0100, Luc Michel wrote:
> This commit adds the cpu-cluster type. It aims at gathering CPUs from
> the same cluster in a machine.
> 
> For now it only has a `cluster-id` property.
> 
> Signed-off-by: Luc Michel <luc.michel@greensocs.com>
> Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
[...]
> +static void cpu_cluster_init(Object *obj)
> +{
> +    static uint32_t cluster_id_auto_increment;
> +    CPUClusterState *cluster = CPU_CLUSTER(obj);
> +
> +    cluster->cluster_id = cluster_id_auto_increment++;

I see that you implemented this after a suggestion from Philippe,
but I'm worried about this kind of side-effect on object/device
code.  I'm afraid this will bite us back in the future.  We were
bitten by problems caused by automatic cpu_index assignment on
CPU instance_init, and we took a while to fix that.

If you really want to do this and assign cluster_id
automatically, please do it on realize, where it won't have
unexpected side-effects after a simple `qom-list-properties` QMP
command.

I would also add a huge warning above the cluster_id field
declaration, mentioning that the field is not supposed to be used
for anything except debugging.  I think there's a large risk of
people trying to reuse the field incorrectly, just like cpu_index
was reused for multiple (conflicting) purposes in the past.
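A minimal sketch of that alternative, assuming a realize handler is added
to the cluster device (hypothetical code; a real version would also need a
way to tell an unset cluster-id apart from an explicitly configured 0):

    static void cpu_cluster_realize(DeviceState *dev, Error **errp)
    {
        static uint32_t cluster_id_auto_increment;
        CPUClusterState *cluster = CPU_CLUSTER(dev);

        /* Assign the ID at realize time, so that merely instantiating the
         * object (e.g. through qom-list-properties) has no side effect. */
        cluster->cluster_id = cluster_id_auto_increment++;
    }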


> +}
> +
> +static Property cpu_cluster_properties[] = {
> +    DEFINE_PROP_UINT32("cluster-id", CPUClusterState, cluster_id, 0),
> +    DEFINE_PROP_END_OF_LIST()
> +};
[...]

-- 
Eduardo

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Qemu-devel] [PATCH v7 01/16] hw/cpu: introduce CPU clusters
  2018-11-23 18:10   ` Eduardo Habkost
@ 2018-11-23 18:14     ` Peter Maydell
  2018-11-23 18:24       ` Eduardo Habkost
  2018-11-25 21:27     ` Philippe Mathieu-Daudé
  1 sibling, 1 reply; 45+ messages in thread
From: Peter Maydell @ 2018-11-23 18:14 UTC (permalink / raw)
  To: Eduardo Habkost
  Cc: Luc Michel, QEMU Developers, Alistair Francis, Mark Burton,
	Philippe Mathieu-Daudé, Sai Pavan Boddu, Edgar Iglesias,
	qemu-arm

On Fri, 23 Nov 2018 at 18:11, Eduardo Habkost <ehabkost@redhat.com> wrote:
> On Fri, Nov 23, 2018 at 10:17:14AM +0100, Luc Michel wrote:
> > This commit adds the cpu-cluster type. It aims at gathering CPUs from
> > the same cluster in a machine.
> >
> > For now it only has a `cluster-id` property.
> >
> > Signed-off-by: Luc Michel <luc.michel@greensocs.com>
> > Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
> > Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
> [...]
> > +static void cpu_cluster_init(Object *obj)
> > +{
> > +    static uint32_t cluster_id_auto_increment;
> > +    CPUClusterState *cluster = CPU_CLUSTER(obj);
> > +
> > +    cluster->cluster_id = cluster_id_auto_increment++;
>
> I see that you implemented this after a suggestion from Philippe,
> but I'm worried about this kind of side-effect on object/device
> code.  I'm afraid this will bite us back in the future.  We were
> bitten by problems caused by automatic cpu_index assignment on
> CPU instance_init, and we took a while to fix that.
>
> If you really want to do this and assign cluster_id
> automatically, please do it on realize, where it won't have
> unexpected side-effects after a simple `qom-list-properties` QMP
> command.
>
> I would also add a huge warning above the cluster_id field
> declaration, mentioning that the field is not supposed to be used
> for anything except debugging.  I think there's a large risk of
> people trying to reuse the field incorrectly, just like cpu_index
> was reused for multiple (conflicting) purposes in the past.

One thing I would like to do with this new "cpu cluster"
concept is to use it to handle a problem we have at the
moment with TCG, where we assume all CPUs have the same
view of physical memory (and so if CPU A executes from physical
address X it can share translated code with CPU B executing
from physical address X). The idea is that we should include
the CPU cluster number in the TCG hash key that we use to
look up cached translation blocks, so that only CPUs in
the same cluster (assumed to have the same view of memory
and to be identical) share TBs.

If we don't have a unique integer key for the cluster, what
should we use instead ?
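A rough sketch of the idea, purely illustrative (the real TB lookup key in
the TCG core has more inputs and different names):

    #include <stdint.h>

    /* Hypothetical: fold the cluster index into the TB lookup key so that
     * CPUs from different clusters never share cached translations. */
    static uint32_t tb_lookup_key(uint64_t phys_pc, uint64_t pc,
                                  uint32_t flags, uint32_t cluster_id)
    {
        uint32_t h = (uint32_t)(phys_pc ^ (phys_pc >> 32));
        h = h * 31 + (uint32_t)(pc ^ (pc >> 32));
        h = h * 31 + flags;
        h = h * 31 + cluster_id;
        return h;
    }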

thanks
-- PMM

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Qemu-devel] [PATCH v7 01/16] hw/cpu: introduce CPU clusters
  2018-11-23 18:14     ` Peter Maydell
@ 2018-11-23 18:24       ` Eduardo Habkost
  2018-11-23 18:53         ` Peter Maydell
  0 siblings, 1 reply; 45+ messages in thread
From: Eduardo Habkost @ 2018-11-23 18:24 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Luc Michel, QEMU Developers, Alistair Francis, Mark Burton,
	Philippe Mathieu-Daudé, Sai Pavan Boddu, Edgar Iglesias,
	qemu-arm

On Fri, Nov 23, 2018 at 06:14:28PM +0000, Peter Maydell wrote:
> On Fri, 23 Nov 2018 at 18:11, Eduardo Habkost <ehabkost@redhat.com> wrote:
> > On Fri, Nov 23, 2018 at 10:17:14AM +0100, Luc Michel wrote:
> > > This commit adds the cpu-cluster type. It aims at gathering CPUs from
> > > the same cluster in a machine.
> > >
> > > For now it only has a `cluster-id` property.
> > >
> > > Signed-off-by: Luc Michel <luc.michel@greensocs.com>
> > > Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
> > > Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > > Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > > Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
> > [...]
> > > +static void cpu_cluster_init(Object *obj)
> > > +{
> > > +    static uint32_t cluster_id_auto_increment;
> > > +    CPUClusterState *cluster = CPU_CLUSTER(obj);
> > > +
> > > +    cluster->cluster_id = cluster_id_auto_increment++;
> >
> > I see that you implemented this after a suggestion from Philippe,
> > but I'm worried about this kind of side-effect on object/device
> > code.  I'm afraid this will bite us back in the future.  We were
> > bitten by problems caused by automatic cpu_index assignment on
> > CPU instance_init, and we took a while to fix that.
> >
> > If you really want to do this and assign cluster_id
> > automatically, please do it on realize, where it won't have
> > unexpected side-effects after a simple `qom-list-properties` QMP
> > command.
> >
> > I would also add a huge warning above the cluster_id field
> > declaration, mentioning that the field is not supposed to be used
> > for anything except debugging.  I think there's a large risk of
> > people trying to reuse the field incorrectly, just like cpu_index
> > was reused for multiple (conflicting) purposes in the past.
> 
> One thing I would like to do with this new "cpu cluster"
> concept is to use it to handle a problem we have at the
> moment with TCG, where we assume all CPUs have the same
> view of physical memory (and so if CPU A executes from physical
> address X it can share translated code with CPU B executing
> from physical address X). The idea is that we should include
> the CPU cluster number in the TCG hash key that we use to
> look up cached translation blocks, so that only CPUs in
> the same cluster (assumed to have the same view of memory
> and to be identical) share TBs.
> 
> If we don't have a unique integer key for the cluster, what
> should we use instead ?

This sounds like a reasonable use of cluster_id as implemented in
this patch.  The ID would be only used internally and not exposed
to the outside, right?

I'm more worried about cases where we could end up exposing the
ID in an external interface (either to guests, or through QMP or
the command-line).  This happened to cpu_index and we took a long
time to fix the mess.

-- 
Eduardo

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Qemu-devel] [PATCH v7 01/16] hw/cpu: introduce CPU clusters
  2018-11-23 18:24       ` Eduardo Habkost
@ 2018-11-23 18:53         ` Peter Maydell
  2018-11-23 22:38           ` Eduardo Habkost
  0 siblings, 1 reply; 45+ messages in thread
From: Peter Maydell @ 2018-11-23 18:53 UTC (permalink / raw)
  To: Eduardo Habkost
  Cc: Luc Michel, QEMU Developers, Alistair Francis, Mark Burton,
	Philippe Mathieu-Daudé, Sai Pavan Boddu, Edgar Iglesias,
	qemu-arm

On Fri, 23 Nov 2018 at 18:24, Eduardo Habkost <ehabkost@redhat.com> wrote:
>
> On Fri, Nov 23, 2018 at 06:14:28PM +0000, Peter Maydell wrote:
> > One thing I would like to do with this new "cpu cluster"
> > concept is to use it to handle a problem we have at the
> > moment with TCG, where we assume all CPUs have the same
> > view of physical memory (and so if CPU A executes from physical
> > address X it can share translated code with CPU B executing
> > from physical address X). The idea is that we should include
> > the CPU cluster number in the TCG hash key that we use to
> > look up cached translation blocks, so that only CPUs in
> > the same cluster (assumed to have the same view of memory
> > and to be identical) share TBs.
> >
> > If we don't have a unique integer key for the cluster, what
> > should we use instead ?
>
> This sounds like a reasonable use of cluster_id as implemented in
> this patch.  The ID would be only used internally and not exposed
> to the outside, right?

It would be internal to QEMU (not exposed to the guest or
to the user), yes.

> I'm more worried about cases where we could end up exposing the
> ID in an external interface (either to guests, or through QMP or
> the command-line).  This happened to cpu_index and we took a long
> time to fix the mess.

I see, thanks.

My other question about this code was a slightly different one -- are
we guaranteed to be holding the iothread lock when we create
new QOM objects? (ie that we won't have races between two threads
which both try to create new objects and increment the variable)

thanks
-- PMM

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Qemu-devel] [PATCH v7 01/16] hw/cpu: introduce CPU clusters
  2018-11-23 18:53         ` Peter Maydell
@ 2018-11-23 22:38           ` Eduardo Habkost
  0 siblings, 0 replies; 45+ messages in thread
From: Eduardo Habkost @ 2018-11-23 22:38 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Luc Michel, QEMU Developers, Alistair Francis, Mark Burton,
	Philippe Mathieu-Daudé, Sai Pavan Boddu, Edgar Iglesias,
	qemu-arm

On Fri, Nov 23, 2018 at 06:53:07PM +0000, Peter Maydell wrote:
> On Fri, 23 Nov 2018 at 18:24, Eduardo Habkost <ehabkost@redhat.com> wrote:
> >
> > On Fri, Nov 23, 2018 at 06:14:28PM +0000, Peter Maydell wrote:
> > > One thing I would like to do with this new "cpu cluster"
> > > concept is to use it to handle a problem we have at the
> > > moment with TCG, where we assume all CPUs have the same
> > > view of physical memory (and so if CPU A executes from physical
> > > address X it can share translated code with CPU B executing
> > > from physical address X). The idea is that we should include
> > > the CPU cluster number in the TCG hash key that we use to
> > > look up cached translation blocks, so that only CPUs in
> > > the same cluster (assumed to have the same view of memory
> > > and to be identical) share TBs.
> > >
> > > If we don't have a unique integer key for the cluster, what
> > > should we use instead ?
> >
> > This sounds like a reasonable use of cluster_id as implemented in
> > this patch.  The ID would be only used internally and not exposed
> > to the outside, right?
> 
> It would be internal to QEMU (not exposed to the guest or
> to the user), yes.
> 
> > I'm more worried about cases where we could end up exposing the
> > ID in an external interface (either to guests, or through QMP or
> > the command-line).  This happened to cpu_index and we took a long
> > time to fix the mess.
> 
> I see, thanks.
> 
> My other question about this code was a slightly different one -- are
> we guaranteed to be holding the iothread lock when we create
> new QOM objects? (ie that we won't have races between two threads
> which both try to create new objects and increment the variable)

I assume we are, because type_initialize() (called by
object_new()) isn't thread-safe.

-- 
Eduardo

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Qemu-devel] [PATCH v7 02/16] gdbstub: introduce GDB processes
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 02/16] gdbstub: introduce GDB processes Luc Michel
@ 2018-11-25 20:58   ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 45+ messages in thread
From: Philippe Mathieu-Daudé @ 2018-11-25 20:58 UTC (permalink / raw)
  To: Luc Michel, qemu-devel
  Cc: Peter Maydell, Eduardo Habkost, alistair, mark.burton,
	Philippe Mathieu-Daudé, saipava, edgari, qemu-arm

On 23/11/18 10:17, Luc Michel wrote:
> Add a structure GDBProcess that represent processes from the GDB
> semantic point of view.
> 
> CPUs can be split into different processes, by grouping them under
> different cpu-cluster objects.  Each occurrence of a cpu-cluster object
> implies the existence of the corresponding process in the GDB stub. The
> GDB process ID is derived from the corresponding cluster ID as follows:
> 
>   GDB PID = cluster ID + 1
> 
> This is because PIDs -1 and 0 are reserved in GDB and cannot be used by
> processes.
> 
> When no such containers are found, all the CPUs are put in a unique GDB
> process (create_unique_process()). This is also the case when compiled
> in user mode, where multiple processes do not make much sense for now.
> 
> Signed-off-by: Luc Michel <luc.michel@greensocs.com>
> Acked-by: Alistair Francis <alistair.francis@wdc.com>
> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
> ---
>  gdbstub.c | 87 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 87 insertions(+)
> 
> diff --git a/gdbstub.c b/gdbstub.c
> index c4e4f9f082..26f5a7449a 100644
> --- a/gdbstub.c
> +++ b/gdbstub.c
> @@ -27,10 +27,11 @@
>  #include "monitor/monitor.h"
>  #include "chardev/char.h"
>  #include "chardev/char-fe.h"
>  #include "sysemu/sysemu.h"
>  #include "exec/gdbstub.h"
> +#include "hw/cpu/cluster.h"
>  #endif
>  
>  #define MAX_PACKET_LENGTH 4096
>  
>  #include "qemu/sockets.h"
> @@ -294,10 +295,15 @@ typedef struct GDBRegisterState {
>      gdb_reg_cb set_reg;
>      const char *xml;
>      struct GDBRegisterState *next;
>  } GDBRegisterState;
>  
> +typedef struct GDBProcess {
> +    uint32_t pid;
> +    bool attached;
> +} GDBProcess;
> +
>  enum RSState {
>      RS_INACTIVE,
>      RS_IDLE,
>      RS_GETLINE,
>      RS_GETLINE_ESC,
> @@ -322,10 +328,13 @@ typedef struct GDBState {
>      int running_state;
>  #else
>      CharBackend chr;
>      Chardev *mon_chr;
>  #endif
> +    bool multiprocess;
> +    GDBProcess *processes;
> +    int process_num;
>      char syscall_buf[256];
>      gdb_syscall_complete_cb current_syscall_cb;
>  } GDBState;
>  
>  /* By default use no IRQs and no timers while single stepping so as to
> @@ -1749,10 +1758,24 @@ void gdb_exit(CPUArchState *env, int code)
>  #ifndef CONFIG_USER_ONLY
>    qemu_chr_fe_deinit(&s->chr, true);
>  #endif
>  }
>  
> +/*
> + * Create a unique process containing all the CPUs.
> + */
> +static void create_unique_process(GDBState *s)
> +{
> +    GDBProcess *process;
> +
> +    s->processes = g_malloc0(sizeof(GDBProcess));

good place to use:

       s->processes = g_new0(GDBProcess, 1);

Regardless:
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>

> +    s->process_num = 1;
> +    process = &s->processes[0];
> +
> +    process->pid = 1;
> +}
> +
>  #ifdef CONFIG_USER_ONLY
>  int
>  gdb_handlesig(CPUState *cpu, int sig)
>  {
>      GDBState *s;
> @@ -1846,10 +1869,11 @@ static bool gdb_accept(void)
>      }
>  
>      s = g_malloc0(sizeof(GDBState));
>      s->c_cpu = first_cpu;
>      s->g_cpu = first_cpu;
> +    create_unique_process(s);
>      s->fd = fd;
>      gdb_has_xml = false;
>  
>      gdbserver_state = s;
>      return true;
> @@ -2002,10 +2026,69 @@ static const TypeInfo char_gdb_type_info = {
>      .name = TYPE_CHARDEV_GDB,
>      .parent = TYPE_CHARDEV,
>      .class_init = char_gdb_class_init,
>  };
>  
> +static int find_cpu_clusters(Object *child, void *opaque)
> +{
> +    if (object_dynamic_cast(child, TYPE_CPU_CLUSTER)) {
> +        GDBState *s = (GDBState *) opaque;
> +        CPUClusterState *cluster = CPU_CLUSTER(child);
> +        GDBProcess *process;
> +
> +        s->processes = g_renew(GDBProcess, s->processes, ++s->process_num);
> +
> +        process = &s->processes[s->process_num - 1];
> +
> +        /*
> +         * GDB process IDs -1 and 0 are reserved. To avoid subtle errors at
> +         * runtime, we enforce here that the machine does not use a cluster ID
> +         * that would lead to PID 0. */
> +        assert(cluster->cluster_id != UINT32_MAX);
> +        process->pid = cluster->cluster_id + 1;
> +        process->attached = false;
> +
> +        return 0;
> +    }
> +
> +    return object_child_foreach(child, find_cpu_clusters, opaque);
> +}
> +
> +static int pid_order(const void *a, const void *b)
> +{
> +    GDBProcess *pa = (GDBProcess *) a;
> +    GDBProcess *pb = (GDBProcess *) b;
> +
> +    if (pa->pid < pb->pid) {
> +        return -1;
> +    } else if (pa->pid > pb->pid) {
> +        return 1;
> +    } else {
> +        return 0;
> +    }
> +}
> +
> +static void create_processes(GDBState *s)
> +{
> +    object_child_foreach(object_get_root(), find_cpu_clusters, s);
> +
> +    if (!s->processes) {
> +        /* No CPU cluster specified by the machine */
> +        create_unique_process(s);
> +    } else {
> +        /* Sort by PID */
> +        qsort(s->processes, s->process_num, sizeof(s->processes[0]), pid_order);
> +    }
> +}
> +
> +static void cleanup_processes(GDBState *s)
> +{
> +    g_free(s->processes);
> +    s->process_num = 0;
> +    s->processes = NULL;
> +}
> +
>  int gdbserver_start(const char *device)
>  {
>      trace_gdbstub_op_start(device);
>  
>      GDBState *s;
> @@ -2058,15 +2141,19 @@ int gdbserver_start(const char *device)
>                                     NULL, &error_abort);
>          monitor_init(mon_chr, 0);
>      } else {
>          qemu_chr_fe_deinit(&s->chr, true);
>          mon_chr = s->mon_chr;
> +        cleanup_processes(s);
>          memset(s, 0, sizeof(GDBState));
>          s->mon_chr = mon_chr;
>      }
>      s->c_cpu = first_cpu;
>      s->g_cpu = first_cpu;
> +
> +    create_processes(s);
> +
>      if (chr) {
>          qemu_chr_fe_init(&s->chr, chr, &error_abort);
>          qemu_chr_fe_set_handlers(&s->chr, gdb_chr_can_receive, gdb_chr_receive,
>                                   gdb_chr_event, NULL, NULL, NULL, true);
>      }
> 

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Qemu-devel] [PATCH v7 12/16] gdbstub: add support for vAttach packets
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 12/16] gdbstub: add support for vAttach packets Luc Michel
@ 2018-11-25 21:00   ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 45+ messages in thread
From: Philippe Mathieu-Daudé @ 2018-11-25 21:00 UTC (permalink / raw)
  To: Luc Michel, qemu-devel
  Cc: Peter Maydell, Eduardo Habkost, alistair, mark.burton,
	Philippe Mathieu-Daudé, saipava, edgari, qemu-arm

On 23/11/18 10:17, Luc Michel wrote:
> Add support for the vAttach packets. In multiprocess mode, GDB sends
> them to attach to additional processes.
> 
> Signed-off-by: Luc Michel <luc.michel@greensocs.com>
> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
> Acked-by: Alistair Francis <alistair.francis@wdc.com>

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>

> ---
>  gdbstub.c | 35 +++++++++++++++++++++++++++++++++++
>  1 file changed, 35 insertions(+)
> 
> diff --git a/gdbstub.c b/gdbstub.c
> index 5cc643b9e0..69471ea914 100644
> --- a/gdbstub.c
> +++ b/gdbstub.c
> @@ -1342,10 +1342,45 @@ static int gdb_handle_packet(GDBState *s, const char *line_buf)
>                      break;
>                  }
>                  goto unknown_command;
>              }
>              break;
> +        } else if (strncmp(p, "Attach;", 7) == 0) {
> +            unsigned long pid;
> +
> +            p += 7;
> +
> +            if (qemu_strtoul(p, &p, 16, &pid)) {
> +                put_packet(s, "E22");
> +                break;
> +            }
> +
> +            process = gdb_get_process(s, pid);
> +
> +            if (process == NULL) {
> +                put_packet(s, "E22");
> +                break;
> +            }
> +
> +            cpu = get_first_cpu_in_process(s, process);
> +
> +            if (cpu == NULL) {
> +                /* Refuse to attach an empty process */
> +                put_packet(s, "E22");
> +                break;
> +            }
> +
> +            process->attached = true;
> +
> +            s->g_cpu = cpu;
> +            s->c_cpu = cpu;
> +
> +            snprintf(buf, sizeof(buf), "T%02xthread:%s;", GDB_SIGNAL_TRAP,
> +                     gdb_fmt_thread_id(s, cpu, thread_id, sizeof(thread_id)));
> +
> +            put_packet(s, buf);
> +            break;
>          } else {
>              goto unknown_command;
>          }
>      case 'k':
>          /* Kill the target */
> 

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Qemu-devel] [PATCH v7 13/16] gdbstub: processes initialization on new peer connection
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 13/16] gdbstub: processes initialization on new peer connection Luc Michel
@ 2018-11-25 21:02   ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 45+ messages in thread
From: Philippe Mathieu-Daudé @ 2018-11-25 21:02 UTC (permalink / raw)
  To: Luc Michel, qemu-devel
  Cc: Peter Maydell, Eduardo Habkost, alistair, mark.burton,
	Philippe Mathieu-Daudé, saipava, edgari, qemu-arm

On 23/11/18 10:17, Luc Michel wrote:
> When a new connection is established, we set the first process to be
> attached, and the others detached. The first CPU of the first process
> is selected as the current CPU.
> 
> Signed-off-by: Luc Michel <luc.michel@greensocs.com>
> Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>

> ---
>  gdbstub.c | 20 +++++++++++++++-----
>  1 file changed, 15 insertions(+), 5 deletions(-)
> 
> diff --git a/gdbstub.c b/gdbstub.c
> index 69471ea914..6518324d46 100644
> --- a/gdbstub.c
> +++ b/gdbstub.c
> @@ -2249,13 +2249,14 @@ static bool gdb_accept(void)
>          close(fd);
>          return false;
>      }
>  
>      s = g_malloc0(sizeof(GDBState));
> -    s->c_cpu = first_cpu;
> -    s->g_cpu = first_cpu;
>      create_unique_process(s);
> +    s->processes[0].attached = true;
> +    s->c_cpu = gdb_first_attached_cpu(s);
> +    s->g_cpu = s->c_cpu;
>      s->fd = fd;
>      gdb_has_xml = false;
>  
>      gdbserver_state = s;
>      return true;
> @@ -2337,12 +2338,23 @@ static void gdb_chr_receive(void *opaque, const uint8_t *buf, int size)
>      }
>  }
>  
>  static void gdb_chr_event(void *opaque, int event)
>  {
> +    int i;
> +    GDBState *s = (GDBState *) opaque;
> +
>      switch (event) {
>      case CHR_EVENT_OPENED:
> +        /* Start with first process attached, others detached */
> +        for (i = 0; i < s->process_num; i++) {
> +            s->processes[i].attached = !i;
> +        }
> +
> +        s->c_cpu = gdb_first_attached_cpu(s);
> +        s->g_cpu = s->c_cpu;
> +
>          vm_stop(RUN_STATE_PAUSED);
>          gdb_has_xml = false;
>          break;
>      default:
>          break;
> @@ -2528,19 +2540,17 @@ int gdbserver_start(const char *device)
>          mon_chr = s->mon_chr;
>          cleanup_processes(s);
>          memset(s, 0, sizeof(GDBState));
>          s->mon_chr = mon_chr;
>      }
> -    s->c_cpu = first_cpu;
> -    s->g_cpu = first_cpu;
>  
>      create_processes(s);
>  
>      if (chr) {
>          qemu_chr_fe_init(&s->chr, chr, &error_abort);
>          qemu_chr_fe_set_handlers(&s->chr, gdb_chr_can_receive, gdb_chr_receive,
> -                                 gdb_chr_event, NULL, NULL, NULL, true);
> +                                 gdb_chr_event, NULL, s, NULL, true);
>      }
>      s->state = chr ? RS_IDLE : RS_INACTIVE;
>      s->mon_chr = mon_chr;
>      s->current_syscall_cb = NULL;
>  
> 

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Qemu-devel] [PATCH v7 16/16] arm/xlnx-zynqmp: put APUs and RPUs in separate CPU clusters
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 16/16] arm/xlnx-zynqmp: put APUs and RPUs in separate CPU clusters Luc Michel
@ 2018-11-25 21:10   ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 45+ messages in thread
From: Philippe Mathieu-Daudé @ 2018-11-25 21:10 UTC (permalink / raw)
  To: Luc Michel, qemu-devel
  Cc: Peter Maydell, Eduardo Habkost, alistair, mark.burton,
	Philippe Mathieu-Daudé, saipava, edgari, qemu-arm

On 23/11/18 10:17, Luc Michel wrote:
> Create two separate CPU clusters for APUs and RPUs.
> 
> Signed-off-by: Luc Michel <luc.michel@greensocs.com>
> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
> ---
>  include/hw/arm/xlnx-zynqmp.h |  3 +++
>  hw/arm/xlnx-zynqmp.c         | 21 +++++++++++++++++----
>  2 files changed, 20 insertions(+), 4 deletions(-)
> 
> diff --git a/include/hw/arm/xlnx-zynqmp.h b/include/hw/arm/xlnx-zynqmp.h
> index 98f925ab84..591515c760 100644
> --- a/include/hw/arm/xlnx-zynqmp.h
> +++ b/include/hw/arm/xlnx-zynqmp.h
> @@ -29,10 +29,11 @@
>  #include "hw/dma/xlnx_dpdma.h"
>  #include "hw/dma/xlnx-zdma.h"
>  #include "hw/display/xlnx_dp.h"
>  #include "hw/intc/xlnx-zynqmp-ipi.h"
>  #include "hw/timer/xlnx-zynqmp-rtc.h"
> +#include "hw/cpu/cluster.h"
>  
>  #define TYPE_XLNX_ZYNQMP "xlnx,zynqmp"
>  #define XLNX_ZYNQMP(obj) OBJECT_CHECK(XlnxZynqMPState, (obj), \
>                                         TYPE_XLNX_ZYNQMP)
>  
> @@ -75,10 +76,12 @@
>  typedef struct XlnxZynqMPState {
>      /*< private >*/
>      DeviceState parent_obj;
>  
>      /*< public >*/
> +    CPUClusterState apu_cluster;
> +    CPUClusterState rpu_cluster;
>      ARMCPU apu_cpu[XLNX_ZYNQMP_NUM_APU_CPUS];
>      ARMCPU rpu_cpu[XLNX_ZYNQMP_NUM_RPU_CPUS];
>      GICState gic;
>      MemoryRegion gic_mr[XLNX_ZYNQMP_GIC_REGIONS][XLNX_ZYNQMP_GIC_ALIASES];
>  
> diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c
> index c195040350..42a138074c 100644
> --- a/hw/arm/xlnx-zynqmp.c
> +++ b/hw/arm/xlnx-zynqmp.c
> @@ -176,16 +176,22 @@ static void xlnx_zynqmp_create_rpu(XlnxZynqMPState *s, const char *boot_cpu,
>  {
>      Error *err = NULL;
>      int i;
>      int num_rpus = MIN(smp_cpus - XLNX_ZYNQMP_NUM_APU_CPUS, XLNX_ZYNQMP_NUM_RPU_CPUS);
>  
> +    object_initialize_child(OBJECT(s), "rpu-cluster", &s->rpu_cluster,
> +                            sizeof(s->rpu_cluster), TYPE_CPU_CLUSTER,
> +                            &error_abort, NULL);
> +
> +    qdev_init_nofail(DEVICE(&s->rpu_cluster));
> +
>      for (i = 0; i < num_rpus; i++) {
>          char *name;
>  
>          object_initialize(&s->rpu_cpu[i], sizeof(s->rpu_cpu[i]),
>                            "cortex-r5f-" TYPE_ARM_CPU);
> -        object_property_add_child(OBJECT(s), "rpu-cpu[*]",
> +        object_property_add_child(OBJECT(&s->rpu_cluster), "rpu-cpu[*]",
>                                    OBJECT(&s->rpu_cpu[i]), &error_abort);
>  
>          name = object_get_canonical_path_component(OBJECT(&s->rpu_cpu[i]));
>          if (strcmp(name, boot_cpu)) {
>              /* Secondary CPUs start in PSCI powered-down state */
> @@ -211,14 +217,19 @@ static void xlnx_zynqmp_init(Object *obj)
>  {
>      XlnxZynqMPState *s = XLNX_ZYNQMP(obj);
>      int i;
>      int num_apus = MIN(smp_cpus, XLNX_ZYNQMP_NUM_APU_CPUS);
>  
> +    object_initialize_child(obj, "apu-cluster", &s->apu_cluster,
> +                            sizeof(s->apu_cluster), TYPE_CPU_CLUSTER,
> +                            &error_abort, NULL);

This is now very clean :) Thanks for the various efforts/versions
improving this series.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>

> +
>      for (i = 0; i < num_apus; i++) {
> -        object_initialize_child(obj, "apu-cpu[*]", &s->apu_cpu[i],
> -                                sizeof(s->apu_cpu[i]),
> -                                "cortex-a53-" TYPE_ARM_CPU, &error_abort, NULL);
> +        object_initialize_child(OBJECT(&s->apu_cluster), "apu-cpu[*]",
> +                                &s->apu_cpu[i], sizeof(s->apu_cpu[i]),
> +                                "cortex-a53-" TYPE_ARM_CPU, &error_abort,
> +                                NULL);
>      }
>  
>      sysbus_init_child_obj(obj, "gic", &s->gic, sizeof(s->gic),
>                            gic_class_name());
>  
> @@ -331,10 +342,12 @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
>      qdev_prop_set_uint32(DEVICE(&s->gic), "num-cpu", num_apus);
>      qdev_prop_set_bit(DEVICE(&s->gic), "has-security-extensions", s->secure);
>      qdev_prop_set_bit(DEVICE(&s->gic),
>                        "has-virtualization-extensions", s->virt);
>  
> +    qdev_init_nofail(DEVICE(&s->apu_cluster));
> +
>      /* Realize APUs before realizing the GIC. KVM requires this.  */
>      for (i = 0; i < num_apus; i++) {
>          char *name;
>  
>          object_property_set_int(OBJECT(&s->apu_cpu[i]), QEMU_PSCI_CONDUIT_SMC,
> 

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Qemu-devel] [PATCH v7 03/16] gdbstub: add multiprocess support to '?' packets
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 03/16] gdbstub: add multiprocess support to '?' packets Luc Michel
@ 2018-11-25 21:22   ` Philippe Mathieu-Daudé
  2018-12-06 12:27     ` Luc Michel
  0 siblings, 1 reply; 45+ messages in thread
From: Philippe Mathieu-Daudé @ 2018-11-25 21:22 UTC (permalink / raw)
  To: Luc Michel, qemu-devel
  Cc: Peter Maydell, Eduardo Habkost, alistair, mark.burton,
	Philippe Mathieu-Daudé, saipava, edgari, qemu-arm

Hi Luc,

On 23/11/18 10:17, Luc Michel wrote:
> The gdb_get_cpu_pid() function does the PID lookup for the given CPU. It
> checks if the CPU is a direct child of a CPU cluster. If it is, the
> returned PID is the cluster ID plus one (cluster IDs start at 0, GDB
> PIDs at 1). When the CPU is not a child of such a container, the PID of
> the first process is returned.
> 
> The gdb_fmt_thread_id() function generates the string to be used to identify
> a given thread, in a response packet for the peer. This function
> supports generating thread IDs when multiprocess mode is enabled (in the
> form `p<pid>.<tid>').
> 
> Use them in the reply to a '?' request.
> 
> Signed-off-by: Luc Michel <luc.michel@greensocs.com>
> Acked-by: Alistair Francis <alistair.francis@wdc.com>
> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
> ---
>  gdbstub.c | 60 +++++++++++++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 58 insertions(+), 2 deletions(-)
> 
> diff --git a/gdbstub.c b/gdbstub.c
> index 26f5a7449a..4fbc05dfe3 100644
> --- a/gdbstub.c
> +++ b/gdbstub.c
> @@ -638,10 +638,52 @@ static int memtox(char *buf, const char *mem, int len)
>          }
>      }
>      return p - buf;
>  }
>  
> +static uint32_t gdb_get_cpu_pid(const GDBState *s, CPUState *cpu)
> +{
> +#ifndef CONFIG_USER_ONLY
> +    gchar *path, *name;

Setting ...

       gchar *path, *name = NULL;

> +    Object *obj;
> +    CPUClusterState *cluster;
> +    uint32_t ret;
> +
> +    path = object_get_canonical_path(OBJECT(cpu));
> +    name = object_get_canonical_path_component(OBJECT(cpu));

... we might move this line ...

> +
> +    if (path == NULL) {
> +        ret = s->processes[0].pid;
> +        goto out;
> +    }

... here:

       name = object_get_canonical_path_component(OBJECT(cpu));

> +
> +    /*
> +     * Retrieve the CPU parent path by removing the last '/' and the CPU name
> +     * from the CPU canonical path. */
> +    path[strlen(path) - strlen(name) - 1] = '\0';

Can we get there with path != NULL && name == NULL?

> +
> +    obj = object_resolve_path_type(path, TYPE_CPU_CLUSTER, NULL);
> +
> +    if (obj == NULL) {
> +        ret = s->processes[0].pid;
> +        goto out;
> +    }
> +
> +    cluster = CPU_CLUSTER(obj);
> +    ret = cluster->cluster_id + 1;
> +
> +out:
> +    g_free(name);
> +    g_free(path);
> +
> +    return ret;
> +
> +#else

[*]

> +    return s->processes[0].pid;
> +#endif
> +}
> +
>  static const char *get_feature_xml(const char *p, const char **newp,
>                                     CPUClass *cc)
>  {
>      size_t len;
>      int i;
> @@ -907,10 +949,23 @@ static CPUState *find_cpu(uint32_t thread_id)
>      }
>  
>      return NULL;
>  }
>  
> +static char *gdb_fmt_thread_id(const GDBState *s, CPUState *cpu,
> +                           char *buf, size_t buf_size)
> +{
> +    if (s->multiprocess) {
> +        snprintf(buf, buf_size, "p%02x.%02x",
> +                 gdb_get_cpu_pid(s, cpu), cpu_gdb_index(cpu));
> +    } else {
> +        snprintf(buf, buf_size, "%02x", cpu_gdb_index(cpu));
> +    }
> +
> +    return buf;
> +}
> +
>  static int is_query_packet(const char *p, const char *query, char separator)
>  {
>      unsigned int query_len = strlen(query);
>  
>      return strncmp(p, query, query_len) == 0 &&
> @@ -1018,22 +1073,23 @@ static int gdb_handle_packet(GDBState *s, const char *line_buf)
>      const char *p;
>      uint32_t thread;
>      int ch, reg_size, type, res;
>      uint8_t mem_buf[MAX_PACKET_LENGTH];
>      char buf[sizeof(mem_buf) + 1 /* trailing NUL */];
> +    char thread_id[16];
>      uint8_t *registers;
>      target_ulong addr, len;
>  
>      trace_gdbstub_io_command(line_buf);
>  
>      p = line_buf;
>      ch = *p++;
>      switch(ch) {
>      case '?':
>          /* TODO: Make this return the correct value for user-mode.  */

Is this comment still relevant?

If so, wouldn't it be better placed in [*]?

> -        snprintf(buf, sizeof(buf), "T%02xthread:%02x;", GDB_SIGNAL_TRAP,
> -                 cpu_gdb_index(s->c_cpu));
> +        snprintf(buf, sizeof(buf), "T%02xthread:%s;", GDB_SIGNAL_TRAP,
> +                 gdb_fmt_thread_id(s, s->c_cpu, thread_id, sizeof(thread_id)));
>          put_packet(s, buf);
>          /* Remove all the breakpoints when this query is issued,
>           * because gdb is doing and initial connect and the state
>           * should be cleaned up.
>           */
> 

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Qemu-devel] [PATCH v7 01/16] hw/cpu: introduce CPU clusters
  2018-11-23 18:10   ` Eduardo Habkost
  2018-11-23 18:14     ` Peter Maydell
@ 2018-11-25 21:27     ` Philippe Mathieu-Daudé
  2018-11-26 13:27       ` Eduardo Habkost
  1 sibling, 1 reply; 45+ messages in thread
From: Philippe Mathieu-Daudé @ 2018-11-25 21:27 UTC (permalink / raw)
  To: Eduardo Habkost, Luc Michel
  Cc: Peter Maydell, alistair, mark.burton, qemu-devel,
	Philippe Mathieu-Daudé, saipava, edgari, qemu-arm

Hi Eduardo,

On 23/11/18 19:10, Eduardo Habkost wrote:
> Hi,
> 
> Sorry for not reviewing this series earlier.  I just stumbled
> upon this part of the code:
> 
> On Fri, Nov 23, 2018 at 10:17:14AM +0100, Luc Michel wrote:
>> This commit adds the cpu-cluster type. It aims at gathering CPUs from
>> the same cluster in a machine.
>>
>> For now it only has a `cluster-id` property.
>>
>> Signed-off-by: Luc Michel <luc.michel@greensocs.com>
>> Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
>> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
> [...]
>> +static void cpu_cluster_init(Object *obj)
>> +{
>> +    static uint32_t cluster_id_auto_increment;
>> +    CPUClusterState *cluster = CPU_CLUSTER(obj);
>> +
>> +    cluster->cluster_id = cluster_id_auto_increment++;
> 
> I see that you implemented this after a suggestion from Philippe,
> but I'm worried about this kind of side-effect on object/device
> code.  I'm afraid this will bite us back in the future.  We were
> bitten by problems caused by automatic cpu_index assignment on
> CPU instance_init, and we took a while to fix that.

Oops, thanks for catching this. I wasn't aware of the past cpu_index issue.

> 
> If you really want to do this and assign cluster_id
> automatically, please do it on realize, where it won't have
> unexpected side-effects after a simple `qom-list-properties` QMP
> command.

This looks clean enough to me.
Do you prefer we don't use static auto_increment at all?

> 
> I would also add a huge warning above the cluster_id field
> declaration, mentioning that the field is not supposed to be used
> for anything except debugging.  I think there's a large risk of
> people trying to reuse the field incorrectly, just like cpu_index
> was reused for multiple (conflicting) purposes in the past.
> 
> 
>> +}
>> +
>> +static Property cpu_cluster_properties[] = {
>> +    DEFINE_PROP_UINT32("cluster-id", CPUClusterState, cluster_id, 0),
>> +    DEFINE_PROP_END_OF_LIST()
>> +};
> [...]
> 

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Qemu-devel] [PATCH v7 01/16] hw/cpu: introduce CPU clusters
  2018-11-25 21:27     ` Philippe Mathieu-Daudé
@ 2018-11-26 13:27       ` Eduardo Habkost
  2018-11-30 16:52         ` Peter Maydell
  0 siblings, 1 reply; 45+ messages in thread
From: Eduardo Habkost @ 2018-11-26 13:27 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé
  Cc: Luc Michel, Peter Maydell, alistair, mark.burton, qemu-devel,
	Philippe Mathieu-Daudé, saipava, edgari, qemu-arm

On Sun, Nov 25, 2018 at 10:27:04PM +0100, Philippe Mathieu-Daudé wrote:
> Hi Eduardo,
> 
> On 23/11/18 19:10, Eduardo Habkost wrote:
> > Hi,
> > 
> > Sorry for not reviewing this series earlier.  I just stumbled
> > upon this part of the code:
> > 
> > On Fri, Nov 23, 2018 at 10:17:14AM +0100, Luc Michel wrote:
> >> This commit adds the cpu-cluster type. It aims at gathering CPUs from
> >> the same cluster in a machine.
> >>
> >> For now it only has a `cluster-id` property.
> >>
> >> Signed-off-by: Luc Michel <luc.michel@greensocs.com>
> >> Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
> >> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> >> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> >> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
> > [...]
> >> +static void cpu_cluster_init(Object *obj)
> >> +{
> >> +    static uint32_t cluster_id_auto_increment;
> >> +    CPUClusterState *cluster = CPU_CLUSTER(obj);
> >> +
> >> +    cluster->cluster_id = cluster_id_auto_increment++;
> > 
> > I see that you implemented this after a suggestion from Philippe,
> > but I'm worried about this kind of side-effect on object/device
> > code.  I'm afraid this will bite us back in the future.  We were
> > bitten by problems caused by automatic cpu_index assignment on
> > CPU instance_init, and we took a while to fix that.
> 
> Oops, thanks for catching this. I'm not aware of the cpu_index past issue.
> 
> > 
> > If you really want to do this and assign cluster_id
> > automatically, please do it on realize, where it won't have
> > unexpected side-effects after a simple `qom-list-properties` QMP
> > command.
> 
> This looks clean enough to me.
> Do you prefer we don't use static auto_increment at all?

I would prefer to avoid the static variable if possible, but I
won't reject it if the alternatives make the API too complex to
use.

In either case, I'd like to ensure the caveats of the cluster_id
field are clearly documented: no guarantees about ordering or
predictability, making it not appropriate for external
interfaces.
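
For illustration only (not part of this series; the exact wording and struct
layout are assumptions), such a documented declaration in cluster.h could look
something like:

    typedef struct CPUClusterState {
        /*< private >*/
        DeviceState parent_obj;

        /*< public >*/
        /*
         * WARNING: cluster_id is only meant to distinguish clusters for
         * debugging purposes (the gdbstub derives a GDB PID from it).
         * There are no guarantees about its ordering or predictability,
         * so it must not be exposed through external interfaces or reused
         * for other purposes, as happened with cpu_index in the past.
         */
        uint32_t cluster_id;
    } CPUClusterState;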


> 
> > 
> > I would also add a huge warning above the cluster_id field
> > declaration, mentioning that the field is not supposed to be used
> > for anything except debugging.  I think there's a large risk of
> > people trying to reuse the field incorrectly, just like cpu_index
> > was reused for multiple (conflicting) purposes in the past.
> > 
> > 
> >> +}
> >> +
> >> +static Property cpu_cluster_properties[] = {
> >> +    DEFINE_PROP_UINT32("cluster-id", CPUClusterState, cluster_id, 0),
> >> +    DEFINE_PROP_END_OF_LIST()
> >> +};
> > [...]
> > 

-- 
Eduardo

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Qemu-devel] [PATCH v7 07/16] gdbstub: add multiprocess support to (f|s)ThreadInfo and ThreadExtraInfo
  2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 07/16] gdbstub: add multiprocess support to (f|s)ThreadInfo and ThreadExtraInfo Luc Michel
@ 2018-11-30 10:35   ` Edgar E. Iglesias
  0 siblings, 0 replies; 45+ messages in thread
From: Edgar E. Iglesias @ 2018-11-30 10:35 UTC (permalink / raw)
  To: Luc Michel
  Cc: qemu-devel, qemu-arm, Peter Maydell, saipava, edgari, alistair,
	Philippe Mathieu-Daudé, mark.burton, Eduardo Habkost

On Fri, Nov 23, 2018 at 10:17:20AM +0100, Luc Michel wrote:
> Change the thread info related packets handling to support multiprocess
> extension.
> 
> Add the CPU's class name to the extra info to help differentiate
> them in multiprocess mode.

Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>


> 
> Signed-off-by: Luc Michel <luc.michel@greensocs.com>
> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
>  gdbstub.c | 37 +++++++++++++++++++++++++++----------
>  1 file changed, 27 insertions(+), 10 deletions(-)
> 
> diff --git a/gdbstub.c b/gdbstub.c
> index 728efdabee..12d4e33c40 100644
> --- a/gdbstub.c
> +++ b/gdbstub.c
> @@ -1262,11 +1262,10 @@ out:
>  static int gdb_handle_packet(GDBState *s, const char *line_buf)
>  {
>      CPUState *cpu;
>      CPUClass *cc;
>      const char *p;
> -    uint32_t thread;
>      uint32_t pid, tid;
>      int ch, reg_size, type, res;
>      uint8_t mem_buf[MAX_PACKET_LENGTH];
>      char buf[sizeof(mem_buf) + 1 /* trailing NUL */];
>      char thread_id[16];
> @@ -1558,30 +1557,48 @@ static int gdb_handle_packet(GDBState *s, const char *line_buf)
>              snprintf(buf, sizeof(buf), "QC%s",
>                       gdb_fmt_thread_id(s, cpu, thread_id, sizeof(thread_id)));
>              put_packet(s, buf);
>              break;
>          } else if (strcmp(p,"fThreadInfo") == 0) {
> -            s->query_cpu = first_cpu;
> +            s->query_cpu = gdb_first_attached_cpu(s);
>              goto report_cpuinfo;
>          } else if (strcmp(p,"sThreadInfo") == 0) {
>          report_cpuinfo:
>              if (s->query_cpu) {
> -                snprintf(buf, sizeof(buf), "m%x", cpu_gdb_index(s->query_cpu));
> +                snprintf(buf, sizeof(buf), "m%s",
> +                         gdb_fmt_thread_id(s, s->query_cpu,
> +                                       thread_id, sizeof(thread_id)));
>                  put_packet(s, buf);
> -                s->query_cpu = CPU_NEXT(s->query_cpu);
> +                s->query_cpu = gdb_next_attached_cpu(s, s->query_cpu);
>              } else
>                  put_packet(s, "l");
>              break;
>          } else if (strncmp(p,"ThreadExtraInfo,", 16) == 0) {
> -            thread = strtoull(p+16, (char **)&p, 16);
> -            cpu = find_cpu(thread);
> +            if (read_thread_id(p + 16, &p, &pid, &tid) == GDB_READ_THREAD_ERR) {
> +                put_packet(s, "E22");
> +                break;
> +            }
> +            cpu = gdb_get_cpu(s, pid, tid);
>              if (cpu != NULL) {
>                  cpu_synchronize_state(cpu);
> -                /* memtohex() doubles the required space */
> -                len = snprintf((char *)mem_buf, sizeof(buf) / 2,
> -                               "CPU#%d [%s]", cpu->cpu_index,
> -                               cpu->halted ? "halted " : "running");
> +
> +                if (s->multiprocess && (s->process_num > 1)) {
> +                    /* Print the CPU model and name in multiprocess mode */
> +                    ObjectClass *oc = object_get_class(OBJECT(cpu));
> +                    const char *cpu_model = object_class_get_name(oc);
> +                    char *cpu_name =
> +                        object_get_canonical_path_component(OBJECT(cpu));
> +                    len = snprintf((char *)mem_buf, sizeof(buf) / 2,
> +                                   "%s %s [%s]", cpu_model, cpu_name,
> +                                   cpu->halted ? "halted " : "running");
> +                    g_free(cpu_name);
> +                } else {
> +                    /* memtohex() doubles the required space */
> +                    len = snprintf((char *)mem_buf, sizeof(buf) / 2,
> +                                   "CPU#%d [%s]", cpu->cpu_index,
> +                                   cpu->halted ? "halted " : "running");
> +                }
>                  trace_gdbstub_op_extra_info((char *)mem_buf);
>                  memtohex(buf, mem_buf, len);
>                  put_packet(s, buf);
>              }
>              break;
> -- 
> 2.19.1
> 
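
For illustration only (the CPU type and name below are made up; the format
comes from the hunk above), in multiprocess mode the extra info for a thread
would read something like

    cortex-r5f-arm-cpu rpu-cpu[0] [running]

instead of the single-process form

    CPU#4 [running]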

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Qemu-devel] [PATCH v7 01/16] hw/cpu: introduce CPU clusters
  2018-11-26 13:27       ` Eduardo Habkost
@ 2018-11-30 16:52         ` Peter Maydell
  2018-11-30 18:14           ` Eduardo Habkost
  2018-12-03 11:20           ` Luc Michel
  0 siblings, 2 replies; 45+ messages in thread
From: Peter Maydell @ 2018-11-30 16:52 UTC (permalink / raw)
  To: Eduardo Habkost
  Cc: Philippe Mathieu-Daudé, Luc Michel, Alistair Francis,
	Mark Burton, QEMU Developers, Philippe Mathieu-Daudé,
	Sai Pavan Boddu, Edgar Iglesias, qemu-arm

On Mon, 26 Nov 2018 at 13:27, Eduardo Habkost <ehabkost@redhat.com> wrote:
>
> On Sun, Nov 25, 2018 at 10:27:04PM +0100, Philippe Mathieu-Daudé wrote:
> > Hi Eduardo,
> >
> > On 23/11/18 19:10, Eduardo Habkost wrote:
> > > If you really want to do this and assign cluster_id
> > > automatically, please do it on realize, where it won't have
> > > unexpected side-effects after a simple `qom-list-properties` QMP
> > > command.
> >
> > This looks clean enough to me.
> > Do you prefer we don't use static auto_increment at all?
>
> I would prefer to avoid the static variable if possible, but I
> won't reject it if the alternatives make the API too complex to
> use.

I guess the alternative would be to require the cluster ID
to be a QOM property on the cluster object. Usage would then
be something like
    object_initialize_child(OBJECT(s), "rpu-cluster", &s->rpu_cluster,
                            sizeof(s->rpu_cluster), TYPE_CPU_CLUSTER,
                            &error_abort, NULL);
    object_property_set_int(OBJECT(&s->rpu_cluster), 1, "cluster-id",
                            &error_abort);
    qdev_init_nofail(DEVICE(&s->rpu_cluster));

(ie one extra line setting the cluster ID explicitly).

SoC objects can probably reasonably assume that they are
the only SoC in the system, ie that they can just assign
cluster IDs themselves. If that turns out not to be true
we can make the SoC take a property to specify a cluster ID
base or something.

I guess if we've been bitten by cpu-ID auto-assignment
in the past, starting out with "user picks the cluster ID"
would be safer, and it's not too much pain at the callsite.
Plus it avoids the gdbstub changing its mind about which
order the cluster "processes" appear in if we refactor the
board code.

> In either case, I'd like to ensure the caveats of the cluster_id
> field are clearly documented: no guarantees about ordering or
> predictability, making it not appropriate for external
> interfaces.

The gdb stub is sort of an external interface, of course...

Luc: what are the requirements on boards using CPU cluster
objects? I assume these are both OK:
 * does not use cluster objects at all
   (the gdbstub puts all the CPUs in one process?)
 * all CPUs created are in some CPU cluster
   (the gdbstub uses one process per CPU cluster)
but what about
 * some CPUs are created in a CPU cluster, but some
   are "loose", not in any cluster at all
?
Is that just invalid, or do the "loose" CPUs end up in
a default cluster (gdbstub process), or do they get
one cluster each?

If it's invalid (which doesn't seem like a bad idea),
is there an assert somewhere so that getting it wrong
results in the board failing to start (ie a "make check"
failure) ?

thanks
-- PMM

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Qemu-devel] [PATCH v7 01/16] hw/cpu: introduce CPU clusters
  2018-11-30 16:52         ` Peter Maydell
@ 2018-11-30 18:14           ` Eduardo Habkost
  2018-12-03 11:20           ` Luc Michel
  1 sibling, 0 replies; 45+ messages in thread
From: Eduardo Habkost @ 2018-11-30 18:14 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Philippe Mathieu-Daudé, Luc Michel, Alistair Francis,
	Mark Burton, QEMU Developers, Philippe Mathieu-Daudé,
	Sai Pavan Boddu, Edgar Iglesias, qemu-arm

On Fri, Nov 30, 2018 at 04:52:31PM +0000, Peter Maydell wrote:
> On Mon, 26 Nov 2018 at 13:27, Eduardo Habkost <ehabkost@redhat.com> wrote:
> >
> > On Sun, Nov 25, 2018 at 10:27:04PM +0100, Philippe Mathieu-Daudé wrote:
> > > Hi Eduardo,
> > >
> > > On 23/11/18 19:10, Eduardo Habkost wrote:
> > > > If you really want to do this and assign cluster_id
> > > > automatically, please do it on realize, where it won't have
> > > > unexpected side-effects after a simple `qom-list-properties` QMP
> > > > command.
> > >
> > > This looks clean enough to me.
> > > Do you prefer we don't use static auto_increment at all?
> >
> > I would prefer to avoid the static variable if possible, but I
> > won't reject it if the alternatives make the API too complex to
> > use.
> 
> I guess the alternative would be to require the cluster ID
> to be a QOM property on the cluster object. Usage would then
> be something like
>     object_initialize_child(OBJECT(s), "rpu-cluster", &s->rpu_cluster,
>                             sizeof(s->rpu_cluster), TYPE_CPU_CLUSTER,
>                             &error_abort, NULL);
>     object_property_set_int(OBJECT(s), 1, "cluster-id", &error_abort);
>     qdev_init_nofail(DEVICE(&s->rpu_cluster));
> 
> (ie one extra line setting the cluster ID explicitly).
> 
> SoC objects can probably reasonably assume that they are
> the only SoC in the system, ie that they can just assign
> cluster IDs themselves). If that turns out not to be true
> we can make the SoC take a property to specify a cluster ID
> base or something.
> 
> I guess if we've been bitten by cpu-ID auto-assignment
> in the past, starting out with "user picks the cluster ID"
> would be safer, and it's not too much pain at the callsite.
> Plus it avoids the gdbstub changing its mind about which
> order the cluster "processes" appear in if we refactor the
> board code.

Agreed.

> 
> > In either case, I'd like to ensure the caveats of the cluster_id
> > field are clearly documented: no guarantees about ordering or
> > predictability, making it not appropriate for external
> > interfaces.
> 
> The gdb stub is sort of an external interface, of course...

Right, and it works if you treat it as just an opaque identifier.
It only stops working if you need it to be stable and/or
predictable.

Letting the board set "cluster-id" seems to be safer, and not too
complex.

> 
> Luc: what are the requirements on boards using CPU cluster
> objects? I assume these are both OK:
>  * does not use cluster objects at all
>    (the gdbstub puts all the CPUs in one process?)
>  * all CPUs created are in some CPU cluster
>    (the gdbstub uses one process per CPU cluster)
> but what about
>  * some CPUs are created in a CPU cluster, but some
>    are "loose", not in any cluster at all
> ?
> Is that just invalid, or do the "loose" CPUs end up in
> a default cluster (gdbstub process), or do they get
> one cluster each?
> 
> If it's invalid (which doesn't seem like a bad idea),
> is there an assert somewhere so that getting it wrong
> results in the board failing to start (ie a "make check"
> failure) ?

I get the impression that "cluster" has a very clear meaning
and purpose in this discussion, but this is not being documented
anywhere.  Can we clarify at cpu/cluster.c what exactly a cluster
is, and when it's appropriate (or necessary) to group the CPUs
into clusters?

Is it really possible to describe the meaning of "cluster" in a way
that doesn't refer to GDB at all?  Is this all about memory
address spaces?  If this is all about memory address spaces,
can't the GDB code detect that automatically by looking at the
address space objects?

-- 
Eduardo

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Qemu-devel] [PATCH v7 01/16] hw/cpu: introduce CPU clusters
  2018-11-30 16:52         ` Peter Maydell
  2018-11-30 18:14           ` Eduardo Habkost
@ 2018-12-03 11:20           ` Luc Michel
  2018-12-03 11:23             ` Peter Maydell
  1 sibling, 1 reply; 45+ messages in thread
From: Luc Michel @ 2018-12-03 11:20 UTC (permalink / raw)
  To: Peter Maydell, Eduardo Habkost, Philippe Mathieu-Daudé
  Cc: Alistair Francis, Mark Burton, QEMU Developers,
	Philippe Mathieu-Daudé, Sai Pavan Boddu, Edgar Iglesias,
	qemu-arm

On 11/30/18 5:52 PM, Peter Maydell wrote:
> On Mon, 26 Nov 2018 at 13:27, Eduardo Habkost <ehabkost@redhat.com> wrote:
>>
>> On Sun, Nov 25, 2018 at 10:27:04PM +0100, Philippe Mathieu-Daudé wrote:
>>> Hi Eduardo,
>>>
>>> On 23/11/18 19:10, Eduardo Habkost wrote:
>>>> If you really want to do this and assign cluster_id
>>>> automatically, please do it on realize, where it won't have
>>>> unexpected side-effects after a simple `qom-list-properties` QMP
>>>> command.
>>>
>>> This looks clean enough to me.
>>> Do you prefer we don't use static auto_increment at all?
>>
>> I would prefer to avoid the static variable if possible, but I
>> won't reject it if the alternatives make the API too complex to
>> use.
> 
> I guess the alternative would be to require the cluster ID
> to be a QOM property on the cluster object. Usage would then
> be something like
>     object_initialize_child(OBJECT(s), "rpu-cluster", &s->rpu_cluster,
>                             sizeof(s->rpu_cluster), TYPE_CPU_CLUSTER,
>                             &error_abort, NULL);
>     object_property_set_int(OBJECT(s), 1, "cluster-id", &error_abort);
>     qdev_init_nofail(DEVICE(&s->rpu_cluster));
> 
> (ie one extra line setting the cluster ID explicitly).
I had such a line in v3, which was removed following Philippe's
auto-assign suggestion. Discussions seem to be converging on removing the
auto-assign mechanism though. I can roll back to manual assignment for my
next re-roll.
Philippe: is that OK with you?

> 
> SoC objects can probably reasonably assume that they are
> the only SoC in the system, ie that they can just assign
> cluster IDs themselves). If that turns out not to be true
> we can make the SoC take a property to specify a cluster ID
> base or something.
> 
> I guess if we've been bitten by cpu-ID auto-assignment
> in the past, starting out with "user picks the cluster ID"
> would be safer, and it's not too much pain at the callsite.
> Plus it avoids the gdbstub changing its mind about which
> order the cluster "processes" appear in if we refactor the
> board code.
> 
>> In either case, I'd like to ensure the caveats of the cluster_id
>> field are clearly documented: no guarantees about ordering or
>> predictability, making it not appropriate for external
>> interfaces.
> 
> The gdb stub is sort of an external interface, of course...
> 
> Luc: what are the requirements on boards using CPU cluster
> objects? I assume these are both OK:
>  * does not use cluster objects at all
>    (the gdbstub puts all the CPUs in one process?)
Yes, when no clusters are found, a process with PID 1 is created and
used for all CPUs.
>  * all CPUs created are in some CPU cluster
>    (the gdbstub uses one process per CPU cluster)
> but what about
>  * some CPUs are created in a CPU cluster, but some
>    are "loose", not in any cluster at all> ?
> Is that just invalid, or do the "loose" CPUs end up in
> a default cluster (gdbstub process), or do they get
> one cluster each?
Currently this is valid and the "loose" CPUs end up in the first
available process (which will be PID 1 if we are in your first case,
i.e. no clusters in the machine).

> 
> If it's invalid (which doesn't seem like a bad idea),
> is there an assert somewhere so that getting it wrong
> results in the board failing to start (ie a "make check"
> failure) ?
I could add such a check if you think it's a good idea, i.e. assert that
either "no clusters exist" or "all CPUs are in a cluster".

But maybe in the end this check should not be done in the gdb stub? For
the same reasons, maybe you want to ensure that all CPUs have a well-known
address space, or whatever other information the cluster will provide.

Thanks.

-- 
Luc

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Qemu-devel] [PATCH v7 01/16] hw/cpu: introduce CPU clusters
  2018-12-03 11:20           ` Luc Michel
@ 2018-12-03 11:23             ` Peter Maydell
  2018-12-03 14:09               ` Luc Michel
  0 siblings, 1 reply; 45+ messages in thread
From: Peter Maydell @ 2018-12-03 11:23 UTC (permalink / raw)
  To: Luc Michel
  Cc: Eduardo Habkost, Philippe Mathieu-Daudé, Alistair Francis,
	Mark Burton, QEMU Developers, Philippe Mathieu-Daudé,
	Sai Pavan Boddu, Edgar Iglesias, qemu-arm

On Mon, 3 Dec 2018 at 11:21, Luc Michel <luc.michel@greensocs.com> wrote:
>
> On 11/30/18 5:52 PM, Peter Maydell wrote:
> > Luc: what are the requirements on boards using CPU cluster
> > objects? I assume these are both OK:
> >  * does not use cluster objects at all
> >    (the gdbstub puts all the CPUs in one process?)
> Yes, when no clusters are found, a process with PID 1 is created and
> used for all CPUs.
> >  * all CPUs created are in some CPU cluster
> >    (the gdbstub uses one process per CPU cluster)
> > but what about
> >  * some CPUs are created in a CPU cluster, but some
> >    are "loose", not in any cluster at all> ?
> > Is that just invalid, or do the "loose" CPUs end up in
> > a default cluster (gdbstub process), or do they get
> > one cluster each?
> Currently this is valid and the "loose" CPUs end up in the first
> available process (which will be PID 1 if we are in your first case,
> i.e. no clusters in the machine).

So if there are some defined clusters 1 and 2, and some
loose CPUs, the clusters get PID 1 and PID 2, and the
loose CPUs end up in PID 3? That seems reasonable.

thanks
-- PMM

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Qemu-devel] [PATCH v7 01/16] hw/cpu: introduce CPU clusters
  2018-12-03 11:23             ` Peter Maydell
@ 2018-12-03 14:09               ` Luc Michel
  2018-12-04 18:06                 ` Eduardo Habkost
  0 siblings, 1 reply; 45+ messages in thread
From: Luc Michel @ 2018-12-03 14:09 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Eduardo Habkost, Philippe Mathieu-Daudé, Alistair Francis,
	Mark Burton, QEMU Developers, Philippe Mathieu-Daudé,
	Sai Pavan Boddu, Edgar Iglesias, qemu-arm



On 12/3/18 12:23 PM, Peter Maydell wrote:
> On Mon, 3 Dec 2018 at 11:21, Luc Michel <luc.michel@greensocs.com> wrote:
>>
>> On 11/30/18 5:52 PM, Peter Maydell wrote:
>>> Luc: what are the requirements on boards using CPU cluster
>>> objects? I assume these are both OK:
>>>  * does not use cluster objects at all
>>>    (the gdbstub puts all the CPUs in one process?)
>> Yes, when no clusters are found, a process with PID 1 is created and
>> used for all CPUs.
>>>  * all CPUs created are in some CPU cluster
>>>    (the gdbstub uses one process per CPU cluster)
>>> but what about
>>>  * some CPUs are created in a CPU cluster, but some
>>>    are "loose", not in any cluster at all> ?
>>> Is that just invalid, or do the "loose" CPUs end up in
>>> a default cluster (gdbstub process), or do they get
>>> one cluster each?
>> Currently this is valid and the "loose" CPUs end up in the first
>> available process (which will be PID 1 if we are in your first case,
>> i.e. no clusters in the machine).
> 
> So if there are some defined clusters 1 and 2, and some
> loose CPUs, the clusters get PID 1 and PID 2, and the
> loose CPUs end up in PID 3? That seems reasonable.
No, sorry, I was not clear: the loose CPUs would end up in PID 1 in that case.

The current behaviour is as follows:
During gdb stub initialization:
  - A process is created per cluster.
  - If no clusters are found, a unique process is created with PID 1.

When trying to find a CPU's PID:
  - If it is attached to a cluster, return the associated process,
  - If it is loose, return the first available process.
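
Roughly, as a sketch of that lookup (not the actual code; gdb_cpu_cluster()
is a made-up helper here):

    static uint32_t gdb_pid_for_cpu(const GDBState *s, CPUState *cpu)
    {
        CPUClusterState *cluster = gdb_cpu_cluster(cpu); /* NULL for "loose" CPUs */

        if (cluster) {
            return cluster->cluster_id + 1;  /* one gdbstub process per cluster */
        }
        return s->processes[0].pid;          /* fall back to the first process */
    }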

-- 
Luc

> 
> thanks
> -- PMM
> 

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Qemu-devel] [PATCH v7 01/16] hw/cpu: introduce CPU clusters
  2018-12-03 14:09               ` Luc Michel
@ 2018-12-04 18:06                 ` Eduardo Habkost
  2018-12-04 18:24                   ` Peter Maydell
  2018-12-05 13:47                   ` Luc Michel
  0 siblings, 2 replies; 45+ messages in thread
From: Eduardo Habkost @ 2018-12-04 18:06 UTC (permalink / raw)
  To: Luc Michel
  Cc: Peter Maydell, Philippe Mathieu-Daudé, Alistair Francis,
	Mark Burton, QEMU Developers, Philippe Mathieu-Daudé,
	Sai Pavan Boddu, Edgar Iglesias, qemu-arm

On Mon, Dec 03, 2018 at 03:09:14PM +0100, Luc Michel wrote:
> 
> 
> On 12/3/18 12:23 PM, Peter Maydell wrote:
> > On Mon, 3 Dec 2018 at 11:21, Luc Michel <luc.michel@greensocs.com> wrote:
> >>
> >> On 11/30/18 5:52 PM, Peter Maydell wrote:
> >>> Luc: what are the requirements on boards using CPU cluster
> >>> objects? I assume these are both OK:
> >>>  * does not use cluster objects at all
> >>>    (the gdbstub puts all the CPUs in one process?)
> >> Yes, when no clusters are found, a process with PID 1 is created and
> >> used for all CPUs.
> >>>  * all CPUs created are in some CPU cluster
> >>>    (the gdbstub uses one process per CPU cluster)
> >>> but what about
> >>>  * some CPUs are created in a CPU cluster, but some
> >>>    are "loose", not in any cluster at all> ?
> >>> Is that just invalid, or do the "loose" CPUs end up in
> >>> a default cluster (gdbstub process), or do they get
> >>> one cluster each?
> >> Currently this is valid and the "loose" CPUs end up in the first
> >> available process (which will be PID 1 if we are in your first case,
> >> i.e. no clusters in the machine).
> > 
> > So if there are some defined clusters 1 and 2, and some
> > loose CPUs, the clusters get PID 1 and PID 2, and the
> > loose CPUs end up in PID 3? That seems reasonable.
> No sorry I was not clear, the loose CPUs would end up on PID 1 in that case.
> 
> The current behaviour is as follows:
> During gdb stub initialization:
>   - A process is created per cluster.
>   - If no cluster are found, an unique process is created with PID 1
> 
> When trying to find a CPU PID:
>   - If it is attached to a cluster, return the associated process,
>   - If it is loose, return the first available process.

The behavior described by Peter would be more reasonable to me.
Otherwise what was just an accident (getting the CPU assigned to
the first process) will become expected behavior of the API, and
it will be hard to change it later.

In either case, I'm still missing a clear description of what a
cluster is supposed to represent, exactly (see my previous reply
on this thread).

-- 
Eduardo

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Qemu-devel] [PATCH v7 01/16] hw/cpu: introduce CPU clusters
  2018-12-04 18:06                 ` Eduardo Habkost
@ 2018-12-04 18:24                   ` Peter Maydell
  2018-12-04 19:05                     ` Eduardo Habkost
  2018-12-05 13:44                     ` Luc Michel
  2018-12-05 13:47                   ` Luc Michel
  1 sibling, 2 replies; 45+ messages in thread
From: Peter Maydell @ 2018-12-04 18:24 UTC (permalink / raw)
  To: Eduardo Habkost
  Cc: Luc Michel, Philippe Mathieu-Daudé, Alistair Francis,
	Mark Burton, QEMU Developers, Philippe Mathieu-Daudé,
	Sai Pavan Boddu, Edgar Iglesias, qemu-arm

On Tue, 4 Dec 2018 at 18:06, Eduardo Habkost <ehabkost@redhat.com> wrote:
> In either case, I'm still missing a clear description of what a
> cluster is supposed to represent, exactly (see my previous reply
> on this thread).

Here's my attempt:

A cluster is a group of CPUs which are all identical and have
the same view of the rest of the system.
If CPUs are not identical (for example, Cortex-A53 and
Cortex-A57 CPUs in an Arm big.LITTLE system) they should be
in different clusters.
If the CPUs do not have the same view of memory (for example
the main CPU and a management controller processor) they should
be in different clusters.

I agree that this is slightly confusing, because the concept
is on the boundary between something that's real in hardware
(eg in a big.LITTLE setup the CPUs are in separate hardware
clusters, and of course a BMC processor and the main CPU
are definitely different things) and something that we're
defining for its effects on the GDB UI and so we can make
sure we don't share TCG translated code where we shouldn't.
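
As an illustration only (the SoC fields and names below are made up, and this
assumes the explicit cluster-id assignment discussed above), a big.LITTLE
board could create its two clusters like this:

    object_initialize_child(OBJECT(s), "little-cluster", &s->little_cluster,
                            sizeof(s->little_cluster), TYPE_CPU_CLUSTER,
                            &error_abort, NULL);
    object_property_set_int(OBJECT(&s->little_cluster), 0, "cluster-id",
                            &error_abort);
    qdev_init_nofail(DEVICE(&s->little_cluster));  /* Cortex-A53 CPUs go here */

    object_initialize_child(OBJECT(s), "big-cluster", &s->big_cluster,
                            sizeof(s->big_cluster), TYPE_CPU_CLUSTER,
                            &error_abort, NULL);
    object_property_set_int(OBJECT(&s->big_cluster), 1, "cluster-id",
                            &error_abort);
    qdev_init_nofail(DEVICE(&s->big_cluster));     /* Cortex-A57 CPUs go here */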

thanks
-- PMM

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Qemu-devel] [PATCH v7 01/16] hw/cpu: introduce CPU clusters
  2018-12-04 18:24                   ` Peter Maydell
@ 2018-12-04 19:05                     ` Eduardo Habkost
  2018-12-04 19:16                       ` Peter Maydell
  2018-12-05 13:44                     ` Luc Michel
  1 sibling, 1 reply; 45+ messages in thread
From: Eduardo Habkost @ 2018-12-04 19:05 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Luc Michel, Philippe Mathieu-Daudé, Alistair Francis,
	Mark Burton, QEMU Developers, Philippe Mathieu-Daudé,
	Sai Pavan Boddu, Edgar Iglesias, qemu-arm

On Tue, Dec 04, 2018 at 06:24:19PM +0000, Peter Maydell wrote:
> On Tue, 4 Dec 2018 at 18:06, Eduardo Habkost <ehabkost@redhat.com> wrote:
> > In either case, I'm still missing a clear description of what a
> > cluster is supposed to represent, exactly (see my previous reply
> > on this thread).
> 
> Here's my attempt:
> 
> A cluster is a group of CPUs which are all identical and have
> the same view of the rest of the system.
> If CPUs are not identical (for example, Cortex-A53 and
> Cortex-A57 CPUs in an Arm big.LITTLE system) they should be
> in different clusters.
> If the CPUs do not have the same view of memory (for example
> the main CPU and a management controller processor) they should
> be in different clusters.
> 
> I agree that this is slightly confusing, because the concept
> is on the boundary between something that's real in hardware
> (eg in a big.LITTLE setup the CPUs are in separate hardware
> clusters, and of coures a BMC processor and the main CPU
> are definitely different things) and something that we're
> defining for its effects on the GDB UI and so we can make
> sure we don't share TCG translated code where we shouldn't.
> 

Thanks!

With that definition in mind, why can't QEMU cluster CPUs
automatically by looking at CPU models and address space objects?

-- 
Eduardo

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Qemu-devel] [PATCH v7 01/16] hw/cpu: introduce CPU clusters
  2018-12-04 19:05                     ` Eduardo Habkost
@ 2018-12-04 19:16                       ` Peter Maydell
  2018-12-04 19:45                         ` Eduardo Habkost
  0 siblings, 1 reply; 45+ messages in thread
From: Peter Maydell @ 2018-12-04 19:16 UTC (permalink / raw)
  To: Eduardo Habkost
  Cc: Luc Michel, Philippe Mathieu-Daudé, Alistair Francis,
	Mark Burton, QEMU Developers, Philippe Mathieu-Daudé,
	Sai Pavan Boddu, Edgar Iglesias, qemu-arm

On Tue, 4 Dec 2018 at 19:05, Eduardo Habkost <ehabkost@redhat.com> wrote:
> On Tue, Dec 04, 2018 at 06:24:19PM +0000, Peter Maydell wrote:
> > A cluster is a group of CPUs which are all identical and have
> > the same view of the rest of the system.

> With that definition in mind, why can't QEMU cluster CPUs
> automatically by looking at CPU models and address space objects?

That sounds like it is in theory feasible and in practice
quite tricky. You would have to look not just at the CPU
model name but also introspect all its properties for
ones which change features of the CPU and are set differently
on different CPUs (and I don't think there's any way to
automatically tell which properties are ones which make
the CPU different for which-cluster purposes and which aren't).
And if we automatically checked whether address space objects
were the same it would rule out implementing devices with
per-cpu banked memory mapped registers by mapping different
things into the AS for each CPU (though that's a hypothetical
at the moment -- I've thought about implementing stuff that
way but we tend to implement that sort of thing by looking
at current_cpu->cpu_index at the moment).

thanks
-- PMM

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Qemu-devel] [PATCH v7 01/16] hw/cpu: introduce CPU clusters
  2018-12-04 19:16                       ` Peter Maydell
@ 2018-12-04 19:45                         ` Eduardo Habkost
  2018-12-05 14:02                           ` Luc Michel
  0 siblings, 1 reply; 45+ messages in thread
From: Eduardo Habkost @ 2018-12-04 19:45 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Luc Michel, Philippe Mathieu-Daudé, Alistair Francis,
	Mark Burton, QEMU Developers, Philippe Mathieu-Daudé,
	Sai Pavan Boddu, Edgar Iglesias, qemu-arm, Paolo Bonzini

On Tue, Dec 04, 2018 at 07:16:39PM +0000, Peter Maydell wrote:
> On Tue, 4 Dec 2018 at 19:05, Eduardo Habkost <ehabkost@redhat.com> wrote:
> > On Tue, Dec 04, 2018 at 06:24:19PM +0000, Peter Maydell wrote:
> > > A cluster is a group of CPUs which are all identical and have
> > > the same view of the rest of the system.
> 
> > With that definition in mind, why can't QEMU cluster CPUs
> > automatically by looking at CPU models and address space objects?
> 
> That sounds like it is in theory feasible and in practice
> quite tricky. You would have to look not just at the CPU
> model name but also introspect all its properties for
> ones which change features of the CPU and are set differently
> on different CPUs (and I don't think there's any way to
> automatically tell which properties are ones which make
> the CPU different for which-cluster purposes and which aren't).
> And if we automatically checked whether address space objects
> were the same it would rule out implementing devices with
> per-cpu banked memory mapped registers by mapping different
> things into the AS for each CPU (though that's a hypothetical
> at the moment -- I've thought about implementing stuff that
> way but we tend to implement that sort of thing by looking
> at current_cpu->cpu_index at the moment).

I see.

Can't we at least do something to make sure the cluster objects
make sense?  e.g. by ensuring at least that the QOM CPU type is the same,
and that cpu->address_space somehow points to
cpu->cluster->address_space?

CCing Paolo, so he can correct me if this doesn't make sense.

-- 
Eduardo

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Qemu-devel] [PATCH v7 01/16] hw/cpu: introduce CPU clusters
  2018-12-04 18:24                   ` Peter Maydell
  2018-12-04 19:05                     ` Eduardo Habkost
@ 2018-12-05 13:44                     ` Luc Michel
  1 sibling, 0 replies; 45+ messages in thread
From: Luc Michel @ 2018-12-05 13:44 UTC (permalink / raw)
  To: Peter Maydell, Eduardo Habkost
  Cc: Philippe Mathieu-Daudé, Alistair Francis, Mark Burton,
	QEMU Developers, Philippe Mathieu-Daudé, Sai Pavan Boddu,
	Edgar Iglesias, qemu-arm

On 12/4/18 7:24 PM, Peter Maydell wrote:
> On Tue, 4 Dec 2018 at 18:06, Eduardo Habkost <ehabkost@redhat.com> wrote:
>> In either case, I'm still missing a clear description of what a
>> cluster is supposed to represent, exactly (see my previous reply
>> on this thread).
> 
> Here's my attempt:
> 
> A cluster is a group of CPUs which are all identical and have
> the same view of the rest of the system.
> If CPUs are not identical (for example, Cortex-A53 and
> Cortex-A57 CPUs in an Arm big.LITTLE system) they should be
> in different clusters.
> If the CPUs do not have the same view of memory (for example
> the main CPU and a management controller processor) they should
> be in different clusters.
Thanks, I'll use that as a starting point for the documentation in cluster.h.

-- 
Luc

> 
> I agree that this is slightly confusing, because the concept
> is on the boundary between something that's real in hardware
> (eg in a big.LITTLE setup the CPUs are in separate hardware
> clusters, and of coures a BMC processor and the main CPU
> are definitely different things) and something that we're
> defining for its effects on the GDB UI and so we can make
> sure we don't share TCG translated code where we shouldn't.
> 
> thanks
> -- PMM
> 

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Qemu-devel] [PATCH v7 01/16] hw/cpu: introduce CPU clusters
  2018-12-04 18:06                 ` Eduardo Habkost
  2018-12-04 18:24                   ` Peter Maydell
@ 2018-12-05 13:47                   ` Luc Michel
  1 sibling, 0 replies; 45+ messages in thread
From: Luc Michel @ 2018-12-05 13:47 UTC (permalink / raw)
  To: Eduardo Habkost
  Cc: Peter Maydell, Philippe Mathieu-Daudé, Alistair Francis,
	Mark Burton, QEMU Developers, Philippe Mathieu-Daudé,
	Sai Pavan Boddu, Edgar Iglesias, qemu-arm

On 12/4/18 7:06 PM, Eduardo Habkost wrote:
> On Mon, Dec 03, 2018 at 03:09:14PM +0100, Luc Michel wrote:
>>
>>
>> On 12/3/18 12:23 PM, Peter Maydell wrote:
>>> On Mon, 3 Dec 2018 at 11:21, Luc Michel <luc.michel@greensocs.com> wrote:
>>>>
>>>> On 11/30/18 5:52 PM, Peter Maydell wrote:
>>>>> Luc: what are the requirements on boards using CPU cluster
>>>>> objects? I assume these are both OK:
>>>>>  * does not use cluster objects at all
>>>>>    (the gdbstub puts all the CPUs in one process?)
>>>> Yes, when no clusters are found, a process with PID 1 is created and
>>>> used for all CPUs.
>>>>>  * all CPUs created are in some CPU cluster
>>>>>    (the gdbstub uses one process per CPU cluster)
>>>>> but what about
>>>>>  * some CPUs are created in a CPU cluster, but some
>>>>>    are "loose", not in any cluster at all> ?
>>>>> Is that just invalid, or do the "loose" CPUs end up in
>>>>> a default cluster (gdbstub process), or do they get
>>>>> one cluster each?
>>>> Currently this is valid and the "loose" CPUs end up in the first
>>>> available process (which will be PID 1 if we are in your first case,
>>>> i.e. no clusters in the machine).
>>>
>>> So if there are some defined clusters 1 and 2, and some
>>> loose CPUs, the clusters get PID 1 and PID 2, and the
>>> loose CPUs end up in PID 3? That seems reasonable.
>> No sorry I was not clear, the loose CPUs would end up on PID 1 in that case.
>>
>> The current behaviour is as follows:
>> During gdb stub initialization:
>>   - A process is created per cluster.
>>   - If no cluster are found, an unique process is created with PID 1
>>
>> When trying to find a CPU PID:
>>   - If it is attached to a cluster, return the associated process,
>>   - If it is loose, return the first available process.
> 
> The behavior described by Peter would be more reasonable to me.
> Otherwise what was just an accident (getting the CPU assigned to
> the first process) will become expected behavior of the API, and
> it will be hard to change it later.
Makes sense, I'll change that to adopt the behaviour described by Peter.

> 
> In either case, I'm still missing a clear description of what a
> cluster is supposed to represent, exactly (see my previous reply
> on this thread).
I'll add a description in the next re-roll, using Peter's as a
starting point.

Thanks.

-- 
Luc

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Qemu-devel] [PATCH v7 01/16] hw/cpu: introduce CPU clusters
  2018-12-04 19:45                         ` Eduardo Habkost
@ 2018-12-05 14:02                           ` Luc Michel
  2018-12-05 16:47                             ` Eduardo Habkost
  0 siblings, 1 reply; 45+ messages in thread
From: Luc Michel @ 2018-12-05 14:02 UTC (permalink / raw)
  To: Eduardo Habkost, Peter Maydell
  Cc: Philippe Mathieu-Daudé, Alistair Francis, Mark Burton,
	QEMU Developers, Philippe Mathieu-Daudé, Sai Pavan Boddu,
	Edgar Iglesias, qemu-arm, Paolo Bonzini

On 12/4/18 8:45 PM, Eduardo Habkost wrote:
> On Tue, Dec 04, 2018 at 07:16:39PM +0000, Peter Maydell wrote:
>> On Tue, 4 Dec 2018 at 19:05, Eduardo Habkost <ehabkost@redhat.com> wrote:
>>> On Tue, Dec 04, 2018 at 06:24:19PM +0000, Peter Maydell wrote:
>>>> A cluster is a group of CPUs which are all identical and have
>>>> the same view of the rest of the system.
>>
>>> With that definition in mind, why can't QEMU cluster CPUs
>>> automatically by looking at CPU models and address space objects?
>>
>> That sounds like it is in theory feasible and in practice
>> quite tricky. You would have to look not just at the CPU
>> model name but also introspect all its properties for
>> ones which change features of the CPU and are set differently
>> on different CPUs (and I don't think there's any way to
>> automatically tell which properties are ones which make
>> the CPU different for which-cluster purposes and which aren't).
>> And if we automatically checked whether address space objects
>> were the same it would rule out implementing devices with
>> per-cpu banked memory mapped registers by mapping different
>> things into the AS for each CPU (though that's a hypothetical
>> at the moment -- I've thought about implementing stuff that
>> way but we tend to implement that sort of thing by looking
>> at current_cpu->cpu_index at the moment).
> 
> I see.
> 
> Can't we at least do something to make sure the cluster objects
> make sense?  e.g. by ensuring at least QOM CPU type is the same,
> and that cpu->address_space somehow points to
> cpu->cluster->address_space?
Where should such a check be placed? The cluster's realize function is not
a good fit, since children can still be added after device realization. A
good place would be when object_property_add_child() is called, but I'm not
aware of a way to hook into that with the QOM API...

-- 
Luc

> 
> CCing Paolo, so he can correct me if this doesn't make sense.
> 

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Qemu-devel] [PATCH v7 01/16] hw/cpu: introduce CPU clusters
  2018-12-05 14:02                           ` Luc Michel
@ 2018-12-05 16:47                             ` Eduardo Habkost
  0 siblings, 0 replies; 45+ messages in thread
From: Eduardo Habkost @ 2018-12-05 16:47 UTC (permalink / raw)
  To: Luc Michel
  Cc: Peter Maydell, Philippe Mathieu-Daudé, Alistair Francis,
	Mark Burton, QEMU Developers, Philippe Mathieu-Daudé,
	Sai Pavan Boddu, Edgar Iglesias, qemu-arm, Paolo Bonzini,
	Dr. David Alan Gilbert, Markus Armbruster

On Wed, Dec 05, 2018 at 03:02:45PM +0100, Luc Michel wrote:
> On 12/4/18 8:45 PM, Eduardo Habkost wrote:
> > On Tue, Dec 04, 2018 at 07:16:39PM +0000, Peter Maydell wrote:
> >> On Tue, 4 Dec 2018 at 19:05, Eduardo Habkost <ehabkost@redhat.com> wrote:
> >>> On Tue, Dec 04, 2018 at 06:24:19PM +0000, Peter Maydell wrote:
> >>>> A cluster is a group of CPUs which are all identical and have
> >>>> the same view of the rest of the system.
> >>
> >>> With that definition in mind, why can't QEMU cluster CPUs
> >>> automatically by looking at CPU models and address space objects?
> >>
> >> That sounds like it is in theory feasible and in practice
> >> quite tricky. You would have to look not just at the CPU
> >> model name but also introspect all its properties for
> >> ones which change features of the CPU and are set differently
> >> on different CPUs (and I don't think there's any way to
> >> automatically tell which properties are ones which make
> >> the CPU different for which-cluster purposes and which aren't).
> >> And if we automatically checked whether address space objects
> >> were the same it would rule out implementing devices with
> >> per-cpu banked memory mapped registers by mapping different
> >> things into the AS for each CPU (though that's a hypothetical
> >> at the moment -- I've thought about implementing stuff that
> >> way but we tend to implement that sort of thing by looking
> >> at current_cpu->cpu_index at the moment).
> > 
> > I see.
> > 
> > Can't we at least do something to make sure the cluster objects
> > make sense?  e.g. by ensuring at least QOM CPU type is the same,
> > and that cpu->address_space somehow points to
> > cpu->cluster->address_space?
> Where such a check should be placed? Cluster realize function is not
> good since children can still be added after device realization. A good
> place would be when object_property_add_child() is called, but I'm not
> aware of a way of hooking into that with the QOM API...

Not sure.  Maybe it can be a post-machine_init hook.  Maybe this
can be done by a separate (avocado_qemu-based?) test script.

It looks like address space info is available through HMP ("info
mtree") and not QMP, so we'd need to fix that first.  CCing HMP
and QMP maintainers.

I don't think this should block the series, but it would be a
welcome addition.
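
For what it's worth, a rough sketch of such a post-machine-init hook (not part
of the series; header paths and helper names follow my understanding of the
current tree and may well be off):

    #include "qemu/osdep.h"
    #include "qemu/error-report.h"
    #include "qemu/notify.h"
    #include "qom/cpu.h"
    #include "hw/cpu/cluster.h"
    #include "sysemu/sysemu.h"

    /* Check that all CPU children of one cluster share the same QOM type. */
    static int check_cluster_cpu(Object *child, void *opaque)
    {
        const char **cpu_type = opaque;

        if (!object_dynamic_cast(child, TYPE_CPU)) {
            return 0;                                /* ignore non-CPU children */
        }
        if (!*cpu_type) {
            *cpu_type = object_get_typename(child);  /* remember first type seen */
        } else if (strcmp(*cpu_type, object_get_typename(child)) != 0) {
            error_report("CPUs of different types in the same cluster");
            exit(1);
        }
        return 0;
    }

    static int check_one_cluster(Object *obj, void *opaque)
    {
        const char *cpu_type = NULL;

        if (object_dynamic_cast(obj, TYPE_CPU_CLUSTER)) {
            object_child_foreach(obj, check_cluster_cpu, &cpu_type);
        }
        return 0;
    }

    static void cluster_sanity_check(Notifier *notifier, void *data)
    {
        /* walk the whole composition tree looking for clusters */
        object_child_foreach_recursive(object_get_root(), check_one_cluster, NULL);
    }

    static Notifier cluster_sanity_notifier = { .notify = cluster_sanity_check };

    /* registered somewhere during startup: */
    /* qemu_add_machine_init_done_notifier(&cluster_sanity_notifier); */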

-- 
Eduardo

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Qemu-devel] [PATCH v7 03/16] gdbstub: add multiprocess support to '?' packets
  2018-11-25 21:22   ` Philippe Mathieu-Daudé
@ 2018-12-06 12:27     ` Luc Michel
  0 siblings, 0 replies; 45+ messages in thread
From: Luc Michel @ 2018-12-06 12:27 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé, qemu-devel
  Cc: Peter Maydell, Eduardo Habkost, alistair, mark.burton,
	Philippe Mathieu-Daudé, saipava, edgari, qemu-arm

On 11/25/18 10:22 PM, Philippe Mathieu-Daudé wrote:
> Hi Luc,
> 
> On 23/11/18 10:17, Luc Michel wrote:
>> The gdb_get_cpu_pid() function does the PID lookup for the given CPU. It
>> checks if the CPU is a direct child of a CPU cluster. If it is, the
>> returned PID is the cluster ID plus one (cluster IDs start at 0, GDB
>> PIDs at 1). When the CPU is not a child of such a container, the PID of
>> the first process is returned.
>>
>> The gdb_fmt_thread_id() function generates the string to be used to identify
>> a given thread, in a response packet for the peer. This function
>> supports generating thread IDs when multiprocess mode is enabled (in the
>> form `p<pid>.<tid>').
>>
>> Use them in the reply to a '?' request.
>>
>> Signed-off-by: Luc Michel <luc.michel@greensocs.com>
>> Acked-by: Alistair Francis <alistair.francis@wdc.com>
>> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
>> ---
>>  gdbstub.c | 60 +++++++++++++++++++++++++++++++++++++++++++++++++++++--
>>  1 file changed, 58 insertions(+), 2 deletions(-)
>>
>> diff --git a/gdbstub.c b/gdbstub.c
>> index 26f5a7449a..4fbc05dfe3 100644
>> --- a/gdbstub.c
>> +++ b/gdbstub.c
>> @@ -638,10 +638,52 @@ static int memtox(char *buf, const char *mem, int len)
>>          }
>>      }
>>      return p - buf;
>>  }
>>  
>> +static uint32_t gdb_get_cpu_pid(const GDBState *s, CPUState *cpu)
>> +{
>> +#ifndef CONFIG_USER_ONLY
>> +    gchar *path, *name;
> 
> Setting ...
> 
>        gchar *path, *name = NULL;
> 
>> +    Object *obj;
>> +    CPUClusterState *cluster;
>> +    uint32_t ret;
>> +
>> +    path = object_get_canonical_path(OBJECT(cpu));
>> +    name = object_get_canonical_path_component(OBJECT(cpu));
> 
> ... we might move this line ...
> 
>> +
>> +    if (path == NULL) {
>> +        ret = s->processes[0].pid;
>> +        goto out;
>> +    }
> 
> ... here
OK I'll change that.
> 
>        name = object_get_canonical_path_component(OBJECT(cpu));
> 
>> +
>> +    /*
>> +     * Retrieve the CPU parent path by removing the last '/' and the CPU name
>> +     * from the CPU canonical path. */
>> +    path[strlen(path) - strlen(name) - 1] = '\0';
> 
> Can we get there with path != NULL && name == NULL?
I think the only way we could end up in this case is if cpu ==
object_get_root(), which does not make much sense. I can add an
assert(name != NULL) here to enforce that.
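
(i.e. something like this, right before the path is trimmed; a sketch, not the
final patch:)

    assert(name != NULL);  /* cpu is not the QOM root, so it has a path component */
    path[strlen(path) - strlen(name) - 1] = '\0';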

> 
>> +
>> +    obj = object_resolve_path_type(path, TYPE_CPU_CLUSTER, NULL);
>> +
>> +    if (obj == NULL) {
>> +        ret = s->processes[0].pid;
>> +        goto out;
>> +    }
>> +
>> +    cluster = CPU_CLUSTER(obj);
>> +    ret = cluster->cluster_id + 1;
>> +
>> +out:
>> +    g_free(name);
>> +    g_free(path);
>> +
>> +    return ret;
>> +
>> +#else
> 
> [*]
> 
>> +    return s->processes[0].pid;
>> +#endif
>> +}
>> +
>>  static const char *get_feature_xml(const char *p, const char **newp,
>>                                     CPUClass *cc)
>>  {
>>      size_t len;
>>      int i;
>> @@ -907,10 +949,23 @@ static CPUState *find_cpu(uint32_t thread_id)
>>      }
>>  
>>      return NULL;
>>  }
>>  
>> +static char *gdb_fmt_thread_id(const GDBState *s, CPUState *cpu,
>> +                           char *buf, size_t buf_size)
>> +{
>> +    if (s->multiprocess) {
>> +        snprintf(buf, buf_size, "p%02x.%02x",
>> +                 gdb_get_cpu_pid(s, cpu), cpu_gdb_index(cpu));
>> +    } else {
>> +        snprintf(buf, buf_size, "%02x", cpu_gdb_index(cpu));
>> +    }
>> +
>> +    return buf;
>> +}
>> +
>>  static int is_query_packet(const char *p, const char *query, char separator)
>>  {
>>      unsigned int query_len = strlen(query);
>>  
>>      return strncmp(p, query, query_len) == 0 &&
>> @@ -1018,22 +1073,23 @@ static int gdb_handle_packet(GDBState *s, const char *line_buf)
>>      const char *p;
>>      uint32_t thread;
>>      int ch, reg_size, type, res;
>>      uint8_t mem_buf[MAX_PACKET_LENGTH];
>>      char buf[sizeof(mem_buf) + 1 /* trailing NUL */];
>> +    char thread_id[16];
>>      uint8_t *registers;
>>      target_ulong addr, len;
>>  
>>      trace_gdbstub_io_command(line_buf);
>>  
>>      p = line_buf;
>>      ch = *p++;
>>      switch(ch) {
>>      case '?':
>>          /* TODO: Make this return the correct value for user-mode.  */
> 
> Is this comment still relevant?
> 
> If so, wouldn't it be better placed in [*]?
git blame shows that at the time when this comment was added
(1fddef4b1ba from 2005), the Stop Reply packet was like this:

+        /* TODO: Make this return the correct value for user-mode.  */
         snprintf(buf, sizeof(buf), "S%02x", SIGTRAP);

That is the form that contains only a signal number, so the comment must
refer to this hard-coded signal; I think it is still valid :-)
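
For the record (assuming GDB_SIGNAL_TRAP is 5, and the first process and
thread), the '?' reply changes with this patch roughly like this (see the
hunk further down):

    before:  T05thread:01;
    after:   T05thread:p01.01;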

However, you are right to point out that the PID used in user-mode
should probably be that of the QEMU process running the guest binary
(as is done for TIDs, I believe). I'll add a comment at [*] to point
that out.

Thanks.

-- 
Luc

> 
>> -        snprintf(buf, sizeof(buf), "T%02xthread:%02x;", GDB_SIGNAL_TRAP,
>> -                 cpu_gdb_index(s->c_cpu));
>> +        snprintf(buf, sizeof(buf), "T%02xthread:%s;", GDB_SIGNAL_TRAP,
>> +                 gdb_fmt_thread_id(s, s->c_cpu, thread_id, sizeof(thread_id)));
>>          put_packet(s, buf);
>>          /* Remove all the breakpoints when this query is issued,
>>           * because gdb is doing and initial connect and the state
>>           * should be cleaned up.
>>           */
>>

^ permalink raw reply	[flat|nested] 45+ messages in thread

end of thread, other threads:[~2018-12-06 12:28 UTC | newest]

Thread overview: 45+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-11-23  9:17 [Qemu-devel] [PATCH v7 00/16] gdbstub: support for the multiprocess extension Luc Michel
2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 01/16] hw/cpu: introduce CPU clusters Luc Michel
2018-11-23 18:10   ` Eduardo Habkost
2018-11-23 18:14     ` Peter Maydell
2018-11-23 18:24       ` Eduardo Habkost
2018-11-23 18:53         ` Peter Maydell
2018-11-23 22:38           ` Eduardo Habkost
2018-11-25 21:27     ` Philippe Mathieu-Daudé
2018-11-26 13:27       ` Eduardo Habkost
2018-11-30 16:52         ` Peter Maydell
2018-11-30 18:14           ` Eduardo Habkost
2018-12-03 11:20           ` Luc Michel
2018-12-03 11:23             ` Peter Maydell
2018-12-03 14:09               ` Luc Michel
2018-12-04 18:06                 ` Eduardo Habkost
2018-12-04 18:24                   ` Peter Maydell
2018-12-04 19:05                     ` Eduardo Habkost
2018-12-04 19:16                       ` Peter Maydell
2018-12-04 19:45                         ` Eduardo Habkost
2018-12-05 14:02                           ` Luc Michel
2018-12-05 16:47                             ` Eduardo Habkost
2018-12-05 13:44                     ` Luc Michel
2018-12-05 13:47                   ` Luc Michel
2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 02/16] gdbstub: introduce GDB processes Luc Michel
2018-11-25 20:58   ` Philippe Mathieu-Daudé
2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 03/16] gdbstub: add multiprocess support to '?' packets Luc Michel
2018-11-25 21:22   ` Philippe Mathieu-Daudé
2018-12-06 12:27     ` Luc Michel
2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 04/16] gdbstub: add multiprocess support to 'H' and 'T' packets Luc Michel
2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 05/16] gdbstub: add multiprocess support to vCont packets Luc Michel
2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 06/16] gdbstub: add multiprocess support to 'sC' packets Luc Michel
2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 07/16] gdbstub: add multiprocess support to (f|s)ThreadInfo and ThreadExtraInfo Luc Michel
2018-11-30 10:35   ` Edgar E. Iglesias
2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 08/16] gdbstub: add multiprocess support to Xfer:features:read: Luc Michel
2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 09/16] gdbstub: add multiprocess support to gdb_vm_state_change() Luc Michel
2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 10/16] gdbstub: add multiprocess support to 'D' packets Luc Michel
2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 11/16] gdbstub: add support for extended mode packet Luc Michel
2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 12/16] gdbstub: add support for vAttach packets Luc Michel
2018-11-25 21:00   ` Philippe Mathieu-Daudé
2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 13/16] gdbstub: processes initialization on new peer connection Luc Michel
2018-11-25 21:02   ` Philippe Mathieu-Daudé
2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 14/16] gdbstub: gdb_set_stop_cpu: ignore request when process is not attached Luc Michel
2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 15/16] gdbstub: add multiprocess extension support Luc Michel
2018-11-23  9:17 ` [Qemu-devel] [PATCH v7 16/16] arm/xlnx-zynqmp: put APUs and RPUs in separate CPU clusters Luc Michel
2018-11-25 21:10   ` Philippe Mathieu-Daudé
