From: Anthony Liguori <aliguori@us.ibm.com>
To: Alexey Kardashevskiy <aik@ozlabs.ru>
Cc: Alexander Graf <agraf@suse.de>,
qemu-devel@nongnu.org, qemu-ppc@nongnu.org,
Paolo Bonzini <pbonzini@redhat.com>,
Paul Mackerras <paulus@samba.org>,
David Gibson <david@gibson.dropbear.id.au>
Subject: Re: [Qemu-devel] [PATCH 04/17] target-ppc: Convert ppc cpu savevm to VMStateDescription
Date: Tue, 09 Jul 2013 09:08:01 -0500
Message-ID: <87r4f74yim.fsf@codemonkey.ws>
In-Reply-To: <51DB9C4B.6080106@ozlabs.ru>
Alexey Kardashevskiy <aik@ozlabs.ru> writes:
> On 07/09/2013 04:29 AM, Anthony Liguori wrote:
>> Alexey Kardashevskiy <aik@ozlabs.ru> writes:
>>
>>> From: David Gibson <david@gibson.dropbear.id.au>
>>>
>>> The savevm code for the powerpc cpu emulation is currently based around
>>> the old register_savevm() rather than register_vmstate() method. It's also
>>> rather broken, missing some important state on some CPU models.
>>>
>>> This patch completely rewrites the savevm for target-ppc, using the new
>>> VMStateDescription approach. Exactly what needs to be saved in what
>>> configurations has been more carefully examined, too. This introduces a
>>> new version (5) of the cpu save format. The old load function is retained
>>> to support version 4 images.
>>
>> Supporting "version 4" is purely an academic exercise. I wouldn't bother.
>
>
> Sorry, I do not get it. Will the patch be accepted as is (with the
> comments at the bottom removed), or do I have to remove the old handlers
> to get it upstream? Thanks.
It's dead code. Please remove it.
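
Concretely -- untested sketch, only to illustrate what I mean -- the
top-level description would just lose the legacy hooks:

/* Untested sketch: vmstate_ppc_cpu without the version-4 compatibility path */
const VMStateDescription vmstate_ppc_cpu = {
    .name = "cpu",
    .version_id = 5,
    .minimum_version_id = 5,
    /* no .minimum_version_id_old, no .load_state_old, no cpu_load_old() */
    .pre_save = cpu_pre_save,
    .post_load = cpu_post_load,
    .fields = (VMStateField []) {
        /* field list exactly as in the patch */
        VMSTATE_END_OF_LIST()
    },
    /* .subsections as in the patch */
};
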
>>> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
>>> [aik: ppc cpu savevm conversion fixed to use PowerPCCPU instead of CPUPPCState]
>>> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
>>> ---
>>> target-ppc/cpu-qom.h | 4 +
>>> target-ppc/cpu.h | 8 +-
>>> target-ppc/machine.c | 533 ++++++++++++++++++++++++++++++++++++-------
>>> target-ppc/translate_init.c | 2 +
>>> 4 files changed, 454 insertions(+), 93 deletions(-)
>>>
>>> diff --git a/target-ppc/cpu-qom.h b/target-ppc/cpu-qom.h
>>> index eb03a00..2b96b04 100644
>>> --- a/target-ppc/cpu-qom.h
>>> +++ b/target-ppc/cpu-qom.h
>>> @@ -102,4 +102,8 @@ PowerPCCPUClass *ppc_cpu_class_by_pvr(uint32_t pvr);
>>>
>>> void ppc_cpu_do_interrupt(CPUState *cpu);
>>>
>>> +#ifndef CONFIG_USER_ONLY
>>> +extern const struct VMStateDescription vmstate_ppc_cpu;
>>> +#endif
>>> +
>>> #endif
>>> diff --git a/target-ppc/cpu.h b/target-ppc/cpu.h
>>> index 0ede077..f30577d 100644
>>> --- a/target-ppc/cpu.h
>>> +++ b/target-ppc/cpu.h
>>> @@ -948,7 +948,7 @@ struct CPUPPCState {
>>> #if defined(TARGET_PPC64)
>>> /* PowerPC 64 SLB area */
>>> ppc_slb_t slb[64];
>>> - int slb_nr;
>>> + int32_t slb_nr;
>>> #endif
>>> /* segment registers */
>>> hwaddr htab_base;
>>> @@ -957,11 +957,11 @@ struct CPUPPCState {
>>> /* externally stored hash table */
>>> uint8_t *external_htab;
>>> /* BATs */
>>> - int nb_BATs;
>>> + uint32_t nb_BATs;
>>> target_ulong DBAT[2][8];
>>> target_ulong IBAT[2][8];
>>> /* PowerPC TLB registers (for 4xx, e500 and 60x software driven TLBs) */
>>> - int nb_tlb; /* Total number of TLB */
>>> + int32_t nb_tlb; /* Total number of TLB */
>>> int tlb_per_way; /* Speed-up helper: used to avoid divisions at run time */
>>> int nb_ways; /* Number of ways in the TLB set */
>>> int last_way; /* Last used way used to allocate TLB in a LRU way */
>>> @@ -1176,8 +1176,6 @@ static inline CPUPPCState *cpu_init(const char *cpu_model)
>>> #define cpu_signal_handler cpu_ppc_signal_handler
>>> #define cpu_list ppc_cpu_list
>>>
>>> -#define CPU_SAVE_VERSION 4
>>> -
>>> /* MMU modes definitions */
>>> #define MMU_MODE0_SUFFIX _user
>>> #define MMU_MODE1_SUFFIX _kernel
>>> diff --git a/target-ppc/machine.c b/target-ppc/machine.c
>>> index 2d10adb..1fcc6bc 100644
>>> --- a/target-ppc/machine.c
>>> +++ b/target-ppc/machine.c
>>> @@ -1,96 +1,12 @@
>>> #include "hw/hw.h"
>>> #include "hw/boards.h"
>>> #include "sysemu/kvm.h"
>>> +#include "helper_regs.h"
>>>
>>> -void cpu_save(QEMUFile *f, void *opaque)
>>> +static int cpu_load_old(QEMUFile *f, void *opaque, int version_id)
>>> {
>>> - CPUPPCState *env = (CPUPPCState *)opaque;
>>> - unsigned int i, j;
>>> - uint32_t fpscr;
>>> - target_ulong xer;
>>> -
>>> - for (i = 0; i < 32; i++)
>>> - qemu_put_betls(f, &env->gpr[i]);
>>> -#if !defined(TARGET_PPC64)
>>> - for (i = 0; i < 32; i++)
>>> - qemu_put_betls(f, &env->gprh[i]);
>>> -#endif
>>> - qemu_put_betls(f, &env->lr);
>>> - qemu_put_betls(f, &env->ctr);
>>> - for (i = 0; i < 8; i++)
>>> - qemu_put_be32s(f, &env->crf[i]);
>>> - xer = cpu_read_xer(env);
>>> - qemu_put_betls(f, &xer);
>>> - qemu_put_betls(f, &env->reserve_addr);
>>> - qemu_put_betls(f, &env->msr);
>>> - for (i = 0; i < 4; i++)
>>> - qemu_put_betls(f, &env->tgpr[i]);
>>> - for (i = 0; i < 32; i++) {
>>> - union {
>>> - float64 d;
>>> - uint64_t l;
>>> - } u;
>>> - u.d = env->fpr[i];
>>> - qemu_put_be64(f, u.l);
>>> - }
>>> - fpscr = env->fpscr;
>>> - qemu_put_be32s(f, &fpscr);
>>> - qemu_put_sbe32s(f, &env->access_type);
>>> -#if defined(TARGET_PPC64)
>>> - qemu_put_betls(f, &env->spr[SPR_ASR]);
>>> - qemu_put_sbe32s(f, &env->slb_nr);
>>> -#endif
>>> - qemu_put_betls(f, &env->spr[SPR_SDR1]);
>>> - for (i = 0; i < 32; i++)
>>> - qemu_put_betls(f, &env->sr[i]);
>>> - for (i = 0; i < 2; i++)
>>> - for (j = 0; j < 8; j++)
>>> - qemu_put_betls(f, &env->DBAT[i][j]);
>>> - for (i = 0; i < 2; i++)
>>> - for (j = 0; j < 8; j++)
>>> - qemu_put_betls(f, &env->IBAT[i][j]);
>>> - qemu_put_sbe32s(f, &env->nb_tlb);
>>> - qemu_put_sbe32s(f, &env->tlb_per_way);
>>> - qemu_put_sbe32s(f, &env->nb_ways);
>>> - qemu_put_sbe32s(f, &env->last_way);
>>> - qemu_put_sbe32s(f, &env->id_tlbs);
>>> - qemu_put_sbe32s(f, &env->nb_pids);
>>> - if (env->tlb.tlb6) {
>>> - // XXX assumes 6xx
>>> - for (i = 0; i < env->nb_tlb; i++) {
>>> - qemu_put_betls(f, &env->tlb.tlb6[i].pte0);
>>> - qemu_put_betls(f, &env->tlb.tlb6[i].pte1);
>>> - qemu_put_betls(f, &env->tlb.tlb6[i].EPN);
>>> - }
>>> - }
>>> - for (i = 0; i < 4; i++)
>>> - qemu_put_betls(f, &env->pb[i]);
>>> - for (i = 0; i < 1024; i++)
>>> - qemu_put_betls(f, &env->spr[i]);
>>> - qemu_put_be32s(f, &env->vscr);
>>> - qemu_put_be64s(f, &env->spe_acc);
>>> - qemu_put_be32s(f, &env->spe_fscr);
>>> - qemu_put_betls(f, &env->msr_mask);
>>> - qemu_put_be32s(f, &env->flags);
>>> - qemu_put_sbe32s(f, &env->error_code);
>>> - qemu_put_be32s(f, &env->pending_interrupts);
>>> - qemu_put_be32s(f, &env->irq_input_state);
>>> - for (i = 0; i < POWERPC_EXCP_NB; i++)
>>> - qemu_put_betls(f, &env->excp_vectors[i]);
>>> - qemu_put_betls(f, &env->excp_prefix);
>>> - qemu_put_betls(f, &env->ivor_mask);
>>> - qemu_put_betls(f, &env->ivpr_mask);
>>> - qemu_put_betls(f, &env->hreset_vector);
>>> - qemu_put_betls(f, &env->nip);
>>> - qemu_put_betls(f, &env->hflags);
>>> - qemu_put_betls(f, &env->hflags_nmsr);
>>> - qemu_put_sbe32s(f, &env->mmu_idx);
>>> - qemu_put_sbe32(f, 0);
>>> -}
>>> -
>>> -int cpu_load(QEMUFile *f, void *opaque, int version_id)
>>> -{
>>> - CPUPPCState *env = (CPUPPCState *)opaque;
>>> + PowerPCCPU *cpu = opaque;
>>> + CPUPPCState *env = &cpu->env;
>>> unsigned int i, j;
>>> target_ulong sdr1;
>>> uint32_t fpscr;
>>> @@ -177,3 +93,444 @@ int cpu_load(QEMUFile *f, void *opaque, int version_id)
>>>
>>> return 0;
>>> }
>>> +
>>> +static int get_avr(QEMUFile *f, void *pv, size_t size)
>>> +{
>>> + ppc_avr_t *v = pv;
>>> +
>>> + v->u64[0] = qemu_get_be64(f);
>>> + v->u64[1] = qemu_get_be64(f);
>>> +
>>> + return 0;
>>> +}
>>> +
>>> +static void put_avr(QEMUFile *f, void *pv, size_t size)
>>> +{
>>> + ppc_avr_t *v = pv;
>>> +
>>> + qemu_put_be64(f, v->u64[0]);
>>> + qemu_put_be64(f, v->u64[1]);
>>> +}
>>> +
>>> +const VMStateInfo vmstate_info_avr = {
>>> + .name = "avr",
>>> + .get = get_avr,
>>> + .put = put_avr,
>>> +};
>>> +
>>> +#define VMSTATE_AVR_ARRAY_V(_f, _s, _n, _v) \
>>> + VMSTATE_ARRAY(_f, _s, _n, _v, vmstate_info_avr, ppc_avr_t)
>>> +
>>> +#define VMSTATE_AVR_ARRAY(_f, _s, _n) \
>>> + VMSTATE_AVR_ARRAY_V(_f, _s, _n, 0)
>>> +
>>> +static void cpu_pre_save(void *opaque)
>>> +{
>>> + PowerPCCPU *cpu = opaque;
>>> + CPUPPCState *env = &cpu->env;
>>> + int i;
>>> +
>>> + env->spr[SPR_LR] = env->lr;
>>> + env->spr[SPR_CTR] = env->ctr;
>>> + env->spr[SPR_XER] = env->xer;
>>> +#if defined(TARGET_PPC64)
>>> + env->spr[SPR_CFAR] = env->cfar;
>>> +#endif
>>> + env->spr[SPR_BOOKE_SPEFSCR] = env->spe_fscr;
>>> +
>>> + for (i = 0; (i < 4) && (i < env->nb_BATs); i++) {
>>> + env->spr[SPR_DBAT0U + 2*i] = env->DBAT[0][i];
>>> + env->spr[SPR_DBAT0U + 2*i + 1] = env->DBAT[1][i];
>>> + env->spr[SPR_IBAT0U + 2*i] = env->IBAT[0][i];
>>> + env->spr[SPR_IBAT0U + 2*i + 1] = env->IBAT[1][i];
>>> + }
>>> + for (i = 0; (i < 4) && ((i+4) < env->nb_BATs); i++) {
>>> + env->spr[SPR_DBAT4U + 2*i] = env->DBAT[0][i+4];
>>> + env->spr[SPR_DBAT4U + 2*i + 1] = env->DBAT[1][i+4];
>>> + env->spr[SPR_IBAT4U + 2*i] = env->IBAT[0][i+4];
>>> + env->spr[SPR_IBAT4U + 2*i + 1] = env->IBAT[1][i+4];
>>> + }
>>> +}
>>> +
>>> +static int cpu_post_load(void *opaque, int version_id)
>>> +{
>>> + PowerPCCPU *cpu = opaque;
>>> + CPUPPCState *env = &cpu->env;
>>> + int i;
>>> +
>>> + env->lr = env->spr[SPR_LR];
>>> + env->ctr = env->spr[SPR_CTR];
>>> + env->xer = env->spr[SPR_XER];
>>> +#if defined(TARGET_PPC64)
>>> + env->cfar = env->spr[SPR_CFAR];
>>> +#endif
>>> + env->spe_fscr = env->spr[SPR_BOOKE_SPEFSCR];
>>> +
>>> + for (i = 0; (i < 4) && (i < env->nb_BATs); i++) {
>>> + env->DBAT[0][i] = env->spr[SPR_DBAT0U + 2*i];
>>> + env->DBAT[1][i] = env->spr[SPR_DBAT0U + 2*i + 1];
>>> + env->IBAT[0][i] = env->spr[SPR_IBAT0U + 2*i];
>>> + env->IBAT[1][i] = env->spr[SPR_IBAT0U + 2*i + 1];
>>> + }
>>> + for (i = 0; (i < 4) && ((i+4) < env->nb_BATs); i++) {
>>> + env->DBAT[0][i+4] = env->spr[SPR_DBAT4U + 2*i];
>>> + env->DBAT[1][i+4] = env->spr[SPR_DBAT4U + 2*i + 1];
>>> + env->IBAT[0][i+4] = env->spr[SPR_IBAT4U + 2*i];
>>> + env->IBAT[1][i+4] = env->spr[SPR_IBAT4U + 2*i + 1];
>>> + }
>>> +
>>> + /* Restore htab_base and htab_mask variables */
>>> + ppc_store_sdr1(env, env->spr[SPR_SDR1]);
>>> +
>>> + hreg_compute_hflags(env);
>>> + hreg_compute_mem_idx(env);
>>> +
>>> + return 0;
>>> +}
>>> +
>>> +static bool fpu_needed(void *opaque)
>>> +{
>>> + PowerPCCPU *cpu = opaque;
>>> +
>>> + return (cpu->env.insns_flags & PPC_FLOAT);
>>> +}
>>> +
>>> +static const VMStateDescription vmstate_fpu = {
>>> + .name = "cpu/fpu",
>>> + .version_id = 1,
>>> + .minimum_version_id = 1,
>>> + .minimum_version_id_old = 1,
>>> + .fields = (VMStateField []) {
>>> + VMSTATE_FLOAT64_ARRAY(env.fpr, PowerPCCPU, 32),
>>> + VMSTATE_UINTTL(env.fpscr, PowerPCCPU),
>>> + VMSTATE_END_OF_LIST()
>>> + },
>>> +};
>>> +
>>> +static bool altivec_needed(void *opaque)
>>> +{
>>> + PowerPCCPU *cpu = opaque;
>>> +
>>> + return (cpu->env.insns_flags & PPC_ALTIVEC);
>>> +}
>>> +
>>> +static const VMStateDescription vmstate_altivec = {
>>> + .name = "cpu/altivec",
>>> + .version_id = 1,
>>> + .minimum_version_id = 1,
>>> + .minimum_version_id_old = 1,
>>> + .fields = (VMStateField []) {
>>> + VMSTATE_AVR_ARRAY(env.avr, PowerPCCPU, 32),
>>> + VMSTATE_UINT32(env.vscr, PowerPCCPU),
>>> + VMSTATE_END_OF_LIST()
>>> + },
>>> +};
>>> +
>>> +static bool vsx_needed(void *opaque)
>>> +{
>>> + PowerPCCPU *cpu = opaque;
>>> +
>>> + return (cpu->env.insns_flags2 & PPC2_VSX);
>>> +}
>>> +
>>> +static const VMStateDescription vmstate_vsx = {
>>> + .name = "cpu/vsx",
>>> + .version_id = 1,
>>> + .minimum_version_id = 1,
>>> + .minimum_version_id_old = 1,
>>> + .fields = (VMStateField []) {
>>> + VMSTATE_UINT64_ARRAY(env.vsr, PowerPCCPU, 32),
>>> + VMSTATE_END_OF_LIST()
>>> + },
>>> +};
>>> +
>>> +static bool sr_needed(void *opaque)
>>> +{
>>> +#ifdef TARGET_PPC64
>>> + PowerPCCPU *cpu = opaque;
>>> +
>>> + return !(cpu->env.mmu_model & POWERPC_MMU_64);
>>> +#else
>>> + return true;
>>> +#endif
>>> +}
>>> +
>>> +static const VMStateDescription vmstate_sr = {
>>> + .name = "cpu/sr",
>>> + .version_id = 1,
>>> + .minimum_version_id = 1,
>>> + .minimum_version_id_old = 1,
>>> + .fields = (VMStateField []) {
>>> + VMSTATE_UINTTL_ARRAY(env.sr, PowerPCCPU, 32),
>>> + VMSTATE_END_OF_LIST()
>>> + },
>>> +};
>>> +
>>> +#ifdef TARGET_PPC64
>>> +static int get_slbe(QEMUFile *f, void *pv, size_t size)
>>> +{
>>> + ppc_slb_t *v = pv;
>>> +
>>> + v->esid = qemu_get_be64(f);
>>> + v->vsid = qemu_get_be64(f);
>>> +
>>> + return 0;
>>> +}
>>> +
>>> +static void put_slbe(QEMUFile *f, void *pv, size_t size)
>>> +{
>>> + ppc_slb_t *v = pv;
>>> +
>>> + qemu_put_be64(f, v->esid);
>>> + qemu_put_be64(f, v->vsid);
>>> +}
>>> +
>>> +const VMStateInfo vmstate_info_slbe = {
>>> + .name = "slbe",
>>> + .get = get_slbe,
>>> + .put = put_slbe,
>>> +};
>>> +
>>> +#define VMSTATE_SLB_ARRAY_V(_f, _s, _n, _v) \
>>> + VMSTATE_ARRAY(_f, _s, _n, _v, vmstate_info_slbe, ppc_slb_t)
>>> +
>>> +#define VMSTATE_SLB_ARRAY(_f, _s, _n) \
>>> + VMSTATE_SLB_ARRAY_V(_f, _s, _n, 0)
>>> +
>>> +static bool slb_needed(void *opaque)
>>> +{
>>> + PowerPCCPU *cpu = opaque;
>>> +
>>> + /* We don't support any of the old segment table based 64-bit CPUs */
>>> + return (cpu->env.mmu_model & POWERPC_MMU_64);
>>> +}
>>> +
>>> +static const VMStateDescription vmstate_slb = {
>>> + .name = "cpu/slb",
>>> + .version_id = 1,
>>> + .minimum_version_id = 1,
>>> + .minimum_version_id_old = 1,
>>> + .fields = (VMStateField []) {
>>> + VMSTATE_INT32_EQUAL(env.slb_nr, PowerPCCPU),
>>> + VMSTATE_SLB_ARRAY(env.slb, PowerPCCPU, 64),
>>> + VMSTATE_END_OF_LIST()
>>> + }
>>> +};
>>> +#endif /* TARGET_PPC64 */
>>> +
>>> +static const VMStateDescription vmstate_tlb6xx_entry = {
>>> + .name = "cpu/tlb6xx_entry",
>>> + .version_id = 1,
>>> + .minimum_version_id = 1,
>>> + .minimum_version_id_old = 1,
>>> + .fields = (VMStateField []) {
>>> + VMSTATE_UINTTL(pte0, ppc6xx_tlb_t),
>>> + VMSTATE_UINTTL(pte1, ppc6xx_tlb_t),
>>> + VMSTATE_UINTTL(EPN, ppc6xx_tlb_t),
>>> + VMSTATE_END_OF_LIST()
>>> + },
>>> +};
>>> +
>>> +static bool tlb6xx_needed(void *opaque)
>>> +{
>>> + PowerPCCPU *cpu = opaque;
>>> + CPUPPCState *env = &cpu->env;
>>> +
>>> + return env->nb_tlb && (env->tlb_type == TLB_6XX);
>>> +}
>>> +
>>> +static const VMStateDescription vmstate_tlb6xx = {
>>> + .name = "cpu/tlb6xx",
>>> + .version_id = 1,
>>> + .minimum_version_id = 1,
>>> + .minimum_version_id_old = 1,
>>> + .fields = (VMStateField []) {
>>> + VMSTATE_INT32_EQUAL(env.nb_tlb, PowerPCCPU),
>>> + VMSTATE_STRUCT_VARRAY_POINTER_INT32(env.tlb.tlb6, PowerPCCPU,
>>> + env.nb_tlb,
>>> + vmstate_tlb6xx_entry,
>>> + ppc6xx_tlb_t),
>>> + VMSTATE_UINTTL_ARRAY(env.tgpr, PowerPCCPU, 4),
>>> + VMSTATE_END_OF_LIST()
>>> + }
>>> +};
>>> +
>>> +static const VMStateDescription vmstate_tlbemb_entry = {
>>> + .name = "cpu/tlbemb_entry",
>>> + .version_id = 1,
>>> + .minimum_version_id = 1,
>>> + .minimum_version_id_old = 1,
>>> + .fields = (VMStateField []) {
>>> + VMSTATE_UINT64(RPN, ppcemb_tlb_t),
>>> + VMSTATE_UINTTL(EPN, ppcemb_tlb_t),
>>> + VMSTATE_UINTTL(PID, ppcemb_tlb_t),
>>> + VMSTATE_UINTTL(size, ppcemb_tlb_t),
>>> + VMSTATE_UINT32(prot, ppcemb_tlb_t),
>>> + VMSTATE_UINT32(attr, ppcemb_tlb_t),
>>> + VMSTATE_END_OF_LIST()
>>> + },
>>> +};
>>> +
>>> +static bool tlbemb_needed(void *opaque)
>>> +{
>>> + PowerPCCPU *cpu = opaque;
>>> + CPUPPCState *env = &cpu->env;
>>> +
>>> + return env->nb_tlb && (env->tlb_type == TLB_EMB);
>>> +}
>>> +
>>> +static bool pbr403_needed(void *opaque)
>>> +{
>>> + PowerPCCPU *cpu = opaque;
>>> + uint32_t pvr = cpu->env.spr[SPR_PVR];
>>> +
>>> + return (pvr & 0xffff0000) == 0x00200000;
>>> +}
>>> +
>>> +static const VMStateDescription vmstate_pbr403 = {
>>> + .name = "cpu/pbr403",
>>> + .version_id = 1,
>>> + .minimum_version_id = 1,
>>> + .minimum_version_id_old = 1,
>>> + .fields = (VMStateField []) {
>>> + VMSTATE_UINTTL_ARRAY(env.pb, PowerPCCPU, 4),
>>> + VMSTATE_END_OF_LIST()
>>> + },
>>> +};
>>> +
>>> +static const VMStateDescription vmstate_tlbemb = {
>>> + .name = "cpu/tlb6xx",
>>> + .version_id = 1,
>>> + .minimum_version_id = 1,
>>> + .minimum_version_id_old = 1,
>>> + .fields = (VMStateField []) {
>>> + VMSTATE_INT32_EQUAL(env.nb_tlb, PowerPCCPU),
>>> + VMSTATE_STRUCT_VARRAY_POINTER_INT32(env.tlb.tlbe, PowerPCCPU,
>>> + env.nb_tlb,
>>> + vmstate_tlbemb_entry,
>>> + ppcemb_tlb_t),
>>> + /* 403 protection registers */
>>> + VMSTATE_END_OF_LIST()
>>> + },
>>> + .subsections = (VMStateSubsection []) {
>>> + {
>>> + .vmsd = &vmstate_pbr403,
>>> + .needed = pbr403_needed,
>>> + } , {
>>> + /* empty */
>>> + }
>>> + }
>>> +};
>>> +
>>> +static const VMStateDescription vmstate_tlbmas_entry = {
>>> + .name = "cpu/tlbmas_entry",
>>> + .version_id = 1,
>>> + .minimum_version_id = 1,
>>> + .minimum_version_id_old = 1,
>>> + .fields = (VMStateField []) {
>>> + VMSTATE_UINT32(mas8, ppcmas_tlb_t),
>>> + VMSTATE_UINT32(mas1, ppcmas_tlb_t),
>>> + VMSTATE_UINT64(mas2, ppcmas_tlb_t),
>>> + VMSTATE_UINT64(mas7_3, ppcmas_tlb_t),
>>> + VMSTATE_END_OF_LIST()
>>> + },
>>> +};
>>> +
>>> +static bool tlbmas_needed(void *opaque)
>>> +{
>>> + PowerPCCPU *cpu = opaque;
>>> + CPUPPCState *env = &cpu->env;
>>> +
>>> + return env->nb_tlb && (env->tlb_type == TLB_MAS);
>>> +}
>>> +
>>> +static const VMStateDescription vmstate_tlbmas = {
>>> + .name = "cpu/tlbmas",
>>> + .version_id = 1,
>>> + .minimum_version_id = 1,
>>> + .minimum_version_id_old = 1,
>>> + .fields = (VMStateField []) {
>>> + VMSTATE_INT32_EQUAL(env.nb_tlb, PowerPCCPU),
>>> + VMSTATE_STRUCT_VARRAY_POINTER_INT32(env.tlb.tlbm, PowerPCCPU,
>>> + env.nb_tlb,
>>> + vmstate_tlbmas_entry,
>>> + ppcmas_tlb_t),
>>> + VMSTATE_END_OF_LIST()
>>> + }
>>> +};
>>> +
>>> +const VMStateDescription vmstate_ppc_cpu = {
>>> + .name = "cpu",
>>> + .version_id = 5,
>>> + .minimum_version_id = 5,
>>> + .minimum_version_id_old = 4,
>>> + .load_state_old = cpu_load_old,
>>> + .pre_save = cpu_pre_save,
>>> + .post_load = cpu_post_load,
>>> + .fields = (VMStateField []) {
>>> + /* Verify we haven't changed the pvr */
>>> + VMSTATE_UINTTL_EQUAL(env.spr[SPR_PVR], PowerPCCPU),
>>> +
>>> + /* User mode architected state */
>>> + VMSTATE_UINTTL_ARRAY(env.gpr, PowerPCCPU, 32),
>>> +#if !defined(TARGET_PPC64)
>>> + VMSTATE_UINTTL_ARRAY(env.gprh, PowerPCCPU, 32),
>>> +#endif
>>> + VMSTATE_UINT32_ARRAY(env.crf, PowerPCCPU, 8),
>>> + VMSTATE_UINTTL(env.nip, PowerPCCPU),
>>> +
>>> + /* SPRs */
>>> + VMSTATE_UINTTL_ARRAY(env.spr, PowerPCCPU, 1024),
>>> + VMSTATE_UINT64(env.spe_acc, PowerPCCPU),
>>> +
>>> + /* Reservation */
>>> + VMSTATE_UINTTL(env.reserve_addr, PowerPCCPU),
>>> +
>>> + /* Supervisor mode architected state */
>>> + VMSTATE_UINTTL(env.msr, PowerPCCPU),
>>> +
>>> + /* Internal state */
>>> + VMSTATE_UINTTL(env.hflags_nmsr, PowerPCCPU),
>>> + /* FIXME: access_type? */
>>> +
>>> + /* Sanity checking */
>>> + VMSTATE_UINTTL_EQUAL(env.msr_mask, PowerPCCPU),
>>> + VMSTATE_UINT64_EQUAL(env.insns_flags, PowerPCCPU),
>>> + VMSTATE_UINT64_EQUAL(env.insns_flags2, PowerPCCPU),
>>> + VMSTATE_UINT32_EQUAL(env.nb_BATs, PowerPCCPU),
>>> + VMSTATE_END_OF_LIST()
>>> + },
>>> + .subsections = (VMStateSubsection []) {
>>> + {
>>> + .vmsd = &vmstate_fpu,
>>> + .needed = fpu_needed,
>>> + } , {
>>> + .vmsd = &vmstate_altivec,
>>> + .needed = altivec_needed,
>>> + } , {
>>> + .vmsd = &vmstate_vsx,
>>> + .needed = vsx_needed,
>>> + } , {
>>> + .vmsd = &vmstate_sr,
>>> + .needed = sr_needed,
>>> + } , {
>>> +#ifdef TARGET_PPC64
>>> + .vmsd = &vmstate_slb,
>>> + .needed = slb_needed,
>>> + } , {
>>> +#endif /* TARGET_PPC64 */
>>> + .vmsd = &vmstate_tlb6xx,
>>> + .needed = tlb6xx_needed,
>>> + } , {
>>> + .vmsd = &vmstate_tlbemb,
>>> + .needed = tlbemb_needed,
>>> + } , {
>>> + .vmsd = &vmstate_tlbmas,
>>> + .needed = tlbmas_needed,
>>> + } , {
>>> + /* FIXME: DCRs? */
>>> + /* FIXME: timebase? */
>>> + /* empty */
>>
>> Are they needed or not needed?
>
> DCR is not needed; I'll remove it.
>
> Timebase is needed but it requires kernel support and either way it should
> not prevent the rest of the patch from going upstream.
So migration doesn't work?
If you need timebase, what happens without it?
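
If it does have to be transferred, I'd expect it to end up as yet another
subsection at some point, roughly along these lines (hypothetical, untested
sketch; the needed() test and the field layout are placeholders, not taken
from this patch):

static bool timebase_needed(void *opaque)
{
    PowerPCCPU *cpu = opaque;

    /* placeholder: assumes only CPUs that model a timebase carry this state */
    return cpu->env.tb_env != NULL;
}

static const VMStateDescription vmstate_timebase = {
    .name = "cpu/timebase",
    .version_id = 1,
    .minimum_version_id = 1,
    .minimum_version_id_old = 1,
    .fields = (VMStateField []) {
        /* actual timebase offset/value fields would go here */
        VMSTATE_END_OF_LIST()
    },
};

It would then hang off vmstate_ppc_cpu's .subsections list like the others,
gated by timebase_needed().
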
Regards,
Anthony Liguori
>
> I'll remove both comments anyway.
>
>
>> If they're needed, please add them.
>>
>> Regards,
>>
>> Anthony Liguori
>>
>>> + }
>>> + }
>>> +};
>>> diff --git a/target-ppc/translate_init.c b/target-ppc/translate_init.c
>>> index d8758d5..95aebf7 100644
>>> --- a/target-ppc/translate_init.c
>>> +++ b/target-ppc/translate_init.c
>>> @@ -8295,6 +8295,8 @@ static void ppc_cpu_class_init(ObjectClass *oc, void *data)
>>>
>>> cc->class_by_name = ppc_cpu_class_by_name;
>>> cc->do_interrupt = ppc_cpu_do_interrupt;
>>> +
>>> + cpu_class_set_vmsd(cc, &vmstate_ppc_cpu);
>>> }
>>>
>>> static const TypeInfo ppc_cpu_type_info = {
>>> --
>>> 1.7.10.4
>>
>
>
> --
> Alexey