* [PATCH] KVM: emulator: Handle wraparound in (cs_base + offset) when fetching.
@ 2011-04-13 15:44 Nelson Elhage
  2011-04-13 16:06 ` Avi Kivity
  2011-04-17  9:44 ` [PATCH] KVM: emulator: Handle wraparound in (cs_base + offset) when fetching Avi Kivity
  0 siblings, 2 replies; 5+ messages in thread
From: Nelson Elhage @ 2011-04-13 15:44 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm, linux-kernel, Nelson Elhage

Currently, setting a large (i.e. negative) base address for %cs does not work on
a 64-bit host. The "JOS" teaching operating system, used by MIT and other
universities, relies on such segments while bootstrapping its way to full
virtual memory management.

Signed-off-by: Nelson Elhage <nelhage@ksplice.com>
---
 arch/x86/kvm/emulate.c |    5 ++++-
 1 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 0ad47b8..54e84b2 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -505,9 +505,12 @@ static int do_fetch_insn_byte(struct x86_emulate_ctxt *ctxt,
 	int size, cur_size;
 
 	if (eip == fc->end) {
+		unsigned long linear = eip + ctxt->cs_base;
+		if (ctxt->mode != X86EMUL_MODE_PROT64)
+			linear &= (u32)-1;
 		cur_size = fc->end - fc->start;
 		size = min(15UL - cur_size, PAGE_SIZE - offset_in_page(eip));
-		rc = ops->fetch(ctxt->cs_base + eip, fc->data + cur_size,
+		rc = ops->fetch(linear, fc->data + cur_size,
 				size, ctxt->vcpu, &ctxt->exception);
 		if (rc != X86EMUL_CONTINUE)
 			return rc;
-- 
1.7.2.43.g36c08.dirty
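
For illustration, here is a stand-alone sketch of the wraparound the fetch path has to reproduce: the emulator adds cs_base and eip in 64-bit arithmetic, so without the (u32)-1 mask the sum never wraps at 2^32 the way a 32-bit linear address does on real hardware. The values are hypothetical, not taken from JOS or from the patch.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* A base that is negative as a signed 32-bit quantity, plus an offset,
	 * overflows 32 bits when the sum is computed in 64-bit arithmetic. */
	uint64_t cs_base = 0xf0000000;	/* hypothetical segment base */
	uint64_t eip     = 0x10001000;	/* hypothetical instruction pointer */

	uint64_t unmasked = cs_base + eip;		/* 0x100001000 */
	uint64_t masked   = unmasked & (uint32_t)-1;	/* 0x00001000: wraps as real hardware does outside long mode */

	printf("unmasked %#llx, masked %#llx\n",
	       (unsigned long long)unmasked, (unsigned long long)masked);
	return 0;
}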



* Re: [PATCH] KVM: emulator: Handle wraparound in (cs_base + offset) when fetching.
  2011-04-13 15:44 [PATCH] KVM: emulator: Handle wraparound in (cs_base + offset) when fetching Nelson Elhage
@ 2011-04-13 16:06 ` Avi Kivity
  2011-04-15  3:27   ` [PATCH] KVM: emulator: Use linearize() when fetching instructions Nelson Elhage
  2011-04-17  9:44 ` [PATCH] KVM: emulator: Handle wraparound in (cs_base + offset) when fetching Avi Kivity
  1 sibling, 1 reply; 5+ messages in thread
From: Avi Kivity @ 2011-04-13 16:06 UTC (permalink / raw)
  To: Nelson Elhage; +Cc: kvm, linux-kernel

On 04/13/2011 06:44 PM, Nelson Elhage wrote:
> Currently, setting a large (i.e. negative) base address for %cs does not work on
> a 64-bit host. The "JOS" teaching operating system, used by MIT and other
> universities, relies on such segments while bootstrapping its way to full
> virtual memory management.
>
> Signed-off-by: Nelson Elhage <nelhage@ksplice.com>
> ---
>   arch/x86/kvm/emulate.c |    5 ++++-
>   1 files changed, 4 insertions(+), 1 deletions(-)
>
> diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
> index 0ad47b8..54e84b2 100644
> --- a/arch/x86/kvm/emulate.c
> +++ b/arch/x86/kvm/emulate.c
> @@ -505,9 +505,12 @@ static int do_fetch_insn_byte(struct x86_emulate_ctxt *ctxt,
>   	int size, cur_size;
>
>   	if (eip == fc->end) {
> +		unsigned long linear = eip + ctxt->cs_base;
> +		if (ctxt->mode != X86EMUL_MODE_PROT64)
> +		linear &= (u32)-1;
>   		cur_size = fc->end - fc->start;
>   		size = min(15UL - cur_size, PAGE_SIZE - offset_in_page(eip));
> -		rc = ops->fetch(ctxt->cs_base + eip, fc->data + cur_size,
> +		rc = ops->fetch(linear, fc->data + cur_size,
>   				size, ctxt->vcpu, &ctxt->exception);
>   		if (rc != X86EMUL_CONTINUE)
>   			return rc;

A better fix would be to call linearize() here, which does the necessary 
truncation as well as segment checks.

However, this patch is a lot more backportable, so I think it should be 
applied, and a conversion to linearize() performed afterwards.

-- 
error compiling committee.c: too many arguments to function



* [PATCH] KVM: emulator: Use linearize() when fetching instructions.
  2011-04-13 16:06 ` Avi Kivity
@ 2011-04-15  3:27   ` Nelson Elhage
  2011-04-17 12:26     ` Avi Kivity
  0 siblings, 1 reply; 5+ messages in thread
From: Nelson Elhage @ 2011-04-15  3:27 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm, linux-kernel, Nelson Elhage

This means that the truncation behavior in linearize() grows one additional
piece of complexity: when fetching, truncation depends on the execution mode
rather than on the current address size.

Signed-off-by: Nelson Elhage <nelhage@ksplice.com>
---
 arch/x86/include/asm/kvm_emulate.h |    1 -
 arch/x86/kvm/emulate.c             |   23 ++++++++++++-----------
 2 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/kvm_emulate.h b/arch/x86/include/asm/kvm_emulate.h
index 0818448..9b760c8 100644
--- a/arch/x86/include/asm/kvm_emulate.h
+++ b/arch/x86/include/asm/kvm_emulate.h
@@ -265,7 +265,6 @@ struct x86_emulate_ctxt {
 	unsigned long eip; /* eip before instruction emulation */
 	/* Emulated execution mode, represented by an X86EMUL_MODE value. */
 	int mode;
-	u32 cs_base;
 
 	/* interruptibility state, as a result of execution of STI or MOV SS */
 	int interruptibility;
diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index a5f63d4..d3d43a7 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -542,7 +542,7 @@ static int emulate_nm(struct x86_emulate_ctxt *ctxt)
 
 static int linearize(struct x86_emulate_ctxt *ctxt,
 		     struct segmented_address addr,
-		     unsigned size, bool write,
+		     unsigned size, bool write, bool fetch,
 		     ulong *linear)
 {
 	struct decode_cache *c = &ctxt->decode;
@@ -602,7 +602,7 @@ static int linearize(struct x86_emulate_ctxt *ctxt,
 		}
 		break;
 	}
-	if (c->ad_bytes != 8)
+	if (fetch ? ctxt->mode != X86EMUL_MODE_PROT64 : c->ad_bytes != 8)
 		la &= (u32)-1;
 	*linear = la;
 	return X86EMUL_CONTINUE;
@@ -621,7 +621,7 @@ static int segmented_read_std(struct x86_emulate_ctxt *ctxt,
 	int rc;
 	ulong linear;
 
-	rc = linearize(ctxt, addr, size, false, &linear);
+	rc = linearize(ctxt, addr, size, false, false, &linear);
 	if (rc != X86EMUL_CONTINUE)
 		return rc;
 	return ctxt->ops->read_std(linear, data, size, ctxt->vcpu,
@@ -637,11 +637,13 @@ static int do_fetch_insn_byte(struct x86_emulate_ctxt *ctxt,
 	int size, cur_size;
 
 	if (eip == fc->end) {
-		unsigned long linear = eip + ctxt->cs_base;
-		if (ctxt->mode != X86EMUL_MODE_PROT64)
-			linear &= (u32)-1;
+		unsigned long linear;
+		struct segmented_address addr = { .ea = eip, .seg = VCPU_SREG_CS };
 		cur_size = fc->end - fc->start;
 		size = min(15UL - cur_size, PAGE_SIZE - offset_in_page(eip));
+		rc = linearize(ctxt, addr, size, false, true, &linear);
+		if (rc != X86EMUL_CONTINUE)
+			return rc;
 		rc = ops->fetch(linear, fc->data + cur_size,
 				size, ctxt->vcpu, &ctxt->exception);
 		if (rc != X86EMUL_CONTINUE)
@@ -1047,7 +1049,7 @@ static int segmented_read(struct x86_emulate_ctxt *ctxt,
 	int rc;
 	ulong linear;
 
-	rc = linearize(ctxt, addr, size, false, &linear);
+	rc = linearize(ctxt, addr, size, false, false, &linear);
 	if (rc != X86EMUL_CONTINUE)
 		return rc;
 	return read_emulated(ctxt, ctxt->ops, linear, data, size);
@@ -1061,7 +1063,7 @@ static int segmented_write(struct x86_emulate_ctxt *ctxt,
 	int rc;
 	ulong linear;
 
-	rc = linearize(ctxt, addr, size, true, &linear);
+	rc = linearize(ctxt, addr, size, true, false, &linear);
 	if (rc != X86EMUL_CONTINUE)
 		return rc;
 	return ctxt->ops->write_emulated(linear, data, size,
@@ -1076,7 +1078,7 @@ static int segmented_cmpxchg(struct x86_emulate_ctxt *ctxt,
 	int rc;
 	ulong linear;
 
-	rc = linearize(ctxt, addr, size, true, &linear);
+	rc = linearize(ctxt, addr, size, true, false, &linear);
 	if (rc != X86EMUL_CONTINUE)
 		return rc;
 	return ctxt->ops->cmpxchg_emulated(linear, orig_data, data,
@@ -2576,7 +2578,7 @@ static int em_invlpg(struct x86_emulate_ctxt *ctxt)
 	int rc;
 	ulong linear;
 
-	rc = linearize(ctxt, c->src.addr.mem, 1, false, &linear);
+	rc = linearize(ctxt, c->src.addr.mem, 1, false, false, &linear);
 	if (rc == X86EMUL_CONTINUE)
 		emulate_invlpg(ctxt->vcpu, linear);
 	/* Disable writeback. */
@@ -3154,7 +3156,6 @@ x86_decode_insn(struct x86_emulate_ctxt *ctxt, void *insn, int insn_len)
 	c->fetch.end = c->fetch.start + insn_len;
 	if (insn_len > 0)
 		memcpy(c->fetch.data, insn, insn_len);
-	ctxt->cs_base = seg_base(ctxt, ops, VCPU_SREG_CS);
 
 	switch (mode) {
 	case X86EMUL_MODE_REAL:
-- 
1.7.2.43.g36c08.dirty



* Re: [PATCH] KVM: emulator: Handle wraparound in (cs_base + offset) when fetching.
  2011-04-13 15:44 [PATCH] KVM: emulator: Handle wraparound in (cs_base + offset) when fetching Nelson Elhage
  2011-04-13 16:06 ` Avi Kivity
@ 2011-04-17  9:44 ` Avi Kivity
  1 sibling, 0 replies; 5+ messages in thread
From: Avi Kivity @ 2011-04-17  9:44 UTC (permalink / raw)
  To: Nelson Elhage; +Cc: kvm, linux-kernel

On 04/13/2011 06:44 PM, Nelson Elhage wrote:
> Currently, setting a large (i.e. negative) base address for %cs does not work on
> a 64-bit host. The "JOS" teaching operating system, used by MIT and other
> universities, relies on such segments while bootstrapping its way to full
> virtual memory management.

Applied, thanks.

-- 
error compiling committee.c: too many arguments to function



* Re: [PATCH] KVM: emulator: Use linearize() when fetching instructions.
  2011-04-15  3:27   ` [PATCH] KVM: emulator: Use linearize() when fetching instructions Nelson Elhage
@ 2011-04-17 12:26     ` Avi Kivity
  0 siblings, 0 replies; 5+ messages in thread
From: Avi Kivity @ 2011-04-17 12:26 UTC (permalink / raw)
  To: Nelson Elhage; +Cc: kvm, linux-kernel

On 04/15/2011 06:27 AM, Nelson Elhage wrote:
> This means that the truncation behavior in linearize() grows one additional
> piece of complexity: when fetching, truncation depends on the execution mode
> rather than on the current address size.
>
> diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
> index a5f63d4..d3d43a7 100644
> --- a/arch/x86/kvm/emulate.c
> +++ b/arch/x86/kvm/emulate.c
> @@ -542,7 +542,7 @@ static int emulate_nm(struct x86_emulate_ctxt *ctxt)
>
>   static int linearize(struct x86_emulate_ctxt *ctxt,
>   		     struct segmented_address addr,
> -		     unsigned size, bool write,
> +		     unsigned size, bool write, bool fetch,

Calls to functions with strings of bool arguments are confusing.  Please 
make this __linearize, and introduce a new linearize() which doesn't 
have a fetch argument.

>   		ulong *linear)
>   {
>   	struct decode_cache *c = &ctxt->decode;
> @@ -602,7 +602,7 @@ static int linearize(struct x86_emulate_ctxt *ctxt,
>   		}
>   		break;
>   	}

linearize() will currently fault on an unreadable code segment.  Need to 
avoid that on instruction fetches.

> -	if (c->ad_bytes != 8)
> +	if (fetch ? ctxt->mode != X86EMUL_MODE_PROT64 : c->ad_bytes != 8)
>   		la &= (u32)-1;
>   	*linear = la;
>   	return X86EMUL_CONTINUE;
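
Here is a stand-alone model of the suggested split: __linearize() keeps the fetch flag and is where the fetch-specific handling (mode-based truncation, not rejecting an execute-only code segment) would live, while linearize() stays fetch-free for data accesses. Types, field names, and the elided checks are simplified stand-ins for illustration, not the kernel's actual definitions.

#include <stdbool.h>
#include <stdint.h>

#define X86EMUL_CONTINUE 0

enum emul_mode { MODE_REAL, MODE_PROT16, MODE_PROT32, MODE_PROT64 };

struct segmented_address { unsigned long ea; unsigned seg; };

struct emul_ctxt {
	enum emul_mode mode;
	int ad_bytes;			/* address size of the decoded instruction */
	unsigned long seg_base[6];	/* cached segment bases */
};

static int __linearize(struct emul_ctxt *ctxt, struct segmented_address addr,
		       unsigned size, bool write, bool fetch,
		       unsigned long *linear)
{
	unsigned long la = ctxt->seg_base[addr.seg] + addr.ea;

	/* Segment limit and permission checks belong here; per the review, a
	 * fetch from an execute-only code segment must not be rejected the
	 * way a data read would be. */
	(void)size; (void)write;

	/* Fetches truncate unless the guest is in 64-bit mode; data accesses
	 * keep keying off the address size, as before. */
	if (fetch ? ctxt->mode != MODE_PROT64 : ctxt->ad_bytes != 8)
		la &= (uint32_t)-1;

	*linear = la;
	return X86EMUL_CONTINUE;
}

/* Data accesses keep a fetch-free signature, avoiding the string of bools. */
static int linearize(struct emul_ctxt *ctxt, struct segmented_address addr,
		     unsigned size, bool write, unsigned long *linear)
{
	return __linearize(ctxt, addr, size, write, false, linear);
}

With such a wrapper, callers like segmented_read() and segmented_write() need not change; only the instruction-fetch path would call __linearize() directly with fetch set.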

-- 
error compiling committee.c: too many arguments to function



end of thread

Thread overview: 5+ messages
2011-04-13 15:44 [PATCH] KVM: emulator: Handle wraparound in (cs_base + offset) when fetching Nelson Elhage
2011-04-13 16:06 ` Avi Kivity
2011-04-15  3:27   ` [PATCH] KVM: emulator: Use linearize() when fetching instructions Nelson Elhage
2011-04-17 12:26     ` Avi Kivity
2011-04-17  9:44 ` [PATCH] KVM: emulator: Handle wraparound in (cs_base + offset) when fetching Avi Kivity
